Tuesday, January 10, 2012

Sqoop export and import commands

Sqoop Import Examples:
Sqoop Import: imports data from a relational database management system (RDBMS) such as MySQL or Oracle into the Hadoop Distributed File System (HDFS) and its subprojects (Hive, HBase).


Import the data (MySQL table) to HBase:

Case 1: The table has a primary key and you want to import all columns of the MySQL table into the HBase table.

$ bin/sqoop import --connect jdbc:mysql://localhost/db1 --username root --password root --table tableName --hbase-table hbase_tableName  --column-family hbase_table_col1 --hbase-create-table

Case 2: The table has a primary key and you want to import only a few columns of the MySQL table into the HBase table.

$ bin/sqoop import --connect jdbc:mysql://localhost/db1 --username root --password root --table tableName --hbase-table hbase_tableName --columns column1,column2 --column-family hbase_table_col1 --hbase-create-table

Note: The column list specified in the --columns attribute must include the primary key column.

Case 3: The table doesn't have a primary key, so choose one column as the HBase row key (--hbase-row-key) and import all columns of the MySQL table into the HBase table.

$ bin/sqoop import --connect jdbc:mysql://localhost/db1 --username root --password root --table tableName --hbase-table hbase_tableName --column-family hbase_table_col1 --hbase-row-key column1 --hbase-create-table

Case 4: The table doesn't have a primary key, so choose one column as the HBase row key (--hbase-row-key) and import only a few columns of the MySQL table into the HBase table.

$ bin/sqoop import --connect jdbc:mysql://localhost/db1 --username root --password root --table tableName --hbase-table hbase_tableName --columns column1,column2 --column-family hbase_table_col --hbase-row-key column1 --hbase-create-table 

Note: The column specified in the --hbase-row-key attribute must also appear in the --columns list. Otherwise the command executes successfully, but no records are inserted into HBase.


Note: The value of the primary key column, or of the column specified in the --hbase-row-key attribute, becomes the HBase row key. If the MySQL table doesn't have a primary key, or the column specified in --hbase-row-key doesn't contain unique values, some records will be lost.

Example: Consider a MySQL table test_table with two columns, name and address. The table test_table has neither a primary key nor a unique key column.

Records of test_table:
----------------
name    address
----------------
abc     123
sqw     345
abc     125
sdf     1234
aql     23dw
----------------


Run the following command to import test_table data into HBase:

$ bin/sqoop import --connect jdbc:mysql://localhost/db1 --username root --password root --table test_table --hbase-table hbase_test_table --column-family test_table_col1 --hbase-row-key name --hbase-create-table

Only 4 records are visible in the HBase table instead of 5. In the example above, two rows have the same value 'abc' in the name column, and this column is used as the HBase row key. When the first record with name 'abc' arrives, it is inserted into the HBase table. When the next record with the same name 'abc' arrives, it overwrites the previously inserted row, because both records map to the same row key.

The same problem also occurs if the table has a composite primary key, because only one column of the composite key is used as the HBase row key.
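If no single column is unique, one workaround (a sketch, assuming a Sqoop release that supports composite row keys, such as 1.4.4 or later) is to pass a comma-separated list of columns to --hbase-row-key so that the combination of values forms the row key:

$ bin/sqoop import --connect jdbc:mysql://localhost/db1 --username root --password root --table test_table --hbase-table hbase_test_table --column-family test_table_col1 --hbase-row-key name,address --hbase-create-table

Rows are then overwritten only when the whole combination of listed values repeats.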

Import the data (MySQL table) to Hive

Case 1: Import a MySQL table into Hive when the table has a primary key.

$ bin/sqoop-import  --connect jdbc:mysql://localhost:3306/db1 -username root -password password --table tableName  --hive-table tableName --create-hive-table --hive-import --hive-home path/to/hive_home

Case 2: Import a MySQL table into Hive when the table doesn't have a primary key.

$ bin/sqoop-import  --connect jdbc:mysql://localhost:3306/db1 -username root -password password --table tableName  --hive-table tableName --create-hive-table --hive-import --hive-home path/to/hive_home --split-by column_name

or

$ bin/sqoop-import  --connect jdbc:mysql://localhost:3306/db1 -username root -password password --table tableName  --hive-table tableName --create-hive-table --hive-import --hive-home path/to/hive_home -m 1
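After the import completes, you can verify the Hive table from the command line (assuming the hive client is on your PATH and the same table name as above):

$ hive -e 'SELECT * FROM tableName LIMIT 5;'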



Import the data (MySQL table) to HDFS


Case 1: Import a MySQL table into HDFS when the table has a primary key.

$ bin/sqoop import -connect jdbc:mysql://localhost:3306/db1 -username root -password password --table tableName --target-dir /user/ankit/tableName

Case 2: Import a MySQL table into HDFS when the table doesn't have a primary key.

$ bin/sqoop import -connect jdbc:mysql://localhost:3306/db1 -username root -password password --table tableName --target-dir /user/ankit/tableName  -m 1
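Once the job finishes, the imported rows land as part files under the target directory. You can inspect them with standard HDFS commands (using the target directory from the command above):

$ hadoop fs -ls /user/ankit/tableName
$ hadoop fs -cat /user/ankit/tableName/part-m-00000 | head

By default the fields are comma-separated; pass --fields-terminated-by on the import command if you need a different delimiter.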



Sqoop Export Examples:

Sqoop Export: exports data from HDFS and its subprojects (Hive, HBase) back into an RDBMS.

Export Hive table back to an RDBMS:

By default, Hive stores data using ^A (Ctrl-A) as the field delimiter and \n as the row delimiter.

$ bin/sqoop export --connect jdbc:mysql://localhost/test_db --table tableName  --export-dir /user/hive/warehouse/tableName --username root --password password -m 1 --input-fields-terminated-by '\001'

where '\001' is the octal representation of ^A.
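Note that sqoop export does not create the target table; it must already exist in the RDBMS with a compatible schema. A minimal sketch (the column definitions here are placeholders, not taken from the example above):

mysql> CREATE TABLE tableName (col1 VARCHAR(100), col2 INT);

The number, order and types of the columns should match the fields in the exported files, otherwise the export mappers fail with parse errors.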


Comments:

Raghu Nittala said...

Hi,

Thanks for the article. When I tried a Sqoop import to Hive, everything went well and the import was successful. But when I run SHOW TABLES in Hive, I am not able to see the tables. I can see the data in HDFS. How can I access the HDFS data using HiveQL?

Ankit Jain said...

Hi Raghu,

Create an external table in Hive that points at the imported data:

Example: -
create external table TableName (col_1 datatype, col_2 datatype) LOCATION 'path/to/table/in/hdfs';
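If the data was imported with Sqoop's default text delimiters, the external table also needs a matching row format (a sketch; the column names and path are placeholders):

create external table TableName (col_1 string, col_2 int) ROW FORMAT DELIMITED FIELDS TERMINATED BY ',' LOCATION '/path/to/table/in/hdfs';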

Unknown said...

Hi Ankit.

Thanks for this blog,

When I tried the sqoop import with this command:

sqoop import --connect jdbc:mysql://localhost:3306/Demo --username root --password root --table Class --target-dir /HADOOP/hadoop-0.20.2

I got this error. Why is that, when I have given the correct user name and password for the MySQL user (the user is root and the password is also root, verified with a connection test)?

ERROR manager.SqlManager: Error executing statement: java.sql.SQLException: Access denied for user 'root'@'localhost' (using password: YES)
java.sql.SQLException: Access denied for user 'root'@'localhost' (using password: YES)


Please suggest

Ankit Jain said...

Hi Yogesh,

Please grant the proper privileges to the root user.

Open the mysql console and run the following commands:
mysql> CREATE USER 'root'@'localhost' IDENTIFIED BY 'root';
mysql> GRANT ALL PRIVILEGES ON *.* TO 'root'@'localhost' WITH GRANT OPTION;
mysql> FLUSH PRIVILEGES;

This issue is related to MySQL, not Sqoop.
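You can confirm the grants took effect before re-running the import (a quick check):

mysql> SHOW GRANTS FOR 'root'@'localhost';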

Unknown said...

Hello Ankit,

Thank you very much, It worked. Thanks a lot.

One more issue I am facing is when I import data from the RDBMS to Hive (RDBMS to HDFS and HBase are working well).


I used this command

sqoop import --connect jdbc:mysql://localhost:3306/Demo --username sqoop1 --password SQOOP1 -table newone --hive-table newhive --create-hive-table --hive-import --target-dir /user/hive/warehouse/neww


It shows this error only with Hive:

ERROR tool.ImportTool: Encountered IOException running import job: java.io.IOException: Cannot run program "hive": error=2, No such file or directory

Please suggest why this is so. You have also mentioned

hive-home path/to/hive_home (what does it stand for?)

Ankit Jain said...

hive-home path/to/hive_home means the path to your Hive installation directory:

Eg: /home/$USER/Hive-0.7

Unknown said...

Thanks a million Ankit for such a great blog. It worked on Ubuntu. :-)

Thank you very much
please keep updating your blog to help all :-)

Unknown said...

Hi Ankit,

An urgent request, and a bit of an emergency as well: I am new to HBase. Could you please tell me how to export the entire HBase data into a MySQL DB?

It will be a great help if you can give a quick reply.

Thanks in advance
Mohan

Anonymous said...

Thanks Ankit. Really works.

Amit Patel said...

$ bin/sqoop import -connect jdbc:mysql://localhost:3306/ChleonCloudVault_VER_2 -username root -password root --table City --target-dir /home/hdfs/hadoop/

ERROR

Exception in thread "main" java.lang.NoSuchMethodError: org.apache.hadoop.conf.Configuration.getInstances(Ljava/lang/String;Ljava/lang/Class;)Ljava/util/List;
at com.cloudera.sqoop.tool.SqoopTool.loadPlugins(SqoopTool.java:141)
at com.cloudera.sqoop.Sqoop.runTool(Sqoop.java:211)
at com.cloudera.sqoop.Sqoop.runTool(Sqoop.java:230)
at com.cloudera.sqoop.Sqoop.main(Sqoop.java:239)

Amit Patel said...

$ bin/sqoop-import --connect jdbc:mysql://localhost:3306/ChleonCloudVault_VER_2 -username root -password root --table City --hive-table city --create-hive-table --hive-import --hive-home /home/hdfs/hive

ERROR

Exception in thread "main" java.lang.NoSuchMethodError: org.apache.hadoop.conf.Configuration.getInstances(Ljava/lang/String;Ljava/lang/Class;)Ljava/util/List;
at com.cloudera.sqoop.tool.SqoopTool.loadPlugins(SqoopTool.java:141)
at com.cloudera.sqoop.Sqoop.runTool(Sqoop.java:211)
at com.cloudera.sqoop.Sqoop.runTool(Sqoop.java:230)
at com.cloudera.sqoop.Sqoop.main(Sqoop.java:239)

pravss4u said...

sqoop export --connect jdbc:mysql://ht-vhacom4.virtusa.com/test --username irm --password irm --table sqoop_sample --export-dir pravallika_sqoop/sqoop.txt --input-fields-terminated-by '|'


When I'm exporting from HDFS to the MySQL DB, I'm getting this exception:


2/10/11 17:47:41 INFO mapred.JobClient: Task Id : attempt_201210081105_0380_m_000000_0, Status : FAILED
java.util.NoSuchElementException
at java.util.AbstractList$Itr.next(AbstractList.java:350)
at sqoop_sample.__loadFromFields(sqoop_sample.java:621)
at sqoop_sample.parse(sqoop_sample.java:575)
at org.apache.sqoop.mapreduce.TextExportMapper.map(TextExportMapper.java:77)
at org.apache.sqoop.mapreduce.TextExportMapper.map(TextExportMapper.java:36)
at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:144)
at org.apache.sqoop.mapreduce.AutoProgressMapper.run(AutoProgressMapper.java:182)
at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:647)
at org.apache.hadoop.mapred.MapTask.run(MapTask.java:323)
at org.apache.hadoop.mapred.Child$4.run(Child.java:270)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:396)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1177)
at org.apache.hadoop.mapred.Child.main(Child.java:264)

12/10/11 17:47:49 INFO mapred.JobClient: Task Id : attempt_201210081105_0380_m_000000_1, Status : FAILED
12/10/11 17:47:56 INFO mapred.JobClient: Task Id : attempt_201210081105_0380_m_000000_2, Status : FAILED
(the same java.util.NoSuchElementException stack trace repeats for each retry attempt)


pravss4u said...

My HDFS file contents look like this:

Wed, 26 Sep 2012 05:45:47 +0000|SamiUsamaHameed|313864835|313864835|Usama Hameed Sami|null|250833525861908480|250833525861908480|en|recent|http://a0.twimg.com/profile_images/2187701082/Photo0093_normal.jpg|https://si0.twimg.com/profile_images/2187701082/Photo0093_normal.jpg|<a href="http://twitter.com/">web</a>|President #Obama saying antiIslamic blasphemous film can't be banned under 1st amendment of US Constitution is seriously sick N humiliating.|null|0|0|null
I have all the columns in my MySQL DB; I don't know why I'm not able to export.

please help anyone

thanks in advance
pravallika

Ankit Jain said...

Hi,

This error "2/10/11 17:47:41 INFO mapred.JobClient: Task Id : attempt_201210081105_0380_m_000000_0, Status : FAILED" usually occurred in following cases:

1. Field delimiter is not correct.
2. Does not have expected number of columns in exported database.
3. Check the datatype of columns.
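A quick way to see which delimiter the exported files actually use is to dump the first record and look at the raw bytes (a sketch, using the export path from the command above):

$ hadoop fs -cat pravallika_sqoop/sqoop.txt | head -1 | od -c

Each field separator then shows up explicitly (for example | or \001), so you can pass the matching value to --input-fields-terminated-by.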


Ankit Jain said...

Hi Amit,

I think your Sqoop and Hadoop versions are not compatible.

Unknown said...

Hi,

I am trying to run the Hive Web Interface (HWI) but it's showing a FATAL error.

I have used this command

hive --service hwi

but it shows..

hadoop@padhu-System-Product-Name:/opt/hive$ bin/hive --service hwi
13/01/24 22:33:37 INFO hwi.HWIServer: HWI is starting up
13/01/24 22:33:37 FATAL hwi.HWIServer: HWI WAR file not found at /opt/hive/lib/hive-hwi-0.8.1.war

although my .war file is present at that exact location.


Please suggest how to get rid of this error and access the HWI interface.

NOTE: this is what I have in my hive-site.xml file under the hive/conf directory:



<property>
  <name>hive.hwi.listen.host</name>
  <value>0.0.0.0</value>
  <description>This is the host address the Hive Web Interface will listen on</description>
</property>

<property>
  <name>hive.hwi.listen.port</name>
  <value>9999</value>
  <description>This is the port the Hive Web Interface will listen on</description>
</property>

<property>
  <name>hive.hwi.war.file</name>
  <value>/opt/hive/lib/hive-hwi-0.8.1.war</value>
  <description>This is the WAR file with the jsp content for Hive Web Interface</description>
</property>


Neeraj Kumar said...

How do I export HBase table data into a MySQL table?

Baskar said...

Hi guys, I need to EXPORT a CSV file from an HDFS directory to an Oracle DB. Please help me out with this:
sqoop export --options-file oracle_db_connect.txt --table stitem_sls_d --export-dir /user/root/testdata/sqoop_test/ --input-fields-terminated-by ','

Anonymous said...

Hi,

Please confirm whether we can import a database dump into HDFS/Hive/HBase. My concern is that I want to import DB objects like triggers, indexes and so on, not just tables and views.

Also, I want to know how to export the data without first creating the table in the DB.

Anonymous said...

Ankit, can you please respond to my previous query about importing the other DB objects with Sqoop?

Also, about restoring the DB structure or schema when I migrate to a different DB.

Unknown said...

Hi Ankit,

My requirement is: I have 3 CSV files loaded into a table.
These 3 CSV files map to 3 columns, like one.csv in the first column, two.csv in the second column
and three.csv in the third column. This is done with JDBC code in Java.

From this table (the database is MySQL) I am now trying to put these 3 CSV files into HDFS using Sqoop,

and I want to know in what format these files will be saved in HDFS.

I am new to HDFS. Please suggest, with the help of code.

Regards,
Kishore.E

Kani said...

Ankit

When I export a file from HDFS to MySQL, the entire row goes into a single column. I want each HDFS field to go into a separate MySQL column, but it is not working. Also, what is the octal representation of the line delimiter \n?

Thanks,

Anonymous said...

Hi, Ankit.
When I import a big table which has 100 million (10000W) rows of data into Hive (sqoop import --connect jdbc:oracle:thin:@192.168.1.111:1521:ORCL --username USER1 --password user1 --table TEST_10000W --hive-import -m 1), I came across the following problem:

13/05/21 09:34:01 INFO mapreduce.ImportJobBase: Retrieved 100000000 records.
13/05/21 09:49:28 ERROR manager.SqlManager: Error executing statement: java.sql.SQLException: Io exception: Connection timed out
java.sql.SQLException: Io exception: Connection timed out
at oracle.jdbc.driver.DatabaseError.throwSqlException(DatabaseError.java:113)
at oracle.jdbc.driver.DatabaseError.throwSqlException(DatabaseError.java:147)
at oracle.jdbc.driver.DatabaseError.throwSqlException(DatabaseError.java:257)
at oracle.jdbc.driver.T4CConnection.doRollback(T4CConnection.java:593)
at oracle.jdbc.driver.PhysicalConnection.rollback(PhysicalConnection.java:1214)
at org.apache.sqoop.manager.OracleManager$ConnCache.getConnection(OracleManager.java:194)
at org.apache.sqoop.manager.OracleManager.makeConnection(OracleManager.java:288)
at org.apache.sqoop.manager.GenericJdbcManager.getConnection(GenericJdbcManager.java:52)
at org.apache.sqoop.manager.SqlManager.execute(SqlManager.java:525)
at org.apache.sqoop.manager.SqlManager.execute(SqlManager.java:548)
at org.apache.sqoop.manager.SqlManager.getColumnTypesForRawQuery(SqlManager.java:191)
at org.apache.sqoop.manager.SqlManager.getColumnTypes(SqlManager.java:175)
at org.apache.sqoop.hive.TableDefWriter.getCreateTableStmt(TableDefWriter.java:126)
at org.apache.sqoop.hive.HiveImport.importTable(HiveImport.java:174)
at org.apache.sqoop.tool.ImportTool.importTable(ImportTool.java:415)
at org.apache.sqoop.tool.ImportTool.run(ImportTool.java:476)
at org.apache.sqoop.Sqoop.run(Sqoop.java:145)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
at org.apache.sqoop.Sqoop.runSqoop(Sqoop.java:183)
at org.apache.sqoop.Sqoop.runTool(Sqoop.java:222)
at org.apache.sqoop.Sqoop.runTool(Sqoop.java:231)
at org.apache.sqoop.Sqoop.main(Sqoop.java:240)
13/05/21 09:49:28 INFO manager.OracleManager: Time zone has been set to GMT
13/05/21 09:49:28 ERROR sqoop.Sqoop: Got exception running Sqoop: java.lang.NullPointerException
java.lang.NullPointerException
at org.apache.sqoop.hive.TableDefWriter.getCreateTableStmt(TableDefWriter.java:171)
at org.apache.sqoop.hive.HiveImport.importTable(HiveImport.java:174)
at org.apache.sqoop.tool.ImportTool.importTable(ImportTool.java:415)
at org.apache.sqoop.tool.ImportTool.run(ImportTool.java:476)
at org.apache.sqoop.Sqoop.run(Sqoop.java:145)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
at org.apache.sqoop.Sqoop.runSqoop(Sqoop.java:183)
at org.apache.sqoop.Sqoop.runTool(Sqoop.java:222)
at org.apache.sqoop.Sqoop.runTool(Sqoop.java:231)
at org.apache.sqoop.Sqoop.main(Sqoop.java:240)

Thanks!
PS: It works OK when the table only has 1 million (100W) rows.

Anonymous said...

The problem "Connection timed out" is already resolved .Thanks all the same .It's because that the connection is invalid after a long time unused.
Solutions:Check the connection cached before using it,if invalid,make a new one .

super said...

In my case, I'm stuck on a Hive import issue.
namenode = missasia
datanode = amber

I got an error like this:

Note: Recompile with -Xlint:deprecation for details.
13/08/12 01:44:14 INFO orm.CompilationManager: Writing jar file: /tmp/sqoop-root/compile/4adbd31e954ab64f0a94c62330020199/job_info.jar
13/08/12 01:44:14 INFO manager.DirectMySQLManager: Beginning mysqldump fast path import
13/08/12 01:44:14 INFO mapreduce.ImportJobBase: Beginning import of job_info
13/08/12 01:44:16 INFO mapred.JobClient: Running job: job_201308120059_0010
13/08/12 01:44:17 INFO mapred.JobClient: map 0% reduce 0%
13/08/12 01:44:24 INFO mapred.JobClient: map 100% reduce 0%
...............................
13/08/12 01:44:25 INFO mapred.JobClient: Map output records=15
13/08/12 01:44:25 INFO mapred.JobClient: SPLIT_RAW_BYTES=87
13/08/12 01:44:25 INFO mapreduce.ImportJobBase: Transferred 1.3633 KB in 10.5555 seconds (132.2539 bytes/sec)
13/08/12 01:44:25 INFO mapreduce.ImportJobBase: Retrieved 15 records.
13/08/12 01:44:25 INFO manager.SqlManager: Executing SQL statement: SELECT t.* FROM `job_info` AS t LIMIT 1
13/08/12 01:44:25 WARN hive.TableDefWriter: Column reg_date had to be cast to a less precise type in Hive
13/08/12 01:44:25 WARN hive.TableDefWriter: Column end_date had to be cast to a less precise type in Hive
13/08/12 01:44:25 INFO hive.HiveImport: Removing temporary files from import process: hdfs://missasia:9000/user/hive/warehouse/_logs
13/08/12 01:44:25 INFO hive.HiveImport: Loading uploaded data into Hive
13/08/12 01:44:26 INFO hive.HiveImport: 13/08/12 01:44:26 WARN conf.Configuration: DEPRECATED: hadoop-site.xml found in the classpath. Usage of hadoop-site.xml is deprecated. Instead use core-site.xml, mapred-site.xml and hdfs-site.xml to override properties of core-default.xml, mapred-default.xml and hdfs-default.xml respectively
13/08/12 01:44:27 INFO hive.HiveImport:
13/08/12 01:44:27 INFO hive.HiveImport: Logging initialized using configuration in file:/usr/local/hive/conf/hive-log4j.properties
13/08/12 01:44:27 INFO hive.HiveImport: Hive history file=/tmp/root/hive_job_log_root_18919@missasia_201308120144_1024128943.txt
13/08/12 01:44:40 INFO hive.HiveImport: FAILED: Error in metadata: MetaException(message:Got exception: java.net.ConnectException Call to amber/211.224.128.234:9000 failed on connection exception: java.net.ConnectException: Connection refused)
13/08/12 01:44:40 INFO hive.HiveImport: FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.DDLTask
13/08/12 01:44:40 ERROR tool.ImportTool: Encountered IOException running import job: java.io.IOException: Hive exited with status 1
at org.apache.sqoop.hive.HiveImport.executeExternalHiveScript(HiveImport.java:385)
at org.apache.sqoop.hive.HiveImport.executeScript(HiveImport.java:335)
at org.apache.sqoop.hive.HiveImport.importTable(HiveImport.java:239)
at org.apache.sqoop.tool.ImportTool.importTable(ImportTool.java:425)
at org.apache.sqoop.tool.ImportTool.run(ImportTool.java:502)
at org.apache.sqoop.Sqoop.run(Sqoop.java:145)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:65)
at org.apache.sqoop.Sqoop.runSqoop(Sqoop.java:181)
at org.apache.sqoop.Sqoop.runTool(Sqoop.java:220)
at org.apache.sqoop.Sqoop.runTool(Sqoop.java:229)
at org.apache.sqoop.Sqoop.main(Sqoop.java:238)

I can't understand why it is trying to access amber:9000.
I have made sure hive-site.xml sets fs.default.name to missasia:9000
and core-site.xml sets fs.default.name -> missasia:9000.

So why does it try to access amber:9000?

Unknown said...

I have a table in MySQL and I need to import that table into HBase.
The MySQL database name is testhadoop and the table name is music.

I have used this command in the sqoop shell:

sqoop import --connect jdbc:mysql://localhost/testhadoop --username root --password ishva --table music --hbase-table hbase_music --column-family hbase_table_col1 --hbase-create-table

I got the following errors:

Exception has occurred during processing command
Exception: org.codehaus.groovy.control.MultipleCompilationErrorsException Message: startup failed:
groovysh_parse: 1: expecting EOF, found 'import' @ line 1, column 11.
bin/sqoop import --connect jdbc:mysql://localhost/testhadoop --username root --password ishva --table music --hbase-table hbase_music --column-family hbase_table_col1 --hbase-create-table
^

1 error

Can you please help me out?

Thanks in advance :)

Bigdata Analytics said...

Why doesn't HBase support Sqoop export?

Unknown said...

Hi Ankit,
I am running the Sqoop import below from Oracle into Hive.
I always get the error below:
mkdirs failed for /user/hive/warehouse/tt, error 13

Please help!
Thanks,
Rizwan


sqoop import --connect jdbc:oracle:thin:@server:port/SID -username USER -password PWD --table TT --target-dir /user/bdsl/data/hive/ttt --hive-table tt --create-hive-table --hive-import --hive-home /opt/mapr/hive/hive-0.11 -m 1
13/10/30 17:46:21 INFO tool.BaseSqoopTool: Using Hive-specific delimiters for output. You can override
13/10/30 17:46:21 INFO tool.BaseSqoopTool: delimiters with --fields-terminated-by, etc.
13/10/30 17:46:21 INFO manager.SqlManager: Using default fetchSize of 1000
13/10/30 17:46:21 INFO tool.CodeGenTool: Beginning code generation
13/10/30 17:46:23 INFO manager.OracleManager: Time zone has been set to GMT
13/10/30 17:46:23 INFO manager.SqlManager: Executing SQL statement: SELECT t.* FROM QSN_APP.TT t WHERE 1=0

13/10/30 17:46:26 INFO mapred.JobClient: Running job: job_201310231901_28968
13/10/30 17:46:27 INFO mapred.JobClient: map 0% reduce 0%
13/10/30 17:46:35 INFO mapred.JobClient: map 100% reduce 0%
13/10/30 17:46:35 INFO mapred.JobClient: Job job_201310231901_28968 completed successfully
13/10/30 17:46:35 INFO mapred.JobClient: Counters: 16
13/10/30 17:46:35 INFO mapred.JobClient: Job Counters
13/10/30 17:46:35 INFO mapred.JobClient: Aggregate execution time of mappers(ms)=5054
13/10/30 17:46:35 INFO mapred.JobClient: Total time spent by all reduces waiting after reserving slots (ms)=0
13/10/30 17:46:35 INFO mapred.JobClient: Total time spent by all maps waiting after reserving slots (ms)=0
13/10/30 17:46:35 INFO mapred.JobClient: Launched map tasks=1
13/10/30 17:46:35 INFO mapred.JobClient: Aggregate execution time of reducers(ms)=0
13/10/30 17:46:35 INFO mapred.JobClient: FileSystemCounters
13/10/30 17:46:35 INFO mapred.JobClient: MAPRFS_BYTES_READ=87
13/10/30 17:46:35 INFO mapred.JobClient: MAPRFS_BYTES_WRITTEN=4
13/10/30 17:46:35 INFO mapred.JobClient: FILE_BYTES_WRITTEN=18483
13/10/30 17:46:35 INFO mapred.JobClient: Map-Reduce Framework
13/10/30 17:46:35 INFO mapred.JobClient: Map input records=1
13/10/30 17:46:35 INFO mapred.JobClient: PHYSICAL_MEMORY_BYTES=135069696
13/10/30 17:46:35 INFO mapred.JobClient: Spilled Records=0
13/10/30 17:46:35 INFO mapred.JobClient: CPU_MILLISECONDS=1360
13/10/30 17:46:35 INFO mapred.JobClient: VIRTUAL_MEMORY_BYTES=1516257280
13/10/30 17:46:35 INFO mapred.JobClient: Map output records=1
13/10/30 17:46:35 INFO mapred.JobClient: SPLIT_RAW_BYTES=87
13/10/30 17:46:35 INFO mapred.JobClient: GC time elapsed (ms)=0
13/10/30 17:46:35 INFO mapreduce.ImportJobBase: Transferred 0 bytes in 10.1723 seconds (0 bytes/sec)
13/10/30 17:46:35 INFO mapreduce.ImportJobBase: Retrieved 1 records.
13/10/30 17:46:35 INFO manager.OracleManager: Time zone has been set to GMT

13/10/30 17:46:42 INFO hive.HiveImport: 2013-10-30 17:46:42,2913 ERROR JniCommon fs/client/fileclient/cc/jni_common.cc:1636 Thread: 139658836190976 mkdirs failed for /user/hive/warehouse/tt, error 13
13/10/30 17:46:42 INFO hive.HiveImport: FAILED: Error in metadata: MetaException(message:Got exception: java.io.IOException Error: Permission denied(13), file: tt)
13/10/30 17:46:42 INFO hive.HiveImport: FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.DDLTask
13/10/30 17:46:42 ERROR tool.ImportTool: Encountered IOException running import job: java.io.IOException: Hive exited with status 1
at org.apache.sqoop.hive.HiveImport.executeExternalHiveScript(HiveImport.java:364)

at org.apache.sqoop.Sqoop.main(Sqoop.java:238)
at com.cloudera.sqoop.Sqoop.main(Sqoop.java:57)

Unknown said...

Hi Rizwan,
I am trying to export a CSV file from HDFS to MySQL and I am getting the following error. Can you please provide a solution for this?

sqoop export --driver com.mysql.jdbc.Driver --connect jdbc:mysql://localhost/jobportal --username hadoop --password hadoop --table studentinfo --export-dir /user/siva/jobportal --input-fields-terminated-by '\t' --verbose

13/12/12 14:40:22 INFO mapred.JobClient: Task Id : attempt_201312121250_0009_m_000000_0, Status : FAILED
java.io.IOException: Can't export data, please check task tracker logs
at org.apache.sqoop.mapreduce.TextExportMapper.map(TextExportMapper.java:112)
at org.apache.sqoop.mapreduce.TextExportMapper.map(TextExportMapper.java:39)
at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:144)
at org.apache.sqoop.mapreduce.AutoProgressMapper.run(AutoProgressMapper.java:64)
at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:764)
at org.apache.hadoop.mapred.MapTask.run(MapTask.java:370)
at org.apache.hadoop.mapred.Child$4.run(Child.java:255)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1121)


Ankit Jain said...

Hi Rizwan,
Please look into the TaskTracker logs to identify the root cause of the issue.




Prabhu Raja Singh said...

Hi Rizwan,

I have installed Sqoop on my machine and have the MySQL JDBC connector. But when I try to run sqoop help it throws the error below:

Warning: /root/hbase does not exist! HBase imports will fail.
Please set $HBASE_HOME to the root of your HBase installation.
Warning: /usr/lib/hcatalog does not exist! HCatalog jobs will fail.
Please set $HCAT_HOME to the root of your HCatalog installation.
/usr/lib/sqoop/bin/sqoop: line 101: /usr/bin/hadoop: No such file or directory

How do I fix it?

Unknown said...

Hi Friends,
Just wanted to share my experience with HBase export to MySQL using Sqoop. I tried it and it is not possible directly, since the HBase data that Sqoop would move to MySQL is stored as sequence files. So I had to use Hive as an intermediate step: convert the data into a delimited format, dump it into HDFS, and then export it to MySQL using Sqoop.
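A rough sketch of that route (the table name, column family and paths below are assumptions for illustration): map the HBase table into Hive with the HBase storage handler, dump it to a delimited directory, then export that directory.

hive> CREATE EXTERNAL TABLE hbase_music_view (rowkey string, column1 string) STORED BY 'org.apache.hadoop.hive.hbase.HBaseStorageHandler' WITH SERDEPROPERTIES ("hbase.columns.mapping" = ":key,hbase_table_col1:column1") TBLPROPERTIES ("hbase.table.name" = "hbase_music");
hive> INSERT OVERWRITE DIRECTORY '/tmp/hbase_music_export' SELECT * FROM hbase_music_view;

$ bin/sqoop export --connect jdbc:mysql://localhost/test_db --username root --password password --table music --export-dir /tmp/hbase_music_export --input-fields-terminated-by '\001' -m 1

INSERT OVERWRITE DIRECTORY writes with Hive's default ^A delimiter, hence the '\001' on the export command.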

Ankit Jain said...

Thanks Vaibhav for your input ..

Unknown said...

one-stop-shop!!!!!!!!!
Awesome blog !!!!!!!!

Anonymous said...

Hi Ankit. I have done the query part as well: "sqoop import --connect jdbc:mysql://localhost/db1 --username user1 --password xxxx --query 'SELECT t1.col1,t1.col2,t2.col1,t2.col2 FROM table1 t1 JOIN table2 t2 on (t1.col1=t2.col1) WHERE $CONDITIONS AND DATE_SUB(CURDATE(),INTERVAL 365 DAY) <= t1.col2' --split-by t1.col2 --hive-home /root/dev/hive-0.11.0 --target-dir imported_data". The problem is that this only imports into HDFS, but I want to import into a Hive table. Can you please help me out with it?

Arun said...

Can we append data into the table using sqoop-import and sqoop-export?

Rio said...

Hi Ankit,

I need your help. I am unable to use Sqoop to load data from HDFS (data of a Hive table) into MySQL. I get the error message: Caused by: java.lang.NumberFormatException: For input string: "1 a"

sqoop export --connect jdbc:mysql://wh1.abcxyz.net/test --table foo --export-dir /user/root/test --username rio --password xyz% -m 1 --input-fields-terminated-by '\001'

java.io.IOException: Can't export data, please check task tracker logs
at org.apache.sqoop.mapreduce.TextExportMapper.map(TextExportMapper.java:112)
at org.apache.sqoop.mapreduce.TextExportMapper.map(TextExportMapper.java:39)
at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:140)
at org.apache.sqoop.mapreduce.AutoProgressMapper.run(AutoProgressMapper.java:64)
at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:672)
at org.apache.hadoop.mapred.MapTask.run(MapTask.java:330)
at org.apache.hadoop.mapred.Child$4.run(Child.java:268)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1408)
at org.apache.hadoop.mapred.Child.main(Child.java:262)
Caused by: java.lang.NumberFormatException: For input string: "1 a"
at java.lang.NumberFormatException.forInputString(NumberFormatException.java:65)
at java.lang.Integer.parseInt(Integer.java:492)
at java

$ hadoop fs -cat test/*

1 a
2 b

Thanks,
Rio

Naveen K said...

Hi Friends,

I am facing some issues while exporting from one Hive table to multiple destination tables in SQL Server. If anyone knows the answer to the above, please let me know.

Thanks
Naveen

Anonymous said...

Hi Ankit ,
The split is not getting distributed across all the mappers. The problem is that
my primary key is a varchar which is a combination of numbers and characters. Is there any way we can separate the numbers from the characters and use the numbers for splitting?

Regards ,
VInay

Anonymous said...

How can we export HBase table data into a CSV file?

Unknown said...

I am having the same issue. I want to export an HBase table to MySQL or SQL Server. It appears that there is no direct way and I would need to export HBase into a flat file and then import that flat file into the RDBMS. Does anyone have any suggestions on how I can do that, what the Sqoop commands would be, etc.? Your help is appreciated.

Unknown said...

Hi, I am getting an error while importing an Oracle table into Hive.
The error is "please initialize the log4j system property". Below is the command I'm using. Can someone help?

sqoop import -connect jdbc:oracle:thin:system/abc123@192.168.185.1:1521:xe --username system -p --table system.employee --split-by empid --hive-table mytab --created-hive-table --hive-/user/hive/warehouse -m 1

www.sqlauthentic.blogspot.com said...

Hi Ankit,

I have executed the below Sqoop command in the HDInsight remote CLI:

sqoop.cmd import --connect "jdbc:sqlserver://xxxxxx.database.windows.net;username=xxxxxx@xxxxxx;password=xxxxx;database=xxxx" --table --target-dir /example/data/table2

The above command is failing with the below error:

"Warning: ZOOKEEPER_HOME does not exist Accumulo imports will fail. Please set ZOOKEEPER_HOME to the root of your Zookeeper installation."

Please set ZOOKEEPER_HOME to the root of your Zookeeper installation.
14/12/22 13:21:28 INFO sqoop.Sqoop: Running Sqoop version: 1.4.4.2.1.9.0-2196
14/12/22 13:21:28 ERROR tool.BaseSqoopTool: Error parsing arguments for import:
14/12/22 13:21:28 ERROR tool.BaseSqoopTool: Unrecognized argument: û-connect
14/12/22 13:21:28 ERROR tool.BaseSqoopTool: Unrecognized argument: jdbc:sqlserv
r://w9iynmfma2.database.windows.net;username=SQLAdmin@w9iynmfma2;password=Sqlpe
sona0912@;database=persona
14/12/22 13:21:28 ERROR tool.BaseSqoopTool: Unrecognized argument: --table
14/12/22 13:21:28 ERROR tool.BaseSqoopTool: Unrecognized argument: persona
14/12/22 13:21:28 ERROR tool.BaseSqoopTool: Unrecognized argument: --target-dir
14/12/22 13:21:28 ERROR tool.BaseSqoopTool: Unrecognized argument: /example/dat
/table2
14/12/22 13:21:28 ERROR tool.BaseSqoopTool: Unrecognized argument: --as-textfil
14/12/22 13:21:28 ERROR tool.BaseSqoopTool: Unrecognized argument: -m
14/12/22 13:21:28 ERROR tool.BaseSqoopTool: Unrecognized argument: 1

Can you please help.

Thanks in Advance.

Anonymous said...

Please can someone help? I'm trying to give privileges to root for a Sqoop import from a MySQL DB to HDFS:

GRANT ALL PRIVILEGES ON *.* TO 'root'@'localhost' WITH GRANT OPTION;

It gives the error below:

ERROR 1045 (28000): Access denied for user ''@'localhost' (using password: NO)

Anonymous said...

During incremental import, is there a way to specify a different target directory? I am not able to use the same target dir for every incremental import, as it errors out saying the target dir already exists.
So should I change the target directory path every time I import, or is there another way to do this?

Anonymous said...

Hi Ankit ,
Nice blog.
When I tried to import to HDFS using Sqoop, I got this error:
Error reading from database: java.sql.SQLException: Streaming result set com.mysql.jdbc.RowDataDynamic@a299ee7 is still active. No statements may be issued when any streaming result sets are open and in use on a given connection. Ensure that you have called .close() on any active streaming result sets before attempting more queries.

Hadoop said...

Check your mysql-connector JAR; it should be the latest one. You can get it from mysql.com.
Add this JAR to /usr/lib/sqoop/lib if you are using the Cloudera distribution.
Restart the MySQL service and it should work.
If you still face issues, email me at bigdataans@gmail.com.

jeet said...

Hi Ankit,

I am able to execute my command from the terminal and my Teradata table imports successfully, but when I try to execute the same command from an Oozie workflow it throws the following error:

"ERROR org.apache.sqoop.tool.ImportTool - Encountered IOException running import job: java.io.IOException: Cannot run program "hive": error=2"

I have already added the hive-site.xml entry in my Sqoop action.
I know this blog is about Sqoop and not Oozie, but please let me know if you can give any suggestions on this.

Unknown said...

hi Ankit,
In order to use export command to get the data from S3 instead of HDFS, what are the changes that need to be done?

Unknown said...

How about the scenario of a Sqoop export from HDFS to Oracle where the table in Oracle has to be mixed case?
I created a schema and table in Oracle 11g to use with Sqoop export commands:
Schema Name - TestV100
Table Name - Xy100

I am now trying to load the table using a tab delimited file:
/usr/bin/sudo -u hdfs sqoop export --connect jdbc:oracle:thin:@oracle11:1521:orcl -username test -password password "/hdfs_nfs_mount/tmp/oracle/TestV100/Xy100.txt" --table "\"\"TestV100\".\"Xy100\"\"" --fields-terminated-by "\t" --input-null-string null -m 1

I get this error:
15/11/03 13:38:57 INFO manager.SqlManager: Executing SQL statement: SELECT t.* FROM "TestV100"."Xy100" t WHERE 1=0
15/11/03 13:38:57 ERROR sqoop.Sqoop: Got exception running Sqoop: java.lang.IllegalArgumentException: There is no column found in the target table "TestV100"."Xy100". Please ensure that your table name is correct.
java.lang.IllegalArgumentException: There is no column found in the target table "TestV100"."Xy100". Please ensure that your table name is correct.

This is the input file:
a 0 0
a 1 4
a 2 2

Oracle Version: 11.1.0.6.0
Sqoop Version: 1.4.5-cdh5.4.1
Ojdbc version: ojdbc6.jar


If I connect to the database directly (using SQL Developer) as the 'test' user and run this command, it works fine (it shows zero rows, basically):
SELECT t.* FROM "TestV100"."Xy100" t WHERE 1=0;

sameer bhand said...

Hi Ankit,
Nice blog.
I am trying to do a Sqoop export to Sybase IQ.
Have you ever worked with the Sybase database?

Anonymous said...

@unknown Give the database and table names all in capital letters. That should work.


Unknown said...

very good article for sqoop-import-to-hbase..thankyou..

Unknown said...

Getting an error while running a Sqoop import from Teradata. It runs fine when executed locally, but I get the error below on the Linux machine.

16/08/29 16:59:46 INFO mapreduce.Job: Task Id : attempt_1470640653996_0801_m_000000_0, Status : FAILED
Error: java.lang.NullPointerException
at com.teradata.tdgss.jtdgss.TdgssConfigApi.GetMechanisms(TdgssConfigApi.java:141)
at com.teradata.tdgss.jtdgss.TdgssManager.(TdgssManager.java:43)
at com.teradata.tdgss.jtdgss.TdgssManager.(TdgssManager.java:29)
at com.teradata.jdbc.jdbc.GenericTeraEncrypt.getTDgssVersion(GenericTeraEncrypt.java:640)
at com.teradata.jdbc.jdbc_4.parcel.ClientConfigParcel.toStream(ClientConfigParcel.java:54)
at com.teradata.jdbc.jdbc_4.io.TDPacket.addParcel(TDPacket.java:98)
at com.teradata.jdbc.jdbc.GenericLogonController.run(GenericLogonController.java:94)
at com.teradata.jdbc.jdbc_4.TDSession.(TDSession.java:207)
at com.teradata.jdbc.jdk6.JDK6_SQL_Connection.(JDK6_SQL_Connection.java:35)
at com.teradata.jdbc.jdk6.JDK6ConnectionFactory.constructSQLConnection(JDK6ConnectionFactory.java:25)
at com.teradata.jdbc.jdbc.ConnectionFactory.createConnection(ConnectionFactory.java:179)
at com.teradata.jdbc.jdbc.ConnectionFactory.createConnection(ConnectionFactory.java:169)
at com.teradata.jdbc.TeraDriver.doConnect(TeraDriver.java:234)
at com.teradata.jdbc.TeraDriver.connect(TeraDriver.java:160)
at java.sql.DriverManager.getConnection(DriverManager.java:571)
at java.sql.DriverManager.getConnection(DriverManager.java:215)
at org.apache.sqoop.mapreduce.db.DBConfiguration.getConnection(DBConfiguration.java:302)
at org.apache.sqoop.mapreduce.db.DBInputFormat.getConnection(DBInputFormat.java:213)
at org.apache.sqoop.mapreduce.db.DBInputFormat.setConf(DBInputFormat.java:165)
at org.apache.hadoop.util.ReflectionUtils.setConf(ReflectionUtils.java:73)
at org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:133)
at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:749)
at org.apache.hadoop.mapred.MapTask.run(MapTask.java:341)
at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:164)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1707)
at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:158)


Note:
I have already added the required JAR files to set up the Sqoop environment.
$ ls -ltr /var/lib/sqoop
-rwxr-xr-x 1 root root 988468 Aug 29 13:56 terajdbc4.jar
-rwxr-xr-x 1 root root 2455 Aug 29 13:56 tdgssconfig.jar

Unknown said...

Hi, I am getting an exception while using the sqoop import command:
sqoop import --connect jdbc:mysql://localhost/test --username root --password hadoop --table student -m 1

Exception in thread "main" java.lang.NoSuchMethodError: org.apache.hadoop.fs.FSOutputSummer.(Ljava/util/zip/Checksum;II)V
at org.apache.hadoop.hdfs.DFSOutputStream.(DFSOutputStream.java:1342)
at org.apache.hadoop.hdfs.DFSOutputStream.(DFSOutputStream.java:1371)

If anybody has any idea about this, please share it with me. It is a little bit urgent for me. I tried but I am not able to find the reason; any help is appreciated. Sqoop's list-tables and list-databases commands are running fine.

Hadoop said...

Hi Prasanna,
Can you verify that the JDK is installed on the node where you are running the job?
To run list-tables and list-databases with Sqoop you don't need the JDK,
so those might be working fine anyway.
For export and import, your JDK has to be configured correctly.
Let me know if this helps.


Anonymous said...

Thank you all..helpful blog

God's Grace said...

I receive an error when importing data from MySQL. The command:
sqoop import --connect jdbc:mysql://localhost:3306/db --username root --password root --table student --m 1

Error:
17/06/12 19:43:31 ERROR tool.ImportTool: Encountered IOException running import job: java.io.IOException: java.util.concurrent.ExecutionException: java.lang.NoSuchMethodError: org.apache.commons.io.FileUtils.isSymlink(Ljava/io/File;)Z
at org.apache.hadoop.mapred.LocalDistributedCacheManager.setup(LocalDistributedCacheManager.java:143)
at org.apache.hadoop.mapred.LocalJobRunner$Job.(LocalJobRunner.java:163)
at org.apache.hadoop.mapred.LocalJobRunner.submitJob(LocalJobRunner.java:731)
at org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:241)
at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1297)
at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1294)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:421)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1692)
at org.apache.hadoop.mapreduce.Job.submit(Job.java:1294)
at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:1315)
at org.apache.sqoop.mapreduce.ImportJobBase.doSubmitJob(ImportJobBase.java:196)
at org.apache.sqoop.mapreduce.ImportJobBase.runJob(ImportJobBase.java:169)
at org.apache.sqoop.mapreduce.ImportJobBase.runImport(ImportJobBase.java:266)
at org.apache.sqoop.manager.SqlManager.importTable(SqlManager.java:673)
at org.apache.sqoop.manager.MySQLManager.importTable(MySQLManager.java:118)
at org.apache.sqoop.tool.ImportTool.importTable(ImportTool.java:497)
at org.apache.sqoop.tool.ImportTool.run(ImportTool.java:605)
at org.apache.sqoop.Sqoop.run(Sqoop.java:143)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
at org.apache.sqoop.Sqoop.runSqoop(Sqoop.java:179)
at org.apache.sqoop.Sqoop.runTool(Sqoop.java:218)
at org.apache.sqoop.Sqoop.runTool(Sqoop.java:227)
at org.apache.sqoop.Sqoop.main(Sqoop.java:236)
Caused by: java.util.concurrent.ExecutionException: java.lang.NoSuchMethodError: org.apache.commons.io.FileUtils.isSymlink(Ljava/io/File;)Z
at java.util.concurrent.FutureTask.report(FutureTask.java:122)
at java.util.concurrent.FutureTask.get(FutureTask.java:188)
at org.apache.hadoop.mapred.LocalDistributedCacheManager.setup(LocalDistributedCacheManager.java:139)
... 23 more
Caused by: java.lang.NoSuchMethodError: org.apache.commons.io.FileUtils.isSymlink(Ljava/io/File;)Z
at org.apache.hadoop.yarn.util.FSDownload.changePermissions(FSDownload.java:394)
at org.apache.hadoop.yarn.util.FSDownload.call(FSDownload.java:363)
at org.apache.hadoop.yarn.util.FSDownload.call(FSDownload.java:60)
at java.util.concurrent.FutureTask.run(FutureTask.java:262)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)


Unknown said...

I tried the following command but unfortunately I am getting the errors shown below. Please help.

sqoop import --connect jdbc:mysql://localhost:3306/test --username test --password password --table bk

java.lang.Exception: java.lang.RuntimeException: java.lang.ClassNotFoundException: Class bk not found
at org.apache.hadoop.mapred.LocalJobRunner$Job.runTasks(LocalJobRunner.java:489)
at org.apache.hadoop.mapred.LocalJobRunner$Job.run(LocalJobRunner.java:549)
Caused by: java.lang.RuntimeException: java.lang.ClassNotFoundException: Class bk not found
at org.apache.hadoop.conf.Configuration.getClass(Configuration.java:2216)
at org.apache.sqoop.mapreduce.db.DBConfiguration.getInputClass(DBConfiguration.java:403)
at org.apache.sqoop.mapreduce.db.DataDrivenDBInputFormat.createDBRecordReader(DataDrivenDBInputFormat.java:237)
at org.apache.sqoop.mapreduce.db.DBInputFormat.createRecordReader(DBInputFormat.java:263)
at org.apache.hadoop.mapred.MapTask$NewTrackingRecordReader.(MapTask.java:515)
at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:758)
at org.apache.hadoop.mapred.MapTask.run(MapTask.java:341)
at org.apache.hadoop.mapred.LocalJobRunner$Job$MapTaskRunnable.run(LocalJobRunner.java:270)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Caused by: java.lang.ClassNotFoundException: Class bk not found
at org.apache.hadoop.conf.Configuration.getClassByName(Configuration.java:2122)
at org.apache.hadoop.conf.Configuration.getClass(Configuration.java:2214)
... 12 more
17/10/08 11:37:48 INFO mapreduce.Job: Job job_local526184001_0001 failed with state FAILED due to: NA
17/10/08 11:37:48 INFO mapreduce.Job: Counters: 0
17/10/08 11:37:48 WARN mapreduce.Counters: Group FileSystemCounters is deprecated. Use org.apache.hadoop.mapreduce.FileSystemCounter instead
17/10/08 11:37:48 INFO mapreduce.ImportJobBase: Transferred 0 bytes in 12.9375 seconds (0 bytes/sec)
17/10/08 11:37:48 WARN mapreduce.Counters: Group org.apache.hadoop.mapred.Task$Counter is deprecated. Use org.apache.hadoop.mapreduce.TaskCounter instead
17/10/08 11:37:48 INFO mapreduce.ImportJobBase: Retrieved 0 records.
17/10/08 11:37:48 ERROR tool.ImportTool: Error during import: Import job failed!

Unknown said...

Hi,

I am getting the following error while importing data from MySQL to Hive. I tried searching Google, but found nothing. Could you please tell me why this error occurs?

ERROR [main] tool.ImportTool: Imported Failed: No enum constant org.apache.hadoop.mapreduce.JobCounter.MB_MILLIS_MAPS


Unknown said...

sqoop import --connect jdbc:mysql://localhost:3306/sample --username root -P --table data2 --m 1 --target-dir /dhiren/flume/w_data/

ERROR tool.ImportTool: Encountered IOException running import job: java.io.IOException: java.util.concurrent.ExecutionException: java.lang.NoSuchMethodError: org.apache.commons.io.FileUtils.isSymlink(Ljava/io/File;)Z

How can I solve this error?

Thanks in advance.


Anonymous said...

For every Sqoop import, a new file is created in HDFS.
Question: Does every Sqoop EXPORT need to go into a NEW RDBMS table, or can the same (existing) table be appended/updated more than once?
Any code help please?

Question: Can selective data/columns from HDFS be exported into an RDBMS/SQL table?
My understanding is that it can be done provided the unselected column(s) (which are part of the RDBMS schema) can take NULL values.
Any code help please?

Anonymous said...

Hi Ankit, please let me know whether we can include two tables in Sqoop as the destination for a full load.
