Data is everywhere… all around us, but sometimes the medium it is stored in can be a problem when analyzing it. Chances are you have a ton of data sitting around in a relational database in your current application… or you have begged, borrowed or scraped to get the data from somewhere and now you want to use Neo4j to find how this data is related.
Michael Hunger wrote a batch importer to load CSV data quickly, but for some reason it hasn’t received a lot of love. We’re going to change that today: I’m going to walk you through getting your data out of tables and into nodes and edges.
Let’s clone the project and jump in.
git clone git://github.com/jexp/batch-import.git
cd batch-import
It uses Maven, so if you haven’t already, go ahead and install it.
sudo apt-get install maven2
Now let’s assemble the project per the instructions:
mvn clean compile assembly:single
If you did it right, you should see:
[INFO] ------------------------------------------------------------------------
[INFO] BUILD SUCCESSFUL
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 47 seconds
[INFO] Finished at: Tue Feb 28 15:50:14 UTC 2012
[INFO] Final Memory: 13M/33M
[INFO] ------------------------------------------------------------------------
Awesome… let’s create some test data. Michael packed in a data generator, let’s compile it and run it.
javac ./src/test/java/TestDataGenerator.java -d .
java TestDataGenerator
It will take a little while, and then you should see this:
Creating 7500000 Nodes and 41242882 Relationships took 13 seconds.
ls -al
-rw-r--r-- 1 max max  111388909 2012-02-28 16:11 nodes.csv
-rw-r--r-- 1 max max 1217775358 2012-02-28 16:11 rels.csv
So what’s in nodes.csv?
head -5 nodes.csv
Node    Rels    Property
0       4       TEST
1       0       TEST
2       1       TEST
3       1       TEST
The format is property_1, property_2, property_3 separated by tabs… and rels.csv:
head -5 rels.csv
Start     Ende      Type    Property
5496772   6842185   FIVE    Property
7416995   6166503   FOUR    Property
6712458   6853172   THREE   Property
1291639   296708    TWO     Property
The format is start node reference, end node reference number, relationship type, property_1 also separated by tabs.
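If you want to hand-roll a small pair of files in this layout yourself, a minimal Ruby sketch would look something like this (the column names mirror the generated files; the data values here are made up):

```ruby
# Write tiny nodes.csv and rels.csv files in the same tab-separated
# layout the test data generator produces. Values are illustrative only.

File.open("nodes.csv", "w") do |f|
  f.puts %w[Node Rels Property].join("\t")          # header row
  f.puts ["0", "4", "TEST"].join("\t")              # one row per node
  f.puts ["1", "0", "TEST"].join("\t")
end

File.open("rels.csv", "w") do |f|
  f.puts %w[Start Ende Type Property].join("\t")    # header row
  f.puts ["0", "1", "ONE", "Property"].join("\t")   # start ref, end ref, type, property
end
```

The node references in rels.csv are just the zero-based line positions of the nodes in nodes.csv, which is why the two files have to be generated together.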
Now we are ready to try out this test data. Run the command:
java -server -Xmx4G -jar target/batch-import-jar-with-dependencies.jar target/db nodes.csv rels.csv
…and go grab a soda or a cup of coffee unless you happen to like watching dots on the screen, as this will take a minute or three depending on your hardware. If you are doing this test on an EC2 c1.medium instance it ain’t gonna work (trust me, I know), so do it on a box with at least 4 GB of RAM:
Importing 7500000 Nodes took 17 seconds
Lots of dots....
Importing 41242882 Relationships took 164 seconds
203 seconds
Ok so where is it?
ls -al target/db
-rw-r--r-- 1 max max   67500025 2012-02-28 08:58 neostore.nodestore.db
-rw-r--r-- 1 max max 1998458182 2012-02-28 08:58 neostore.propertystore.db
-rw-r--r-- 1 max max 1361015130 2012-02-28 08:58 neostore.relationshipstore.db
…and a bunch of other files.
Great. Now assuming you have my Neography gem installed, let’s get a fresh copy of Neo4j and put these in there.
echo "require 'neography/tasks'" >> Rakefile
rake neo4j:install
mv target/db neo4j/data/graph.db
rake neo4j:start
Go to your Neo4j Dashboard and take a look:
Now everything should be working correctly. In part 2 of this series, I’ll show you how to write some SQL queries to get your data into Neo4j.
My thesis work requires filling a Neo4j server instance with at least 1M nodes (+ their relationships) as quickly as possible. (I am using Neo4j server instead of embedded as I need to communicate between servers running on different machines.)
I tried the REST API batch operations (via Neography) but realised that it is not the way to go. Then I found your entry and now I am trying to use batch-import. It works, but it takes too much time. My testbed is an AWS Large instance with 7.5GB RAM and 2 virtual cores.
As a comparison; you have written that “Importing 7500000 Nodes took 17 seconds”, the same value for me is 8 times larger, 138 seconds.
The batch importer has been running for 2.5 hours and is still putting out dots, but the last and only thing it printed was “Importing 7500000 Nodes took 138 seconds”.
Do you have any idea what slows down the operation?
Could you please share your test configuration…
Thanks a lot for your great blog and for neography…
Did you ever solve this issue? I’m facing the exact same problem: I want to add a lot of data into a remote Neo4j Server instance and I don’t want to / can’t shut down the DB for that or take the embedded approach. Did you have any luck in the end?
2.5 hours? Something is not right. Do your nodes and relationships have a ton of properties? Can you check inside the graph.db folder being created and see the file sizes growing? Are you indexing (that’s a bit slower than creating nodes and relationships)? Post your answers on the neo4j google forum and we’ll figure this out.
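On the “check the file sizes growing” suggestion, a quick Ruby sketch for watching the store files during an import (the directory path is an assumption; point it at your import target):

```ruby
# Return a hash of { filename => size in bytes } for every regular file
# in a directory, e.g. the graph.db folder being written by the importer.
def store_sizes(dir)
  Dir.glob(File.join(dir, "*"))
     .select { |f| File.file?(f) }
     .map    { |f| [File.basename(f), File.size(f)] }
     .to_h
end

# Example: run this every few seconds during an import; if the
# relationship and property store sizes are not increasing, the
# importer is stuck rather than slow.
# store_sizes("target/db").each { |name, bytes| puts "#{name}: #{bytes}" }
```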
I was trying to install this using maven as your instructions suggest but I’m getting the following error:
C:\Users\GBS\git\batch-import>mvn clean compile assembly:single
[INFO] Scanning for projects…
[INFO] Building Simple Batch Importer 0.1-SNAPSHOT
[WARNING] The POM for org.neo4j:neo4j-kernel:jar:1.8-SNAPSHOT is missing, no dependency information available
[WARNING] The POM for org.neo4j:neo4j-lucene-index:jar:1.8-SNAPSHOT is missing, no dependency information available
[INFO] BUILD FAILURE
[INFO] Total time: 0.453s
[INFO] Finished at: Sat Oct 20 18:38:24 EDT 2012
[INFO] Final Memory: 6M/77M
[ERROR] Failed to execute goal on project batch-import: Could not resolve dependencies for project org.neo4j:batch-import:jar:0.1-SNAPSHOT: The following artifacts could not be resolved: org.neo4j:neo4j-kernel:jar:1.8-SNAPSHOT, org.neo4j:neo4j-lucene-index:jar:1.8-SNAPSHOT: Failure to find org.neo4j:neo4j-kernel:jar:1.8-SNAPSHOT in http://m2.neo4j.org/content/repositories/snapshots was cached in the local repository, resolution will not be reattempted until the update interval of Neo4j Snapshots has elapsed or updates are forced -> [Help 1]
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/DependencyResolutionException
I’ve searched and can’t seem to find anything concerning the error above. Hopefully you can point me in the right direction.
Open up the pom.xml and change the two 1.8-SNAPSHOT versions to 1.8. Michael will update his repo shortly.
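For reference, the change amounts to something like this in pom.xml (a sketch assumed from the artifact names in the error message above; the actual file layout may differ):

```xml
<!-- Replace the unavailable snapshot versions with the 1.8 release.
     groupId/artifactId values taken from the Maven error output. -->
<dependency>
  <groupId>org.neo4j</groupId>
  <artifactId>neo4j-kernel</artifactId>
  <version>1.8</version>
</dependency>
<dependency>
  <groupId>org.neo4j</groupId>
  <artifactId>neo4j-lucene-index</artifactId>
  <version>1.8</version>
</dependency>
```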
Thanks Max! That got it :)
Tried to compile TestDataGenerator. Initially couldn’t find the file, then found it in /src/test/java/org/neo4j/batchimport/TestDataGenerator.java.
But then got compile errors:
./src/test/java/org/neo4j/batchimport/TestDataGenerator.java:3: package org.junit does not exist
./src/test/java/org/neo4j/batchimport/TestDataGenerator.java:14: cannot find symbol
symbol: class Ignore
Can you post your error to https://groups.google.com/forum/?fromgroups#!forum/neo4j ?
We can better help you there.
is there a release (JAR file) available somewhere? Building it is such a pain…thanks!
You can grab this one from my public dropbox => https://dl.dropbox.com/u/57740873/batch-import-jar-with-dependencies.jar
Thanks! For those who rarely use Maven projects it’s a real help :)
Anyway, here is how to do it with NetBeans:
– clone the project
– open it in Netbeans
– right-click on the project name, select Properties, then the Actions panel
– select Build with Dependencies and add this goal to the Execute Goals settings: ‘assembly:single’
– add also the property Skip Tests
– press OK
– right-click again on the project name and select Resolve Problems at the bottom to download the dependencies.
– right-click again and select Build with Dependencies
Max, thanks for all these tutorials. Have you noticed that batch-import tool does not support UTF-8 encoding? No accents, no non-English characters at all, this is a massive problem for many of us. I have already raised the issue in github, do you have any idea how to make it work?
I am trying to import 117,000,000+ nodes as well as their relationships and indexes on a server, using the batch-import jar run from NetBeans. We did it before, but indexing wasn’t implemented correctly, so we are debugging and re-running to check why indexing isn’t working, testing as much as possible on a smaller example (2M nodes without relationships) before trying the same thing on the big files. The problem is that each run on the server with the big files takes more than 27 hours and ends with it not working: “oh, maybe this is why; run again on a small example; great, looks like we found it; run on the big files; 27 hours later: oh, not working again.” My question is: is there a way to speed up the running time on this big example with the aforementioned number of nodes?
There is a batch.properties file read when you run the batch import. The defaults are for a small graph; tweak them to much larger values depending on your expected graph size.
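The original comment embedded a gist with the exact settings, which didn’t survive here. The knobs in question are the memory-mapping entries in batch.properties; a sketch with illustrative values (size them to your actual store files, these numbers are assumptions):

```properties
# Memory-mapped I/O budgets for the individual store files.
# Values below are illustrative only -- scale them to the sizes
# you see in your own graph.db directory.
neostore.nodestore.db.mapped_memory=200M
neostore.relationshipstore.db.mapped_memory=2G
neostore.propertystore.db.mapped_memory=1G
neostore.propertystore.db.strings.mapped_memory=500M
neostore.propertystore.db.index.keys.mapped_memory=10M
neostore.propertystore.db.index.mapped_memory=10M
```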
Thanks for your answer. I don’t know what’s going wrong on my machine, though: after changing to larger values, importing the nodes takes more than two and a half hours, when it had been taking one hour before. It became slower.
Hello, I am having problems executing queries on an already established graph that has 118 million nodes and 140 million relationships. In the beginning it was a memory problem; then I changed the initmemory and maxmemory options to proper values (on a server with 250GB of RAM), which made life much better. But then, while re-running the very same queries that proved this memory change effective, they started throwing a heap memory exception, which is driving me crazy. I think the problem is in the buffer size. The Neo4j website says that Xmx should be increased, but nothing is explicitly written about how and where to change this heap value. The last thing I tried, after some guesses based on the extremely vague info on the website, was adding wrapper.java.additional=Xmx and wrapper.java.additional=Xss entries, unfortunately to no avail. It even got worse as far as the Linux command “cat /proc/meminfo” is concerned, as the “buffers” line shows a smaller value than before. Any directions on how to effectively change the buffer size?
It’s a “broken pipe” exception
Hello people, I have an issue when importing: at some point I get an error that says “The requested operation cannot be performed on a file with a user-mapped section open”. Could anyone help me?
I tried your command “java -server -Xmx4G -jar target/batch-import-jar-with-dependencies.jar target/db nodes.csv rels.csv” in order to import the nodes and relationships. I also received an error trying to create the test data, which I resolved by running TestDataGenerator.java in Eclipse (by importing the project as a Maven project). Is there a way I can do the imports in a similar manner in Eclipse?
run this command to generate the test data: mvn clean test-compile exec:java -Dexec.mainClass=org.neo4j.batchimport.TestDataGenerator -Dexec.classpathScope=test -Dexec.args=sorted
From what I understand from the Neo4j documentation, you can either have Neo4j embedded or call it through REST. Can you create your Neo4j database in an embedded environment (Java API) and then access it through REST?
Yes. You can do both.
I am trying batch-import but getting this error. I am new to Neo4j; can you help me insert bulk data into Neo4j? Actually, I want to do performance testing of Neo4j.
javac ./src/test/java/TestDataGenerator.java -d .
javac: file not found: ./src/test/java/TestDataGenerator.java
use -help for a list of possible options
I ran into a similar problem. That file doesn’t exist anymore in the newest version on github. I think the tutorial is out of date.
I switched to following the project’s readme file => https://github.com/jexp/batch-import/blob/master/readme.md
but it was unclear how to go from CSV into Neo4j.
I tried to import data from a CSV file and it ran successfully for 100 nodes/records. But when I try to import 300 nodes/records it imports only 100 nodes. I don’t know why this is happening. Is there any setting that caps the number of nodes to import?
Hello, I’m running the 2.0 branch. After installation I run the Maven command, but execution fails on the sample files with
Caused by: java.lang.IllegalStateException: Index users not configured.
I thought the program would set up the index automatically? I even tried creating the index manually with CREATE INDEX ON :users(name), but it still fails on that piece of code.
Any suggestions? the indexing functionality looks interesting.
I am trying to find an importer to load an .owl file into Neo4j.
Can anyone help me with this?
Just to let you know that in Neo4j 2.0 we must set “allow_store_upgrade=true” in neo4j.properties, under the conf folder. Cheers.
I have used a different nodes.csv and rels.csv file.
Everything seems fine and after importing it shows the following message:
But there is no data in the db. I have tried executing the following Cypher query: “START n=node(*) RETURN n;” and it returns 0 rows. It should show at least 4 nodes according to nodes.csv.
Am I missing something?
Eagerly waiting for help.
Did you copy the graph.db directory made by the batch importer into your neo4j/data directory and restart it?
Cannot compile the data generator; it gives me errors:
root@srv:/home/alex/batch-import# javac ./src/test/java/org/neo4j/batchimport/TestDataGenerator.java -d .
./src/test/java/org/neo4j/batchimport/TestDataGenerator.java:3: error: package org.junit does not exist
./src/test/java/org/neo4j/batchimport/TestDataGenerator.java:14: error: cannot find symbol
symbol: class Ignore
./src/test/java/org/neo4j/batchimport/TestDataGenerator.java:29: error: cannot find symbol
System.out.println("Using: TestDataGenerator "+nodes+" "+relsPerNode+" "+ Utils.join(types, ",")+" "+(sorted?"sorted":""));
symbol: variable Utils
location: class TestDataGenerator
When trying to run the included generate.sh script, I’m getting:
java.lang.StringIndexOutOfBoundsException: String index out of range: -1
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
running on ubuntu server 12.04 lts
2.0 branch of batchimport
When I copy all the files inside the graph.db folder and paste them into neo4j/data/graph.db, then start the server, it doesn’t work. But when I create a new graph.db and start the server again, it works fine. Don’t know why?