Tag Archives: testing

Neo4j is faster than MySQL at performing recursive queries

A user on StackOverflow was wondering about the relative performance of Neo4j and MySQL for a recursive query. They started with Neo4j performing the query in 240 seconds. Then an optimized Cypher query got them down to 40 seconds. Then I got them down to…
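To make “recursive query” concrete: in a graph the whole hierarchy comes back from a single variable-length pattern, whereas MySQL typically needs one self-join per level (or, in newer versions, a recursive CTE). Below is a minimal sketch using the Neo4j Java driver; the Category label, CHILD_OF relationship type and connection details are illustrative assumptions, not the actual query from the StackOverflow question.

import org.neo4j.driver.v1.*;

public class RecursiveQuery {
    public static void main(String[] args) {
        // Assumed local instance and credentials -- adjust for your setup
        Driver driver = GraphDatabase.driver("bolt://localhost:7687",
                AuthTokens.basic("neo4j", "password"));
        try (Session session = driver.session()) {
            // One variable-length pattern pulls back the entire subtree,
            // no matter how deep it goes
            StatementResult result = session.run(
                    "MATCH (root:Category {id: {id}})<-[:CHILD_OF*]-(descendant) " +
                    "RETURN descendant.id",
                    Values.parameters("id", 1));
            while (result.hasNext()) {
                System.out.println(result.next().get("descendant.id"));
            }
        }
        driver.close();
    }
}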
Continue reading

Our own Multi-Model Database – Part 6

Back in Part 2 we ran some JMH tests to see how many empty nodes we could create. Let’s try that test one more time, but adding some properties. Our nodes will have a username, an age and a weight randomly assigned. It’s not a long test, but just enough to give us a ballpark.
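As a rough sketch of what such a test could look like (the ChronicleGraph class name and addNode signature are borrowed from the Part 2 benchmark names and are assumptions, not the post’s actual code):

import org.openjdk.jmh.annotations.*;

import java.util.HashMap;
import java.util.Random;

@State(Scope.Thread)
public class NodeWithPropertiesBenchmark {
    private ChronicleGraph db;   // hypothetical handle to our database
    private Random rand;

    @Setup(Level.Iteration)
    public void prepare() {
        db = new ChronicleGraph();
        rand = new Random();
    }

    @Benchmark
    @BenchmarkMode(Mode.Throughput)
    public void measureCreateNodesWithProperties() {
        // 1000 nodes per invocation, each with three random properties
        for (int i = 0; i < 1000; i++) {
            HashMap<String, Object> properties = new HashMap<>();
            properties.put("username", "user" + rand.nextInt());
            properties.put("age", rand.nextInt(100));
            properties.put("weight", 100 + rand.nextInt(200));
            db.addNode("user" + i, properties);
        }
    }
}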
Continue reading

Our own Multi-Model Database – Part 4

Please read parts 1, 2 and 3 before continuing or you’ll be lost.

We started adding an HTTP server to our database last time and created just a couple of endpoints. Today we’ll finish out the rest of the endpoints. We’ll also be good open source developers by hooking in Continuous Integration, Test Coverage and Continuous Deployment.
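For flavor, here is a minimal sketch of what one of these endpoints might look like, using the Spark Java micro-framework and an in-memory map standing in for the database; the framework and routes are illustrative assumptions, not necessarily what the series actually builds.

import static spark.Spark.*;

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class Server {
    private static final Map<String, String> nodes = new ConcurrentHashMap<>();

    public static void main(String[] args) {
        port(8080);

        // PUT /node/:id stores the request body as the node
        put("/node/:id", (request, response) -> {
            nodes.put(request.params(":id"), request.body());
            response.status(201);
            return "";
        });

        // GET /node/:id returns the stored node, or a 404 if it doesn't exist
        get("/node/:id", (request, response) -> {
            String node = nodes.get(request.params(":id"));
            if (node == null) {
                response.status(404);
                return "";
            }
            response.type("application/json");
            return node;
        });
    }
}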

Continue reading

Our own Multi-Model Database – Part 3

If you haven’t read part 1 and part 2 then do that first or you’ll have no clue what I’m doing, and I’d like to be the only one not knowing what I’m doing.

We’ve built the beginnings of this database, but so far it’s just a library, and for it to be a proper database we need to be able to talk to it. Following in Neo4j’s footsteps, we will wrap a web server around our database and see how it performs.

There are a ton of Java-based frameworks and micro-frameworks out there. Not as bad as the JavaScript folks, but that still leaves us with a lot of choices. So, as any developer would, I turned to benchmarks done by other people of stuff that doesn’t apply to me, and you won’t believe what I found… scratch that, yes you will, I got benchmarks.
Continue reading

Our own Multi-Model Database – Part 2

If you haven’t read part 1 then do that first or this won’t make sense. Well, nothing makes sense, but this especially won’t.

So before going much further, I decided to benchmark our new database and found that our addNode speed is phenomenal, but creating relationships takes forever. See some JMH benchmarks below:

Benchmark                                                           Mode  Cnt     Score     Error  Units
ChronicleGraphBenchmark.measureCreateEmptyNodes                    thrpt   10  1548.235 ± 556.615  ops/s
ChronicleGraphBenchmark.measureCreateEmptyNodesAndRelationships    thrpt   10     0.165 ±   0.007  ops/s

Each time I was creating 1000 users, so this test shows us we can create over a million empty nodes in one second. Yeah, ChronicleMap is damn fast. But then when I tried to create 100 relationships for each user (100,000 total) it took forever (about 6 seconds). So I opened up YourKit, and you won’t believe what I found out next (come on, that’s some good clickbait).
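For reference, here is a rough reconstruction of what the slow benchmark might look like, based only on the method names in the results above; the addNode and addRelationship signatures are assumptions, and the method would live alongside the other @Benchmark methods in the benchmark class.

@Benchmark
@BenchmarkMode(Mode.Throughput)
public void measureCreateEmptyNodesAndRelationships() {
    // 1000 empty nodes...
    for (int i = 0; i < 1000; i++) {
        db.addNode("user" + i);
    }
    // ...then 100 outgoing relationships per node, 100,000 in total
    for (int i = 0; i < 1000; i++) {
        for (int j = 1; j <= 100; j++) {
            db.addRelationship("FOLLOWS", "user" + i, "user" + ((i + j) % 1000));
        }
    }
}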
Continue reading

Our own Multi-Model Database – Part 1

I may be remembering this wrong, but I think it was Henry Rollins who once asked, “What came first, the shitty Multi-Model Databases or the Drugs?” His confusion was over whether:

A) there were a bunch of developers dicking around with their Mac laptops and they wrote a shitty database, put it on GitHub, posted it on Hacker News, and then other developers who were on drugs started using it, or…

B) there were a bunch of developers on ketamine and ecstasy and somebody said let’s write a shitty database.

I think “A” is what probably happens and how we end up with over 300 databases on DB-Engines. But what about “B”? Well, I don’t have any good stuff lying around, but I did hurt my foot the other day and the doctors gave me some Tramadol, so let’s down some of that and see what happens.
Continue reading

Scaling Cypher Writes

Let’s talk about writes, baby. Let’s talk about you and me. Let’s talk about all the good things and the bad things that may be. Let’s talk about writes, and indexing, and batching, and transactions in Neo4j. Let’s start with my environment: a 3-year-old MacBook Pro (dying to get the new ones… once they finally come out) running a 4-core 2.3 GHz Intel Core i7 that is hyper-threading and pretending to have 8 cores, and an Apple SM256E SSD that is about average as far as SSDs go. Definitely not a production-grade server, so bear that in mind.
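One common batching pattern is to send many rows through a single parameterized UNWIND statement instead of one small transaction per node. A sketch with the Neo4j Java driver follows; the connection details, label and batch size are assumptions for illustration, not necessarily the exact queries benchmarked in the post.

import org.neo4j.driver.v1.*;

import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

import static org.neo4j.driver.v1.Values.parameters;

public class BatchedWrites {
    public static void main(String[] args) {
        Driver driver = GraphDatabase.driver("bolt://localhost:7687",
                AuthTokens.basic("neo4j", "password"));

        // Build 10,000 property maps to write in one go
        List<Map<String, Object>> batch = new ArrayList<>();
        for (int i = 0; i < 10_000; i++) {
            Map<String, Object> row = new HashMap<>();
            row.put("username", "user" + i);
            batch.add(row);
        }

        try (Session session = driver.session()) {
            // One parameterized statement, one transaction, many rows
            session.run("UNWIND {batch} AS row CREATE (n:User) SET n = row",
                    parameters("batch", batch));
        }
        driver.close();
    }
}

The win comes from amortizing transaction and statement-planning overhead across the whole batch rather than paying it once per node.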
Continue reading

Benchmarks and Superchargers

For the most part, I hate competitive benchmarks. The vendor who publishes them always seems to come out on top regardless. The numbers are always amazing, but once you start digging in a little bit you start to see faults in what is actually being measured, and it never applies to real-world workloads. For example, Cassandra claims 1 million writes per second on 300 servers. Then Aerospike claims 1 million writes per second on 50 servers. MongoDB claims almost 32k writes per second on a single server, while claiming Cassandra can only do 6k w/s and Couch can only do 1.2k w/s on a single server… Then ScyllaDB claims almost 2 million writes per second on 3 servers, blowing everybody away.
Continue reading

Using the Testing Harness for Neo4j Extensions

I’ve been creating both unit tests and integration tests for Neo4j Unmanaged Extensions for far too long. The Neo4j Testing Harness was introduced in version 2.1.6 to simplify our lives and just do integration tests. Let’s try it on and see just how awesome we look. First thing we need to do is add the dependency to our project:
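Once the harness dependency is in place, a test can spin up a throwaway server with the extension mounted. A minimal sketch of what that looks like follows; MyExtension and the /v1/hello path are placeholders, not the extension from the post.

import org.junit.Assert;
import org.junit.Test;
import org.neo4j.harness.ServerControls;
import org.neo4j.harness.TestServerBuilders;

import java.net.HttpURLConnection;
import java.net.URL;

public class MyExtensionTest {
    @Test
    public void shouldRespondToHello() throws Exception {
        // Starts an in-process Neo4j server with the extension mounted at /v1
        try (ServerControls server = TestServerBuilders.newInProcessBuilder()
                .withExtension("/v1", MyExtension.class)
                .newServer()) {
            URL url = server.httpURI().resolve("/v1/hello").toURL();
            HttpURLConnection connection = (HttpURLConnection) url.openConnection();
            Assert.assertEquals(200, connection.getResponseCode());
        }
    }
}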
Continue reading

Giving Neo4j 2.2 a Workout

Neo4j 2.2 is getting released any day now, so let’s put the Release Candidate through its paces with Gatling. Once we download and start it up, we’ll notice it wants us to authenticate.
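The Gatling simulations themselves are written in Scala, but the new authentication requirement is easy to see with a few lines of plain Java against the 2.2 REST endpoint. A quick sketch, assuming a fresh install still using the default neo4j/neo4j credentials (which 2.2 asks you to change on first use):

import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;
import java.util.Base64;

public class AuthCheck {
    public static void main(String[] args) throws Exception {
        URL url = new URL("http://localhost:7474/db/data/");
        HttpURLConnection connection = (HttpURLConnection) url.openConnection();

        // Without this header, 2.2 answers with 401 Unauthorized
        String credentials = Base64.getEncoder()
                .encodeToString("neo4j:neo4j".getBytes(StandardCharsets.UTF_8));
        connection.setRequestProperty("Authorization", "Basic " + credentials);

        System.out.println("HTTP " + connection.getResponseCode());
    }
}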
Continue reading
