Category Archives: Random

Building a Boolean Logic Rules Engine in Neo4j

A boolean logic rules engine was the first project I did for Neo4j before I joined the company some 5 years ago. I was working for some start-up at the time but took a week off to play consultant. I had never built a rules engine before, but as far as I know ignorance has never stopped anyone from trying. Neo4j shipped me to the client site, and put me in a room with a projector and a white board where I live coded with an audience of developers staring at me, analyzing every keystroke and cringing at every typo and failed unit test. I forgot what sleep was, but managed to figure it out and I lost all sense of fear after that experience.

The data model chained together fact nodes with criss-crossing relationships, each chain carrying the same path id property, which we followed until reaching an end node that triggered a rule. There were a few complications along the way and more complexity near the end for ordering and partial matches. The traversal ended up being some 40 lines of the craziest Gremlin code I ever wrote, but it worked. After the proof of concept, the project was rewritten using the Neo4j Java API, because at the time only a handful of people could look at a 40-line Gremlin script and not shudder in horror. I think we’re up to two handfuls now.
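To make the shape of that model a little more concrete, here is a tiny sketch using the embedded Neo4j Java API (3.x style). The Fact and Rule labels, the AND relationship type, and the path_id property name are placeholders of mine, not the client’s actual schema.

import org.neo4j.graphdb.*;

public class RuleChainSketch {
    // Build one chain of facts that ends in a rule node.
    // Every relationship in the chain carries the same path id, so a traversal
    // can follow a single chain from fact to fact until it reaches the rule.
    public static void createChain(GraphDatabaseService db) {
        try (Transaction tx = db.beginTx()) {
            Node factA = db.createNode(Label.label("Fact"));
            Node factB = db.createNode(Label.label("Fact"));
            Node rule  = db.createNode(Label.label("Rule"));
            rule.setProperty("action", "trigger");

            factA.createRelationshipTo(factB, RelationshipType.withName("AND")).setProperty("path_id", 1L);
            factB.createRelationshipTo(rule,  RelationshipType.withName("AND")).setProperty("path_id", 1L);
            tx.success();
        }
    }
}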
Continue reading


Our own Multi-Model Database – Part 6


Back in Part 2 we ran some JMH tests to see how many empty nodes we could create. Let’s try that test one more time, but this time adding some properties. Our nodes will have a username, an age, and a weight, all randomly assigned. It’s not a long test, but it’s just enough to give us a ballpark.
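To give a rough idea of what such a test could look like, here is a minimal JMH sketch. A plain ConcurrentHashMap stands in for the ChronicleMap-backed node store from the earlier parts, and the property ranges are made up; it only shows the shape of the benchmark, not the actual code from the post.

import org.openjdk.jmh.annotations.*;

import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ThreadLocalRandom;

@State(Scope.Benchmark)
public class NodePropertiesBenchmark {
    // Stand-in for the node store: node id -> property map.
    private Map<String, Map<String, Object>> nodes;

    @Setup(Level.Iteration)
    public void prepare() {
        nodes = new ConcurrentHashMap<>();
    }

    @Benchmark
    public void measureCreateNodesWithProperties() {
        ThreadLocalRandom rand = ThreadLocalRandom.current();
        for (int i = 0; i < 1000; i++) {
            Map<String, Object> properties = new HashMap<>();
            properties.put("username", "user" + i);
            properties.put("age", rand.nextInt(18, 100));
            properties.put("weight", rand.nextDouble(50.0, 150.0));
            nodes.put("user" + i, properties);
        }
    }
}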
Continue reading


Our own Multi-Model Database – Part 3


If you haven’t read part 1 and part 2 then do that first or you’ll have no clue what I’m doing, and I’d like to be the only one not knowing what I’m doing.

We’ve built the beginnings of this database, but so far it’s just a library, and for it to be a proper database we need to be able to talk to it. Following in Neo4j’s footsteps, we will wrap a web server around our database and see how it performs.

There are a ton of Java-based frameworks and micro-frameworks out there. Not as bad as the JavaScript folks have it, but that still leaves us with a lot of choices. So, as any developer would do, I turned to benchmarks done by other people of stuff that doesn’t apply to me, and you won’t believe what I found… scratch that, yes you will: I got benchmarks.
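Just to illustrate what wrapping the library in a web endpoint means, here is a bare-bones sketch using the JDK’s built-in com.sun.net.httpserver. It is not necessarily the framework the benchmarks end up pointing to, and the /node/{id} route and the stubbed JSON response are my own inventions.

import com.sun.net.httpserver.HttpServer;

import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.nio.charset.StandardCharsets;

public class TinyGraphServer {
    public static void main(String[] args) throws Exception {
        HttpServer server = HttpServer.create(new InetSocketAddress(8080), 0);
        // GET /node/{id} -> look the node up in the library and return it as JSON (stubbed here)
        server.createContext("/node/", exchange -> {
            String id = exchange.getRequestURI().getPath().substring("/node/".length());
            byte[] body = ("{\"id\":\"" + id + "\"}").getBytes(StandardCharsets.UTF_8);
            exchange.getResponseHeaders().add("Content-Type", "application/json");
            exchange.sendResponseHeaders(200, body.length);
            try (OutputStream out = exchange.getResponseBody()) {
                out.write(body);
            }
        });
        server.start();
    }
}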
Continue reading


Our own Multi-Model Database – Part 2


If you haven’t read part 1 then do that first or this won’t make sense. Well, nothing makes sense, but this especially won’t.

So before going much further, I decided to benchmark our new database and found that our addNode speed is phenomenal, but creating relationships was taking forever. See the JMH benchmarks below:

Benchmark                                                           Mode  Cnt     Score     Error  Units
ChronicleGraphBenchmark.measureCreateEmptyNodes                    thrpt   10  1548.235 ± 556.615  ops/s
ChronicleGraphBenchmark.measureCreateEmptyNodesAndRelationships    thrpt   10     0.165 ±   0.007  ops/s

Each operation was creating 1000 users, so at roughly 1,548 ops/s this test shows we can create over 1.5 million empty nodes in one second. Yeah, ChronicleMap is damn fast. But then when I tried to create 100 relationships for each user (100,000 total), it was taking forever: about 6 seconds per run, which matches the 0.165 ops/s above. So I opened up YourKit, and you won’t believe what I found out next (come on, that’s some good clickbait).
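For reference, the relationship half of that workload would look something like this as a JMH benchmark. Again a plain ConcurrentHashMap stands in for the real store, and the adjacency layout (a set of outgoing node ids per user) is only an assumption to show the shape of the 1000 x 100 workload, not the post’s implementation.

import org.openjdk.jmh.annotations.*;

import java.util.HashSet;
import java.util.Map;
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

@State(Scope.Benchmark)
public class RelationshipBenchmark {
    // Stand-in for the relationship store: node id -> outgoing node ids.
    private Map<String, Set<String>> outgoing;

    @Setup(Level.Invocation)
    public void prepare() {
        outgoing = new ConcurrentHashMap<>();
        for (int i = 0; i < 1000; i++) {
            outgoing.put("user" + i, new HashSet<>());
        }
    }

    @Benchmark
    public void measureCreateRelationships() {
        // 1000 users x 100 relationships each = 100,000 relationships per invocation
        for (int i = 0; i < 1000; i++) {
            Set<String> rels = outgoing.get("user" + i);
            for (int j = 0; j < 100; j++) {
                rels.add("user" + ((i + j + 1) % 1000));
            }
            // write the set back, mimicking the get-modify-put round trip that an
            // off-heap, serialized store such as ChronicleMap forces on you
            outgoing.put("user" + i, rels);
        }
    }
}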
Continue reading


Our own Multi-Model Database – Part 1


I may be remembering this wrong, but I think it was Henry Rollins who once asked, “What came first, the shitty Multi-Model Databases or the Drugs?” His confusion was over whether:

A) there were a bunch of developers dicking around with their Mac laptops and they wrote a shitty database, put it on GitHub, posted it on Hacker News, and then other developers who were on drugs started using it, or…

B) there were a bunch of developers on ketamine and ecstasy and somebody said let’s write a shitty database

I think “A” is probably what happens, and that’s how we end up with over 300 databases on DB-Engines. But what about “B”? Well, I don’t have any good stuff lying around, but I did hurt my foot the other day and the doctors gave me some Tramadol, so let’s down some of that and see what happens.
Continue reading


Delivering a Graph Based Search solution to slightly wrong data


When it comes to databases, having good clean data is always important. Even more so with graphs, which deal with concepts as nodes and the relationships between them. Inevitably, you will run into messy data and have to deal with it. In a lot of the projects our customers work on, they are connecting multiple data sources to get to a “golden record” or single source of truth. It is a lofty goal, sometimes impossible to achieve, but we can use the relationships in the data to help us come close.

One option is to extract the features (or tags) of a composite object and see if any other object shares most of these features. If that is the case, then the two are possibly the same object and should be merged instead of creating a new record. A partial subgraph match like this is akin to a recommendation engine in Neo4j and is pretty trivial to write; take a look back at a few old blog posts for ideas.
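As a sketch of that idea, something like the following counts shared features with the embedded Neo4j Java API (3.x style) and flags anything that overlaps heavily. The HAS_FEATURE relationship type and the 80% threshold are illustrative choices of mine, not the customer’s actual model.

import org.neo4j.graphdb.*;

import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class DuplicateCheck {
    public static List<Node> possibleDuplicates(GraphDatabaseService db, long newRecordId) {
        List<Node> suspects = new ArrayList<>();
        Map<Node, Integer> shared = new HashMap<>();
        try (Transaction tx = db.beginTx()) {
            Node record = db.getNodeById(newRecordId);
            int features = 0;
            // for every feature of the new record, count the other objects that also have it
            for (Relationship hasFeature : record.getRelationships(Direction.OUTGOING, RelationshipType.withName("HAS_FEATURE"))) {
                features++;
                for (Relationship other : hasFeature.getEndNode().getRelationships(Direction.INCOMING, RelationshipType.withName("HAS_FEATURE"))) {
                    Node candidate = other.getStartNode();
                    if (!candidate.equals(record)) {
                        shared.merge(candidate, 1, Integer::sum);
                    }
                }
            }
            // anything sharing most (here 80%) of the features is a merge candidate, not a new record
            for (Map.Entry<Node, Integer> entry : shared.entrySet()) {
                if (features > 0 && entry.getValue() >= 0.8 * features) {
                    suspects.add(entry.getKey());
                }
            }
            tx.success();
        }
        return suspects;
    }
}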
Continue reading


Benchmarks and Superchargers


For the most part, I hate competitive benchmarks. The vendor who publishes them always seems to come out on top regardless. The numbers are always amazing, but once you start digging in a little bit you start to see faults in what is actually being measured, and it never applies to real-world workloads. For example, you have Cassandra claiming 1 million writes per second on 300 servers, then Aerospike claiming 1 million writes per second on 50 servers. MongoDB claims almost 32k writes per second on a single server, while claiming Cassandra can only do 6k w/s and Couch can only do 1.2k w/s on a single server… Then ScyllaDB claims almost 2 million writes per second on just 3 servers, blowing everybody away.
Continue reading


Modeling Airline Flights in Neo4j

Actor Leonardo DiCaprio as Frank Abagnale in the Steven Spielberg movie “Catch Me If You Can”

If you’ve come to any of the Neo4j Data Modeling classes I’ve taught, you must have heard me say “your model depends on both your data and your queries” about a million times. Let’s take a closer dive into what this means by looking at how one might model airline flight data in Neo4j.
Continue reading


Importing the Hacker News Interest Graph


Graphs are everywhere. Think about the computer networks that allow you to read this sentence, the road or train networks that get you to work, the social network that surrounds you and the interest graph that holds your attention. Everywhere you look, graphs. If you manage to look somewhere and you don’t see a graph, then you may be looking at an opportunity to build one. Today we are going to do just that. We are going to make use of the new Neo4j Import tool to build a graph of the things that interest Hacker News.
Continue reading


Caching Immutable Id lookups in Neo4j


If you’ve been following my blog for a while, you probably know I like using YourKit and Gatling for testing end-to-end requests in Neo4j. Today, however, we are going to do something a little different. We are going to micro-benchmark a very small piece of code within our Unmanaged Extension, using a Java library called JMH.
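To give a taste of the kind of tiny code worth micro-benchmarking here, below is one hedged sketch of caching an immutable lookup: mapping a unique username to its node id so the index lookup only happens once. The User label, the username property and the cache shape are my assumptions, not necessarily what the post measures, and the cache only holds if those nodes are never deleted.

import org.neo4j.graphdb.*;

import java.util.concurrent.ConcurrentHashMap;

public class UserIdCache {
    private static final ConcurrentHashMap<String, Long> CACHE = new ConcurrentHashMap<>();

    // Look the user up by its indexed username once, then serve the node id from the cache.
    public static Long getNodeId(GraphDatabaseService db, String username) {
        return CACHE.computeIfAbsent(username, key -> {
            try (Transaction tx = db.beginTx()) {
                Node user = db.findNode(Label.label("User"), "username", key);
                tx.success();
                return user == null ? null : user.getId();
            }
        });
    }
}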

Continue reading
