Big graph causes timeouts

I am having trouble querying a big graph, especially when applying filters. I want to order the nodes so that I can take the highest-degree ones, but the query keeps timing out, and the only trick I am applying is pre-limiting the nodes that get accessed:
g.V()
.hasLabel("Word")
.tail(100000)
.order().by(outE("RetrievedBy").count(), desc)
.limit(100)
.project("term", "degree")
.by("term")
.by(outE("RetrievedBy").count())
I am using Neptune with a db.r6g.xlarge instance.
spmallette (17mo ago)
That's a pretty expensive traversal. I assume you've already maxed out the timeout allowed in Neptune and still get the timeout failures?
M. alhaddad (17mo ago)
Yes, I have maxed out the timeout, so now I get memory errors instead. I have around 20 million nodes, and pre-limiting is just not a solution. The approach in https://groups.google.com/g/aureliusgraphs/c/TOQ2618KDnY did work, but I was not sure it would allow further computations later. The simplest score I need is the number of outbound edges, yet even that is an expensive traversal.
spmallette (17mo ago)
@neptune what is the best advice here?
Solution
triggan (17mo ago)
A few things here:

1. Neptune was originally designed as a database more in the mindset of TinkerPop OLTP, where the queries that perform best have a constrained set of starting conditions and a limited query frontier (the projected number of objects that may need to be assessed during query computation). Queries that traverse < 1M objects in the graph will perform with ~100ms of latency. Queries that need to process more than that will have a latency that scales linearly with the query frontier. (One way to estimate a query's frontier is the Gremlin profile endpoint; see the sketch after this list.)

2. For the most part, Gremlin queries are executed single-threaded inside of Neptune. Each Neptune instance has a number of query execution threads equal to 2x the number of vCPUs on that instance. More on the resource allocation here: https://docs.aws.amazon.com/neptune/latest/userguide/instance-types.html

3. The Graviton 2 processors (the "g" in the instance type) are great for smaller OLTP queries and will show better performance than the Intel processors for those queries. It has been noted in other forums (https://www.anandtech.com/show/15578/cloud-clash-amazon-graviton2-arm-against-intel-and-amd), however, that the Graviton 2 processors have a TLB that is less performant than same-generation Intel processors, making memory-intensive processing (slightly) less performant. So if you plan on running queries with a larger query frontier, the Intel processors will show some gains (and vice versa with smaller queries).
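To make "query frontier" less abstract, here is a minimal sketch of checking it with Neptune's Gremlin profile endpoint (/gremlin/profile). This is an illustration only, in Python; the endpoint name is a placeholder and the exact report format can vary by engine version:

import requests

NEPTUNE = "your-cluster.cluster-xxxxxxxx.us-east-1.neptune.amazonaws.com"  # placeholder

query = (
    "g.V().hasLabel('Word').tail(100000)"
    ".order().by(outE('RetrievedBy').count(), desc)"
    ".limit(100)"
    ".project('term', 'degree').by('term').by(outE('RetrievedBy').count())"
)

# Posting to /gremlin/profile instead of /gremlin returns a text report that
# includes how many objects/index operations the query touched, i.e. its frontier.
resp = requests.post(f"https://{NEPTUNE}:8182/gremlin/profile",
                     json={"gremlin": query}, timeout=300)
print(resp.text)

The report makes it easy to see whether a rewrite actually shrinks the number of objects the engine has to visit.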
triggan (17mo ago)
Overall, if you want to run these larger queries in Neptune, the best advice I can give you is to find a way to divide up the workload into smaller queries and run them with concurrency (noting the concurrency/thread architecture I mentioned above). One possible way to do this is to use the Neptune-Export tool (https://github.com/awslabs/amazon-neptune-tools/blob/master/neptune-export/readme.md). Neptune-Export was originally designed to export an entire graph, but it can also accept a number of "partitioned" queries (https://github.com/awslabs/amazon-neptune-tools/blob/master/neptune-export/readme.md#exporting-the-results-of-user-supplied-queries). It will create separate client threads/connections and issue them to Neptune with concurrency matching the Neptune instance size. By doing this, you effectively create an analytics/OLAP engine that runs on top of Neptune.
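If you would rather keep this in application code than set up Neptune-Export, the same idea can be sketched with gremlinpython: slice the Word vertices with range(), take each slice's top 100 by degree, and merge. This is only a rough sketch under assumptions (the endpoint, slice size, and worker count are placeholders, and each worker opens its own connection):

from concurrent.futures import ThreadPoolExecutor
from gremlin_python.driver.driver_remote_connection import DriverRemoteConnection
from gremlin_python.process.anonymous_traversal import traversal
from gremlin_python.process.graph_traversal import __
from gremlin_python.process.traversal import Order

ENDPOINT = "wss://your-cluster:8182/gremlin"   # placeholder
TOTAL_WORDS = 20_000_000                       # roughly the node count mentioned above
SLICE = 1_000_000                              # keep each query's frontier around 1M objects
WORKERS = 8                                    # ~2x vCPUs on a db.r6g.xlarge

def top_words_in_slice(start):
    # One connection per worker so the queries actually run concurrently.
    conn = DriverRemoteConnection(ENDPOINT, "g")
    try:
        g = traversal().withRemote(conn)
        return (g.V().hasLabel("Word")
                 .range_(start, start + SLICE)
                 .order().by(__.outE("RetrievedBy").count(), Order.desc)
                 .limit(100)
                 .project("term", "degree")
                 .by("term")
                 .by(__.outE("RetrievedBy").count())
                 .toList())
    finally:
        conn.close()

with ThreadPoolExecutor(max_workers=WORKERS) as pool:
    partials = list(pool.map(top_words_in_slice, range(0, TOTAL_WORDS, SLICE)))

# The global top 100 must appear in some slice's local top 100, so merging is safe.
merged = [row for rows in partials for row in rows]
top_100 = sorted(merged, key=lambda r: r["degree"], reverse=True)[:100]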
M. alhaddad (17mo ago)
Thanks @triggan. I just tried running a query with a limit of 1 million and it was fine, so a possible solution is to batch the target with range(). I've also read that Neptune is not intended for analytics but for simple OLTP tasks; could that be the reason, i.e. am I doing something on a DB that isn't optimized to do it? https://www.infoworld.com/article/3394860/amazon-neptune-review-a-scalable-graph-database-for-oltp.html#:~:text=Gremlin%20and%20SPARQL%20address%20different,with%20SELECT%20and%20WHERE%20clauses. If so, what can I do? I have a large number of nodes, and sometimes I need to traverse relationships, sometimes I need to do predictions/analytics tasks.
triggan (17mo ago)
"Simple" tasks is too much of a qualitative statement for me. I've seen customers write queries 10s or 100s of lines/statements in length that perform really well in Neptune. It all comes down to how many objects have to be referenced by the query to compute the result. If you need to do this at-scale, then using some means of concurrency/parallelism and multiple queries is the best way to tackle that. Neptune-Export is one way to do that without much effort on your end. Another method that I failed to mention was Neptune's integration with the AWS SDK for Pandas (https://github.com/aws/aws-sdk-pandas). This allows you to fetch a large portion of the graph into a Pandas Data Frame, perform computation using pandas libraries (or equivalent libraries that support Data Frames), and then write back into the graph also using a Pandas Data Frame as an input. But overall, Neptune can be used for both transactional and analytics use cases, it just takes an understanding of your query patterns and how to administer those queries to the database.