"Simple" tasks is too much of a qualitative statement for me. I've seen customers write queries 10s or 100s of lines/statements in length that perform really well in Neptune. It all comes down to how many objects have to be referenced by the query to compute the result. If you need to do this at-scale, then using some means of concurrency/parallelism and multiple queries is the best way to tackle that. Neptune-Export is one way to do that without much effort on your end.
Another method that I failed to mention is Neptune's integration with the AWS SDK for Pandas (https://github.com/aws/aws-sdk-pandas). This allows you to fetch a large portion of the graph into a pandas DataFrame, perform computation using pandas (or equivalent libraries that support DataFrames), and then write the results back into the graph, again using a DataFrame as the input.
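A rough sketch of that round trip, with the pandas computation runnable as-is; the `awswrangler` calls (endpoint, query, column shape) are shown only as commented assumptions about what the fetch and write-back would look like in a real setup:

```python
import pandas as pd

# In a real setup you would fetch this frame from Neptune via the
# AWS SDK for Pandas (awswrangler), along the lines of:
#   client = wr.neptune.connect("<cluster-endpoint>", 8182)
#   df = wr.neptune.execute_opencypher(client, "MATCH (n:Person) RETURN n")
# Here we construct an illustrative frame of vertices directly.
df = pd.DataFrame({
    "~id": ["p1", "p2", "p3"],
    "~label": ["Person"] * 3,
    "score": [10, 30, 20],
})

# Whole-graph computation happens in pandas, not in the database:
# e.g. rank every Person by score.
df["rank"] = df["score"].rank(ascending=False).astype(int)

# Writing back would hand the same ~id/~label-shaped frame to the SDK:
#   wr.neptune.to_property_graph(client, df)
```

The point is that the analytics step runs entirely client-side on the DataFrame, so the database only sees a bulk read and a bulk write rather than one heavy analytical query.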
But overall, Neptune can be used for both transactional and analytics use cases; it just takes an understanding of your query patterns and how to administer those queries to the database.