As I work on creating a custom adapter for integrating a specific database with the Microsoft Bot Framework, my approach is to model it on the CosmosDbPartitionedStorage class within the Bot Framework SDK.
From my reading of the source, there appear to be three main operations - read, write, and delete - that need to be implemented from the botbuilder Storage interface. However, are there any database-level considerations I should keep in mind when building this adapter that may not be evident from simply reading through the source code layers? For instance, the initialization method that is Cosmos-specific - how should I adapt this for the solution I am aiming for?
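For context, the contract I believe needs implementing looks roughly like this (a sketch based on my reading of botbuilder's Storage interface; the in-memory MemoryStore is just an illustrative stand-in, not part of the SDK):

```typescript
// Sketch of the botbuilder Storage contract: three async operations
// keyed by string, exchanging plain objects (StoreItems).
interface StoreItems {
  [key: string]: any;
}

interface Storage {
  read(keys: string[]): Promise<StoreItems>;
  write(changes: StoreItems): Promise<void>;
  delete(keys: string[]): Promise<void>;
}

// Illustrative in-memory implementation, just to show the shape a
// database-backed adapter has to fill in.
class MemoryStore implements Storage {
  private items: StoreItems = {};

  async read(keys: string[]): Promise<StoreItems> {
    const result: StoreItems = {};
    for (const key of keys) {
      // Missing keys are simply omitted from the result.
      if (key in this.items) {
        result[key] = this.items[key];
      }
    }
    return result;
  }

  async write(changes: StoreItems): Promise<void> {
    Object.assign(this.items, changes);
  }

  async delete(keys: string[]): Promise<void> {
    for (const key of keys) {
      delete this.items[key];
    }
  }
}
```

A real adapter replaces the object literal with database calls, which is where database-specific concerns (connection setup, serialization, eTags) come in.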
My plan involves two databases, one of which is Redis. I intend to test this setup against an Azure Redis instance during local development, as I believe it is a good starting point. So initially the focus will be on creating a Redis adapter.
Update: I eventually opted for a Redis-only cluster solution, which has proven stable. I wasn't able to implement concurrency checking, since it requires a server-side script (of the kind I am already employing for my CRUD operations), but this might be addressed in a future update.
The assistance provided by @mrichardson in the reply below was crucial in developing my own data store. Most of the essential base unit tests for my TypeScript implementation now pass, except for the concurrency test.
Using Redis, I built an adapter that supports JSON via the RedisJSON module. This Redis module needs to be loaded into the server, either at startup from the command line or via a loadmodule directive in the configuration file.
I chose luin's ioredis library. Working with it was challenging due to the complexities of integrating with Redis, especially when used in a cluster alongside the RedisJSON module, but it turned out to be a rewarding experience!
Because I opted for the RedisJSON module, I had to resort to Lua scripts - SCRIPT LOAD and EVALSHA - for every CRUD operation, falling back to EVAL if necessary and re-loading the script upon failure.
I am uncertain how much of a performance improvement EVALSHA Lua scripting brings for plain read and write operations alone, but the Redis documentation does suggest its benefits:
A major benefit of scripting is that it can both read and write data with minimal latency, making sequences like read, compute, write extremely fast. Pipelining cannot help in such scenarios, since the client needs the response of the read command before it can issue the write command.
Yet, my decision to employ scripting primarily stemmed from a limitation of the ioredis client: it lacks native support for RedisJSON commands. To work around this, I either had to define a custom command in ioredis (which impedes pipelining but offers an EVALSHA fallback) or build my own fallback from EVALSHA to EVAL.
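A minimal sketch of the fallback pattern I ended up with, assuming only an ioredis-like evalsha/eval surface (the ScriptClient interface and runScript helper here are illustrative names, not my published API):

```typescript
import { createHash } from "crypto";

// The subset of an ioredis-style client this sketch relies on.
interface ScriptClient {
  evalsha(sha: string, numKeys: number, ...args: string[]): Promise<unknown>;
  eval(script: string, numKeys: number, ...args: string[]): Promise<unknown>;
}

// EVALSHA addresses a script by the SHA-1 of its source, which is the
// same digest SCRIPT LOAD returns, so we can compute it client-side.
const shaCache = new Map<string, string>();

function sha1(script: string): string {
  return createHash("sha1").update(script).digest("hex");
}

// Try the cached EVALSHA first; on a NOSCRIPT error (script not in the
// server's cache, e.g. after a restart or failover), fall back to EVAL,
// which also re-registers the script on the server as a side effect.
async function runScript(
  client: ScriptClient,
  script: string,
  numKeys: number,
  ...args: string[]
): Promise<unknown> {
  const sha = shaCache.get(script) ?? sha1(script);
  shaCache.set(script, sha);
  try {
    return await client.evalsha(sha, numKeys, ...args);
  } catch (err) {
    if (err instanceof Error && err.message.startsWith("NOSCRIPT")) {
      return await client.eval(script, numKeys, ...args);
    }
    throw err;
  }
}
```

The same wrapper fronts every CRUD script, so the adapter code never has to care whether the script is already loaded on the node it happens to hit.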
So far, the results have been impressive!
The codebase caters to a Redis Cluster, and once I finalize a few adjustments, I aim to publish the TypeScript implementation on GitHub and as an npm package.
Additionally, stored items also carry a TTL setting, which provides a valuable security and performance abstraction, ideal for applications like the Microsoft Bot Framework.
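As a sketch, applying the TTL after a JSON write looks something like this (assuming an ioredis-style generic call method for arbitrary commands; writeWithTtl and CommandClient are illustrative names, and the "$" root path assumes a recent RedisJSON version):

```typescript
// Minimal surface for sending arbitrary commands, as ioredis's call()
// allows for module commands the client does not know natively.
interface CommandClient {
  call(command: string, ...args: (string | number)[]): Promise<unknown>;
}

// Write a JSON document and, optionally, attach a TTL so stale bot
// state expires on its own (ttlSeconds is a made-up adapter option).
async function writeWithTtl(
  client: CommandClient,
  key: string,
  json: string,
  ttlSeconds?: number
): Promise<void> {
  // RedisJSON stores the document at the root path "$".
  await client.call("JSON.SET", key, "$", json);
  if (ttlSeconds !== undefined) {
    // EXPIRE applies to the whole key, including the JSON document.
    await client.call("EXPIRE", key, ttlSeconds);
  }
}
```

In the scripted version both commands live in one Lua script, so the write and its expiry are applied atomically on the server.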