xAPI to AI – a No Code Solution

If you’ve been following my journey of discovery on how to use xAPI with AI, you’ll be aware of what I’ve tried to date. For those who haven’t, check out the links below to get up to speed.

Whilst still ‘playing’, <cough> I mean researching, I stumbled across an article that was a game changer (for me, anyway!).

Allow me to share.

I was already using LM Studio as part of my journey, so I was familiar with how it works, having spent many hours of trial and error querying an LLM on my local machine.

Through endless hours of research, a new term kept popping up – Retrieval-Augmented Generation, aka RAG. Asking ChatGPT to explain it in layman’s terms, I got this:

  1. Retrieval: When you ask a question, the system first searches a large database or collection of documents to find relevant information. It’s like looking through a library to find the best books or articles related to your question.
  2. Augmented: This retrieved information is then added to the context of your question. It’s as if you read a few pages from those relevant books to get a better understanding.
  3. Generation: The language model, using this augmented context, generates a response. It’s like an expert who, after reading the relevant information, gives you a well-informed answer.
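The three steps above can be sketched in a few lines of Python. This is a toy illustration only – the keyword-overlap retriever and sample documents are my own stand-ins; real tools like AnythingLLM use vector embeddings for the retrieval step.

```python
# Toy sketch of Retrieval-Augmented Generation (RAG).
# Retrieval here is naive keyword overlap, purely for illustration.

def retrieve(question: str, documents: list[str], top_k: int = 1) -> list[str]:
    """Step 1 - Retrieval: rank documents by words shared with the question."""
    q_words = set(question.lower().split())
    scored = sorted(
        documents,
        key=lambda d: len(q_words & set(d.lower().split())),
        reverse=True,
    )
    return scored[:top_k]

def build_prompt(question: str, context: list[str]) -> str:
    """Step 2 - Augmented: prepend the retrieved context to the question."""
    joined = "\n".join(context)
    return f"Use the context below to answer.\n\nContext:\n{joined}\n\nQuestion: {question}"

# Step 3 - Generation: the augmented prompt is what gets sent to the LLM
# (for example, the model hosted by LM Studio's local server).
docs = [
    "xAPI statements record learner activity as actor, verb and object.",
    "Bananas are a good source of potassium.",
]
prompt = build_prompt(
    "What does an xAPI statement contain?",
    retrieve("What does an xAPI statement contain?", docs),
)
```

The key idea is that the model never gets trained on your data – the relevant chunks are simply pasted into the prompt at question time.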

So, from what I am reading, this is exactly what I am trying to do! Not necessarily training an LLM, but feeding it data. In saying that (and now thinking out loud), I may need to look at training an LLM to understand and interpret xAPI… hmm, next step perhaps?

Right, with that out of the way, let’s look at how we can pull this together.

Download LM Studio from https://lmstudio.ai/

Find a model to run in LM Studio. There are thousands of models, which can be overwhelming. You can see them all on Hugging Face. So far, for this part of the journey, I ended up trying the following models.

It’s an ongoing trial to see which ones work best. If you run out of space (as some of these models can get quite large), just delete and try another one.

Here’s the next little nugget that changed it for me: AnythingLLM.

For those who don’t know, this tool lets you combine the local server running in LM Studio (with your selected model) with an AnythingLLM workspace, into which you can upload just about any data to query.

It is still a work in progress, but this cuts down hugely on time and you don’t need any programming knowledge, allowing much more scope for trying different models and data formats.
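For the curious, here is roughly what AnythingLLM is doing behind the scenes. LM Studio exposes an OpenAI-compatible server once you start it (by default at http://localhost:1234/v1), and any client can POST a chat completion to it. The port, temperature and context string below are my assumptions – check the server tab in LM Studio for your actual address.

```python
# Sketch of querying LM Studio's OpenAI-compatible local server directly.
# Assumes LM Studio's server is running on its default port (1234).

import json
import urllib.request

LM_STUDIO_URL = "http://localhost:1234/v1/chat/completions"

def build_payload(question: str, context: str) -> dict:
    """Bundle uploaded data (as context) and the question into a chat request."""
    return {
        "messages": [
            {"role": "system", "content": f"Answer using this data:\n{context}"},
            {"role": "user", "content": question},
        ],
        "temperature": 0.2,
    }

def ask(question: str, context: str) -> str:
    """Send the request to the local server and return the model's reply."""
    req = urllib.request.Request(
        LM_STUDIO_URL,
        data=json.dumps(build_payload(question, context)).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["choices"][0]["message"]["content"]

# Example (requires LM Studio's server to be running):
# answer = ask("Which actor completed the most activities?", raw_xapi_json)
```

Tools like AnythingLLM wrap this loop in a UI and add the retrieval step automatically, which is exactly why no code is needed.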

Below is what I came up with using the Mistral Instruct v0.2 7B Q4 model, with raw xAPI JSON uploaded into the workspace.

Not bad for a no code solution!

Speaking of data, we need to get it from an LRS. I put together a little web app (yours to download and use) that connects to an LRS, fetches statements and outputs them as CSV, JSONL or raw xAPI JSON.

Whilst this is not perfect, it gets you started on your own journey of discovery.

The code can be downloaded and used directly at https://xapi.com.au/demos/ai-csv/. The public ADL LRS is used for testing. Note that the hosted version returns no more than 1,000 statements, but you can download it for your own use.
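If you want to roll your own version of that fetch-and-convert step, a minimal sketch looks like this. The endpoint and credentials are placeholders, and the flattened CSV columns are my own choice – the one part that is fixed is the `X-Experience-API-Version` header, which the xAPI specification requires on every request.

```python
# Sketch: fetch xAPI statements from an LRS and save as JSONL or CSV.
# Endpoint, credentials and the CSV column choices are illustrative.

import base64
import csv
import json
import urllib.request

def fetch_statements(endpoint: str, username: str, password: str, limit: int = 100) -> list[dict]:
    """GET statements from the LRS using Basic auth."""
    token = base64.b64encode(f"{username}:{password}".encode()).decode()
    req = urllib.request.Request(
        f"{endpoint}/statements?limit={limit}",
        headers={
            "Authorization": f"Basic {token}",
            "X-Experience-API-Version": "1.0.3",  # required by the xAPI spec
        },
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["statements"]

def to_row(stmt: dict) -> dict:
    """Flatten the actor/verb/object triple into one CSV row."""
    return {
        "actor": stmt.get("actor", {}).get("name", ""),
        "verb": stmt.get("verb", {}).get("display", {}).get("en-US", ""),
        "object": stmt.get("object", {}).get("id", ""),
        "timestamp": stmt.get("timestamp", ""),
    }

def save(statements: list[dict], fmt: str, path: str) -> None:
    """Write statements out as JSONL or flattened CSV."""
    if fmt == "jsonl":
        with open(path, "w") as f:
            for s in statements:
                f.write(json.dumps(s) + "\n")
    elif fmt == "csv":
        with open(path, "w", newline="") as f:
            writer = csv.DictWriter(f, fieldnames=["actor", "verb", "object", "timestamp"])
            writer.writeheader()
            writer.writerows(to_row(s) for s in statements)
```

JSONL keeps every field intact for the AI to chew on, while the CSV flattening trades detail for something a model (or a human) can scan at a glance.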

The little gem that got me here can be found at:



© The Digital Learning Guy | xapi.com.au
ABN 364 4183 4283