Weekend experiments with Generative AI — Part 1

Rahul Pradeep
3 min read · May 29, 2023


Generative AI has been blowing everyone’s mind, and I am no exception. Like many others, I have my daily conversations with ChatGPT or Bard. They have written me poems, stories and code. From “the meaning of life” to “how do you combine general relativity and quantum mechanics”, I have asked them everything. For the most part, the answers were satisfactory. For some, they made things up (hallucinated). Like humans, AI can make mistakes too! This remarkable progress in AI has got me excited, worried and confused all at the same time. I truly think this is going to disrupt the tech industry.

In this series of blog posts, I will jot down my learnings and thoughts from the various experiments I will be doing in the quest to understand GenAI deeply.

I have been playing around with LangChain this weekend. I asked my friend Bard to describe LangChain in about 50 words.

LangChain is a software development framework that simplifies the creation of applications using large language models. It provides a standard interface for LLMs, a selection of LLMs to choose from, and examples of end-to-end applications.

The one abstraction that stood out for me was Agents. Agents are essentially LLMs which are taught to think step by step, reasoning and acting at each step. This prompting framework is called ReAct. Given a question, the LLM is prompted to think step by step and reason about the next action to take. Agents are given access to a bunch of tools (e.g. Google search, a calculator, a code generator) which let them actually take those actions. The output of each action is observed, and the LLM reasons about the next action to take from those observations. This thought-action-observation loop continues until there is no action left to take and the question is answered.
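To make the loop concrete, here is a minimal sketch of how a ReAct-style agent can be wired up in LangChain with a search tool and a calculator. It assumes the LangChain API as it was at the time of writing (it has changed in newer releases) and a SerpAPI key for the search tool; the question at the end is just an illustrative placeholder.

```python
from langchain.agents import AgentType, initialize_agent, load_tools
from langchain.llms import OpenAI

llm = OpenAI(temperature=0)

# Tools the agent can act with: web search and a calculator
tools = load_tools(["serpapi", "llm-math"], llm=llm)

# ZERO_SHOT_REACT_DESCRIPTION uses the ReAct thought-action-observation prompting style
agent = initialize_agent(
    tools,
    llm,
    agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION,
    verbose=True,  # print the reasoning trace at each step
)

agent.run("Who is the current president of France, and what is their age raised to the power 0.5?")
```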

To play around with this, I created a simple app based on LangChain agents which loads your CSV data and lets you chat with it. The agent used Pandas as the tool to run operations on the dataset. I used the Kaggle sales conversion optimization dataset for this experiment.
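The app itself is a thin wrapper, but its core looks roughly like the sketch below, using LangChain’s CSV agent (which wraps a Pandas DataFrame agent under the hood). This again assumes the LangChain API at the time of writing; the file name and the question are placeholders.

```python
from langchain.agents import create_csv_agent
from langchain.llms import OpenAI

# Point the agent at the Kaggle sales conversion optimization CSV
# (file name assumed; use whatever path you downloaded it to)
agent = create_csv_agent(
    OpenAI(temperature=0),
    "KAG_conversion_data.csv",
    verbose=True,  # show the thought-action-observation loop
)

agent.run("Which campaign had the highest click through rate?")
```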

Here are some snippets of the chats I had with this agent. The verbose logs depict how the agent arrives at the result.

Here, the agent was able to understand what a click-through rate means. This is sort of expected, since it is a pretty common term and the LLM would have an understanding of it.

The agent wasn’t able to understand what conversion rate meant. It instead did all the calculations based on conversions. An honest mistake, I guess!
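One thing I would try next (I haven’t verified it on this dataset) is to spell the definition out in the question itself, so the agent doesn’t have to guess; the column names below are assumptions about the dataset’s schema.

```python
agent.run(
    "Conversion rate is Total_Conversion divided by Clicks. "
    "What is the overall conversion rate across all campaigns?"
)
```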

It is fascinating how the agent built up the Pandas query step by step. By this point, I am not at all surprised that the LLM understood that CTR and click-through rate mean the same thing.

It seems smart prompting makes an LLM smarter. Ideas like chain-of-thought reasoning form the basis of ReAct. I also stumbled upon https://www.promptingguide.ai/, which is a great guide that explains some of these concepts with examples.
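To give a flavour of what this looks like, here is a hand-written example of the ReAct prompt format (not actual LangChain output), where the model interleaves its reasoning with tool calls:

```
Question: What is the population of France divided by the population of Canada?
Thought: I need the population of France first.
Action: Search
Action Input: population of France
Observation: about 68 million
Thought: Now I need the population of Canada.
Action: Search
Action Input: population of Canada
Observation: about 39 million
Thought: I can compute the ratio with the calculator.
Action: Calculator
Action Input: 68 / 39
Observation: 1.74
Thought: I now know the final answer.
Final Answer: Roughly 1.7.
```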
