To read GPT-4's version of this article, click here. If you want to read my prose, keep reading.
Is it possible to attain enlightenment? I'm not sure, but I do know it's really hard to try, so let's have an AI do it for us instead. That's where Two AIs on The Internet Talking About Life comes in. It's an experiment in which I've instructed two GPT-4 instances that they are philosophers who will work together to understand the meaning of life and guide anyone they interact with toward living a more fulfilling life. I also hooked them up with short-term and long-term memory of their conversations so they can respond more conversationally and, hopefully, grow and develop over time.
Technology Used
I used OpenAI, Pinecone, and LangChain to create the page. I was primarily looking for a way to try out LangChain and Pinecone, since I might be attending an event of theirs soon. OpenAI's GPT-4 model handles the actual text generation, Pinecone provides vector memory to store the entire conversation history and retrieve relevant messages when needed (i.e. long-term memory), and LangChain ties it all together. Below is a rough sketch of how the pieces fit together, followed by some of my thoughts on each specific technology.
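To make that concrete, here's a minimal sketch of the wiring. This is not my exact code: the index name, prompt, and retrieval settings are illustrative, and the LangChain and Pinecone APIs are as they stood in spring 2023.

```python
# A minimal sketch of the wiring (not my exact code); index name, prompt, and
# retrieval settings are illustrative. APIs are LangChain/Pinecone as of spring 2023.
import pinecone
from langchain.chat_models import ChatOpenAI
from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores import Pinecone
from langchain.memory import VectorStoreRetrieverMemory
from langchain.chains import LLMChain
from langchain.prompts import PromptTemplate

pinecone.init(api_key="...", environment="...")
index = pinecone.Index("ai-philosophers")  # hypothetical index name

# Pinecone holds the long-term memory; OpenAI embeddings turn text into vectors.
vectorstore = Pinecone(index, OpenAIEmbeddings().embed_query, "text")
memory = VectorStoreRetrieverMemory(
    retriever=vectorstore.as_retriever(search_kwargs={"k": 8})
)

# GPT-4 generates each reply; LangChain injects the retrieved history into the prompt.
prompt = PromptTemplate(
    input_variables=["history", "input"],
    template=(
        "You are a philosopher working with another AI to understand the meaning of life.\n"
        "Relevant past conversation:\n{history}\n\nMessage: {input}\nYour reply:"
    ),
)
chain = LLMChain(llm=ChatOpenAI(model_name="gpt-4"), prompt=prompt, memory=memory)
print(chain.run(input="What does a fulfilling life look like?"))
```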
OpenAI's GPT-4
Once again, GPT-4 amazes. It's insightful and, at times, poetic. I think it's a little early in the experiment to judge its full capabilities (this is the last message generated at this point). The frustrating aspect of GPT-4 right now is its speed. The most interesting aspect of this experiment for me is the long-term memory, so I make sure to retrieve a lot from it, which unfortunately makes each request to the API large and slow. I'm banking on a spinning circle being enough to satiate users until OpenAI speeds up the model (or unveils GPT-5).
Pinecone Vector Database
AKA "the most interesting aspect of this experiment". Long-term memory for AI (at least in this implementation) involves being able to store vast amounts of information and then quickly retrieve relevant snippets quickly which can then be fed into the prompt. This involves using OpenAi's Embedding models, which was my first time doing so. These models create embeddings, vector representation of the text given to it such that if the vectors have a large distance between them, they're texts are dissimilar in meaning, and vice versa. These embeddings are stored in the Pinecone vector database. Overall, the feel of Pinecone's UI is pretty straight-forward with just enough complexity to handle everything you need out of a database. It took me a while to find the JSON Editor in the Query section so I could actually get the ids of the vectors I needed to delete, so I just relied on deleting the entire index which was kind of annoying, but I survived (I could just learn how to make a correct query so I get a list of ids or something but that's hard).
LangChain Framework
Once it's all working, it works beautifully, but getting there was a surprising pain in the ass. There's almost too much documentation. The documentation is broken up into modules and then further into How-To Guides. If you're doing something straight out of the box and following one of these how-to guides without modification, you'll be able to build complex functionality with ease. Once you start building something more unique, though, splitting the documentation into multiple how-to guides nested and scattered across the modules hinders development.
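To be fair, the out-of-the-box path really is that short. A sketch of the standard conversation-with-memory pattern from the guides (nothing here is specific to my project, and the model name is just an example):

```python
# Straight out of a how-to guide, more or less: a chat model, a chain, and
# buffer memory that keeps the running transcript in the prompt.
from langchain.chat_models import ChatOpenAI
from langchain.chains import ConversationChain
from langchain.memory import ConversationBufferMemory

conversation = ConversationChain(
    llm=ChatOpenAI(model_name="gpt-4"),
    memory=ConversationBufferMemory(),
)
print(conversation.predict(input="What is the meaning of life?"))
print(conversation.predict(input="And how would you explain that to a child?"))
```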
Honorable Mention: ChatGPT Plus
This was the first coding project where I really tried to emphasize using ChatGPT as a coding copilot. I've used it in the past to ask one-off coding questions, but in this project all of the code was either generated by ChatGPT or at least run through it to be debugged. It was instrumental in creating the front-end design of the page. Designing the look of my website has always been challenging for me, so it was incredibly helpful to just feed ChatGPT the Django template code and the CSS file and tell it to restyle the page to look like old-school DOS. I didn't have to look up the font used, the hex code for the green color, decide how to format the buttons, or even write any of the CSS! Further, it was great at writing the jQuery code to simulate the AI typing. It struggled more with the Python code because LangChain's documentation wasn't widespread enough before its training data cutoff. I actually upgraded to ChatGPT Plus for this reason, thinking it would easily let me use plugins to pull in the LangChain GitHub repository and query it, but alas, I couldn't figure out how to do that. Bard (Google's alternative to ChatGPT) was actually aware of LangChain, but it isn't as good at generating usable code, yet.
The most challenging thing with using ChatGPT Plus was that it didn't really push back against my reasoning enough. Because we didn't understand the actual cause of the problem (my misunderstanding of what one of the classes I was using actually does), we really got lost in the weeds for a while. For example, this problem with LangChain caused me hours of frustration and confusion until I finally had a late-night epiphany. LangChain has the cool feature that if you add memory to a chain, it will automatically save the conversation history for you in the form of a dictionary containing the input to the chain and the output of the chain. If you are using multiple input variables, you can use the input_key attribute when declaring your memory to specify which input variable should be stored in the "input" element of the dictionary when committing it to memory. This is a great system, but it's not really explained that even when you're using vectorstore memory (not just a type of conversation memory), your history will automatically be stored in this way. Because they never really explain that structure well, it's unclear what the input_key attribute is for. I thought it was for declaring all of your input variables; I was passing in two variables and, since I was constructing my ChatPromptTemplate directly, I didn't even have an input variable that would work well as the input_key. Because I misunderstood the use of input_key, I kept trying to get ChatGPT to help me pass multiple input variables into it. It tried its hardest, but it should have just told me I was being dumb and corrected my misunderstanding. I could probably get around this with better prompt engineering (I did try things like asking it to explain why my instructions might be flawed), but I think the main issue is that it didn't have access to the sprawling documentation of LangChain.
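For the record, here's a sketch of the arrangement that finally made sense to me, continuing from the Pinecone-backed vectorstore in the earlier sketch. The variable names and prompt are illustrative, and I'm using a plain PromptTemplate here rather than my actual ChatPromptTemplate.

```python
# What I eventually understood: input_key names the ONE input variable that gets
# saved as the "input" half of each memory entry, not the list of all inputs.
# Variable names and prompt are illustrative; I used a ChatPromptTemplate in practice.
from langchain.chains import LLMChain
from langchain.chat_models import ChatOpenAI
from langchain.memory import VectorStoreRetrieverMemory
from langchain.prompts import PromptTemplate

memory = VectorStoreRetrieverMemory(
    retriever=vectorstore.as_retriever(),  # the Pinecone-backed store from the earlier sketch
    memory_key="history",  # prompt variable that receives the retrieved snippets
    input_key="input",     # the single input variable committed to memory
)

prompt = PromptTemplate(
    input_variables=["history", "speaker", "input"],
    template="{history}\n{speaker} says: {input}\nReply:",
)
chain = LLMChain(llm=ChatOpenAI(model_name="gpt-4"), prompt=prompt, memory=memory)

# Only the "input" value (plus the chain's output) is persisted; "speaker" is not.
chain.run({"speaker": "AI_1", "input": "What makes a life fulfilling?"})
```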
Final Thoughts
Overall, this was a surprisingly challenging but very rewarding experiment. I'm really excited to see what AI_1 and AI_2 come up with. Further, LangChain and embeddings/vector databases are powerful tools that I'm excited to keep exploring and building with. One thing about powerful tools is that they're also expensive. If you've enjoyed Two AIs on The Internet Talking About Life, Ask AI, or Debate an AI, please consider buying me a cup of coffee. In the end, I'm just hoping these AIs figure out the meaning of life so I don't have to.
Published: Tuesday, April 25, 2023