Learn and Do with Peter, 2024-03-09

The overall topic today was "Why (or why not) to build our own LLM front-end", with lots of smaller topics mixed in. It may be continued tomorrow, unless attendees want to switch to something different.

YouTube: https://youtu.be/uIB7jz6h_Vo

Next: Learn and Do with Peter, 2024-03-10

Previous: AI 101 Live Session, 2024-03-02

Notes: Learn and Do with Peter Notes, 2024-03-09

AI Summary (hopefully useful, may be inaccurate)

Quick Recap

Pete and the team discuss a variety of topics, including advanced skills, AI work, image generators, and computer setup. They explore the idea of creating an alternative LLM front end using Python and consider practical projects. Pete emphasizes the importance of understanding the structure of a language model and the benefits of knowing its basic components. The team discusses the possibility of using alternative front ends to access multiple AIs and weighs the cost-effectiveness of various software tools. They also consider creating their own alternative front end for ChatGPT, highlighting the learning and customization possibilities. Towards the end, they discuss the pricing of AI engine access, along with the potential risks and future of search engine strategies and LLM front ends.

Summary

Pete's Discussion on Advanced Skills and Image Workflow

Pete initiated a conversation about various topics without a specific agenda. The discussion revolved around advanced skills, AI work, image generators, and computer setup. Pete shared his experiences with different terminal-mode interfaces and with image viewers such as Phoenix Slides, xv, IrfanView, and XnView. He also highlighted the necessity of an image workflow when dealing with a large number of images and suggested open-source tools like Darktable. Finally, he discussed his search for alternative interfaces, preferring a web-based or multi-platform solution.

Exploring Alternative LLM Front Ends and Practical Projects

Pete, Jeannie, and R discussed the possibility of creating an alternative LLM front end for multiple LLMs, focusing on writing a simple front end in Python that works with Claude and OpenAI. They also considered working on practical projects that would be useful for each person, such as learning Midjourney tips and tricks. Jeannie shared her experience with a project called DxOS Composer, a blockchain-based front end for Dolly.
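
To make the idea concrete, here is a minimal sketch of what such a Python front end might look like. It is not the code from the session: it assumes the official `openai` and `anthropic` Python packages are installed, that OPENAI_API_KEY and ANTHROPIC_API_KEY are set in the environment, and the model names are placeholders. The "front end" is really just the small loop at the bottom; everything else is two short calls to the back ends.

```python
"""Minimal multi-provider chat front end (sketch, not the session's code)."""
from openai import OpenAI
from anthropic import Anthropic

openai_client = OpenAI()     # picks up OPENAI_API_KEY from the environment
claude_client = Anthropic()  # picks up ANTHROPIC_API_KEY from the environment

def ask(provider: str, prompt: str) -> str:
    """Send one prompt to the chosen back end and return the reply text."""
    if provider == "openai":
        resp = openai_client.chat.completions.create(
            model="gpt-4o",  # placeholder model name
            messages=[{"role": "user", "content": prompt}],
        )
        return resp.choices[0].message.content
    elif provider == "claude":
        resp = claude_client.messages.create(
            model="claude-3-opus-20240229",  # placeholder model name
            max_tokens=1000,
            messages=[{"role": "user", "content": prompt}],
        )
        return resp.content[0].text
    raise ValueError(f"unknown provider: {provider}")

if __name__ == "__main__":
    # Tiny terminal "front end": pick a back end, type prompts, read the answers.
    provider = input("Which back end (openai/claude)? ").strip()
    while True:
        prompt = input("> ")
        if not prompt:
            break
        print(ask(provider, prompt))
```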

Understanding Back Ends: A Language Model Perspective

Pete discussed the structure of a large language model, comparing it to a restaurant's front and back of house. He emphasized the simplicity of the back end and the importance of understanding how it works in order to make informed decisions about front ends and back ends. He also used the analogy of looking at the engine under the hood of a luxury car to get a feel for what the back end is doing. Towards the end, he suggested using the terminal, learning simple GUI concepts, and learning Python to unlock more capabilities.
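
As a rough illustration of how simple that back end really is from the outside, here is a hedged sketch (not from the session): the whole "back of house" trip is a single HTTPS request to a hosted engine, and any front end, whether web page, desktop app, or terminal script, is just whatever gathers the prompt and shows the reply. The endpoint shown is OpenAI's public chat-completions API; the model name is a placeholder, and the `requests` package and an OPENAI_API_KEY environment variable are assumed.

```python
"""What a front end actually sends to the back end (sketch)."""
import os
import requests

def back_end(prompt: str) -> str:
    """The whole 'back of house' visit: one HTTPS request, one JSON reply."""
    resp = requests.post(
        "https://api.openai.com/v1/chat/completions",
        headers={"Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}"},
        json={
            "model": "gpt-4o",  # placeholder model name
            "messages": [{"role": "user", "content": prompt}],
        },
        timeout=60,
    )
    resp.raise_for_status()
    return resp.json()["choices"][0]["message"]["content"]

# The 'front of house' is everything below this line.
if __name__ == "__main__":
    print(back_end(input("> ")))
```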

Language Model Components and Front Ends

Pete discussed the benefits of understanding the basic components of language models, likening it to being able to open a car's hood and recognize the parts. He stressed the importance of knowing how to navigate these systems rather than needing to understand their internal workings. Jeannie expressed interest in learning more about building the front end of an LLM. The team also considered the possibility of using alternative front ends to access multiple AIs at a lower cost, though Pete cautioned that exploring these could be complicated. R asked whether there are services that could perform this task for a fee. Pete concluded by saying they would revisit the idea of writing their own front end.

Exploring Alternative Front-End Solutions

Pete discussed the homework assignment, which involved sharing notes on a website, and encouraged participants to interact and contribute. He also brought up the idea of exploring pre-built alternative front-end solutions and explained the concept of a 'stack' in programming terms. Pete shared the link to the notes tool again and discussed the process of finding alternative front ends using search engines. He emphasized the challenges of finding alternatives for emerging technologies and proposed creating a HackMD page or a section in Obsidian to list and evaluate the different options. He closed by returning to the idea of a stack and the importance of creating a checklist to track tasks.

Software Tools and Cost-Effectiveness

Pete discussed the features and potential uses of the MacGPT and ChatGPT software, expressing concerns about the cost of MacGPT. He also suggested that Obsidian could be used for free with a plugin called Text Generator. Pete highlighted the importance of cost-effectiveness and trustworthiness in software tools. He discussed the difficulty of finding alternative front ends beyond MacGPT and the issues he encountered with a premium front-end platform, noting its lack of transparency and potential bugs. Pete also mentioned problems he ran into with a free chatbot UI tool, expressing frustration and questioning its reliability. Despite these issues, Pete noted that some of the tools had useful features.

Creating an Alternative Front End for ChatGPT

Pete and R discussed the possibility of creating their own alternative front end for ChatGPT. Pete shared that he had already written and used an alternative front end of his own, and emphasized the learning and customization possibilities this project could offer. R suggested that creating their own front end could save time and allow for better understanding. Pete clarified that he wasn't currently using an alternative front end for text, but he did use one he had developed himself specifically for image descriptions. The team decided to treat building their own front end as a learning and customization project, with Pete suggesting they do it in stages.
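
For illustration only, here is a sketch of how such an image-description helper could be put together; this is not Pete's script. It assumes the official `openai` package, an OPENAI_API_KEY in the environment, a vision-capable model (the name below is a placeholder), and JPEG input files.

```python
"""Image-description helper (sketch, not Pete's actual script)."""
import base64
import sys
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def describe_image(path: str) -> str:
    """Send one local image to the model and return a short description."""
    with open(path, "rb") as f:
        b64 = base64.b64encode(f.read()).decode("ascii")
    resp = client.chat.completions.create(
        model="gpt-4o",  # placeholder vision-capable model
        messages=[{
            "role": "user",
            "content": [
                {"type": "text",
                 "text": "Describe this image in one or two sentences."},
                {"type": "image_url",
                 # Assumes JPEG input for simplicity.
                 "image_url": {"url": f"data:image/jpeg;base64,{b64}"}},
            ],
        }],
    )
    return resp.choices[0].message.content

if __name__ == "__main__":
    for image_path in sys.argv[1:]:
        print(image_path, "->", describe_image(image_path))
```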

AI Engine Pricing and Alternatives Discussed

Pete discussed the pricing of AI engine access, noting that while it may seem free, the full product actually costs around $20 a month, which is still very cheap. He emphasized that the real business of these companies is building the engine, not the end product, and suggested that finding or building an alternative front end for the engine could lead to significant cost savings. Pete also touched on the business model of companies that offer free usage of AI engines as a way to draw users into their own business models. He then spoke about a potential shift in an ongoing project, which might focus on image generation if certain attendees were present. He demonstrated a tool that helps generate Python scripts and explained the process, though he ran into an expired API key that they were close to fixing. Pete estimated that with some more time they could get the tool to accept direct input rather than just files.
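
As a back-of-envelope illustration of why pay-per-use engine access can undercut a flat subscription, here is a small cost estimate. The per-token prices are assumptions made up for the arithmetic, not quoted rates; real pricing varies by model and changes over time, so check the provider's pricing page.

```python
"""Back-of-envelope API cost estimate (illustrative numbers only)."""

PRICE_PER_1K_INPUT = 0.01   # assumed $ per 1,000 input tokens (not a real quote)
PRICE_PER_1K_OUTPUT = 0.03  # assumed $ per 1,000 output tokens (not a real quote)

def monthly_cost(chats_per_day, input_tokens_per_chat=500,
                 output_tokens_per_chat=500, days=30):
    """Rough monthly spend for a given amount of chatting."""
    input_cost = chats_per_day * days * input_tokens_per_chat / 1000 * PRICE_PER_1K_INPUT
    output_cost = chats_per_day * days * output_tokens_per_chat / 1000 * PRICE_PER_1K_OUTPUT
    return input_cost + output_cost

if __name__ == "__main__":
    for chats in (5, 20, 50):
        print(f"{chats:3d} chats/day = about ${monthly_cost(chats):.2f}/month")
```

Under these assumed prices, light use (a handful of chats a day) comes out well under a $20 monthly subscription, while heavy use can exceed it, which is the trade-off a home-built front end lets you see directly.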

Search Engine Strategies and API Keys

Pete and R discussed the potential risks and the future of search engine strategies and LLM front ends. Pete suggested acting as a middleman to mitigate the risk of a platform disappearing, noting that API keys typically evolve over time. He likened API keys to swipe cards, explaining their role in accessing the various services provided by an AI system's 'kitchen'. Pete also described how API keys can change due to terms-and-conditions or pricing changes, and noted that they are inexpensive and easy to replace. Jeannie found Pete's explanation helpful and asked further questions about the potential effects on their system.
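
One practical illustration of why a changed key is painless: keep the key out of the code entirely, for example in an environment variable, so replacing a rotated or revoked 'swipe card' is a one-line configuration change rather than a code change. A minimal sketch, assuming the conventional OPENAI_API_KEY variable name:

```python
"""Loading an API key from the environment (sketch)."""
import os
import sys

def load_api_key(var_name: str = "OPENAI_API_KEY") -> str:
    """Fetch the 'swipe card' from the environment instead of hard-coding it."""
    key = os.environ.get(var_name)
    if not key:
        sys.exit(f"Set {var_name} in your environment before running this script.")
    return key

if __name__ == "__main__":
    key = load_api_key()
    print("Key loaded, ending in:", key[-4:])  # never print the whole key
```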