<?xml version="1.0" encoding="utf-8" standalone="yes"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom">
  <channel>
    <title>llm on robr.dev</title>
    <link>https://robr.dev/tags/llm/</link>
    <description>Recent content in llm on robr.dev</description>
    <generator>Hugo -- gohugo.io</generator>
    <language>en-us</language>
    <lastBuildDate>Sun, 14 May 2023 18:41:18 -0700</lastBuildDate><atom:link href="https://robr.dev/tags/llm/index.xml" rel="self" type="application/rss+xml" />
    <item>
      <title>Oobabooga, GPT4All, and FastChat LLM frontends: ttftw 2023w20</title>
      <link>https://robr.dev/2023/ttftw-2023w20/</link>
      <pubDate>Sun, 14 May 2023 18:41:18 -0700</pubDate>
      
      <guid>https://robr.dev/2023/ttftw-2023w20/</guid>
      <description>Three things from this week.
I tried out three different ways to run an LLM on my home PC this week. A Large Language Model (LLM) is the kind of ML model that powers ChatGPT and other popular chatbots. Running one on my home PC lets me see just what these models can do and whether they&amp;rsquo;re useful to me. While a whole lot of different models have been emerging lately, there are also a few different frontends, or user interfaces, that can load a model and perform inference.</description>
    </item>
    
  </channel>
</rss>
