<?xml version="1.0" encoding="utf-8" standalone="yes"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom">
  <channel>
    <title>pytorch on robr.dev</title>
    <link>https://robr.dev/tags/pytorch/</link>
    <description>Recent content in pytorch on robr.dev</description>
    <generator>Hugo -- gohugo.io</generator>
    <language>en-us</language>
    <lastBuildDate>Wed, 19 Feb 2025 21:26:35 -0800</lastBuildDate><atom:link href="https://robr.dev/tags/pytorch/index.xml" rel="self" type="application/rss+xml" />
    <item>
      <title>Masking video backgrounds with Apple DepthPro and PyTorch</title>
      <link>https://robr.dev/2025/jupyter-pytorch-video-depth/</link>
      <pubDate>Wed, 19 Feb 2025 21:26:35 -0800</pubDate>
      
      <guid>https://robr.dev/2025/jupyter-pytorch-video-depth/</guid>
      <description>Last year I wrote about loading and saving video in a Jupyter Notebook for frame-by-frame processing with PyTorch. Today I&amp;rsquo;d like to explain more of the actual image processing that motivated me. It started from tinkering with Apple&amp;rsquo;s Depth Pro model. I just wanted to see how it performed with some arbitrary video and maybe use it to separate the background from the foreground.
Today I&amp;rsquo;ll focus on just two main tasks that differ from the last notebook:</description>
    </item>
    
    <item>
      <title>Frame-by-frame video processing in a Jupyter Notebook with PyTorch</title>
      <link>https://robr.dev/2024/jupyter-pytorch-video-processing/</link>
      <pubDate>Tue, 19 Nov 2024 22:25:13 -0800</pubDate>
      
      <guid>https://robr.dev/2024/jupyter-pytorch-video-processing/</guid>
      <description>Today&amp;rsquo;s goal is just to load a video, display individual frames in the output from a Jupyter notebook cell, and write the video back out to a new file. In the middle I&amp;rsquo;ll do a little processing on the video frames. The processing is beside the point today - I just want to make the input, interaction, and output work really well so that later I can focus more on that processing step in the middle.</description>
    </item>
    
    <item>
      <title>A local RAG for local memories</title>
      <link>https://robr.dev/2024/local-rag-on-llama-index/</link>
      <pubDate>Tue, 01 Oct 2024 21:51:25 -0700</pubDate>
      
      <guid>https://robr.dev/2024/local-rag-on-llama-index/</guid>
      <description>I read a lot. But I don&amp;rsquo;t read the way I used to read. I used to read books. Now I read articles online, conversation threads, and plenty of Wikipedia. Reading for me now feels a lot less structured and a lot more sprawling than it was when I was younger. It&amp;rsquo;s not just the time in my life that&amp;rsquo;s passed, though; the kinds of reading material available have changed a lot.</description>
    </item>
    
  </channel>
</rss>
