Testing with generated datasets showed that it scaled really well, allowing millions of memes to be searched in under a second even on relatively modest hardware. At the time of this writing I’m able to index and search the text of roughly 17 million memes on a shared Linode instance with only 6 cores and 16GB of RAM. This keeps costs relatively low, which is important for side-projects if you intend to keep them running for any length of time.
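To give a feel for what that kind of synthetic benchmark looks like, here's a minimal sketch. The exact search engine and dataset aren't specified in this section, so this uses SQLite's FTS5 as a stand-in, and the `fake_caption` vocabulary and row counts are purely illustrative; the real numbers above come from the actual index, not this toy.

```python
# Sketch of benchmarking full-text search over generated meme captions.
# Assumptions: SQLite FTS5 stands in for the real search engine; captions,
# vocabulary, and row counts are made up for illustration.
import random
import sqlite3
import time

WORDS = ["cat", "dog", "monday", "coffee", "boss", "gym", "pizza", "sleep",
         "meeting", "wifi", "homework", "diet", "traffic", "birthday"]

def fake_caption(rng: random.Random) -> str:
    # Build a short caption from a small vocabulary, roughly meme-length.
    return " ".join(rng.choices(WORDS, k=rng.randint(4, 12)))

def build_index(n_rows: int) -> sqlite3.Connection:
    rng = random.Random(42)
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE VIRTUAL TABLE memes USING fts5(caption)")
    with conn:  # single transaction keeps the bulk insert fast
        conn.executemany(
            "INSERT INTO memes(caption) VALUES (?)",
            ((fake_caption(rng),) for _ in range(n_rows)),
        )
    return conn

def time_query(conn: sqlite3.Connection, query: str) -> float:
    start = time.perf_counter()
    conn.execute(
        "SELECT rowid FROM memes WHERE memes MATCH ? LIMIT 20", (query,)
    ).fetchall()
    return time.perf_counter() - start

if __name__ == "__main__":
    conn = build_index(1_000_000)  # scale toward millions as RAM allows
    for q in ["coffee monday", "cat AND pizza", "gym"]:
        print(f"{q!r}: {time_query(conn, q) * 1000:.1f} ms")
```

The useful part of a test like this isn't the absolute timings, it's watching how query latency and memory grow as you crank the row count up toward the tens of millions.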