Mitchell reflects as he departs HashiCorp

I have some bittersweet news to share with you all today: I’ve decided to move on from HashiCorp, and I’ll soon no longer be an employee with the company. I recently celebrated 11 years since starting HashiCorp, and as I reflect back on the last decade I couldn’t have asked for a better way to […]

Modern iOS Navigation Patterns · Frank Rausch

This page collects all the familiar navigation patterns for structuring iOS apps, like drill-downs, modals, pyramids, sequences, and more! Think of it as an unofficial bonus chapter for Apple’s Human Interface Guidelines, written by someone who cares deeply about well-crafted user interfaces. Source: Modern iOS Navigation Patterns · Frank Rausch

Elizabeth Laraki on X: “15 years ago, I helped design Google Maps…”

15 years ago, I helped design Google Maps. I still use it every day. Last week, the team dramatically changed the map’s visual design. I don’t love it. It feels colder, less accurate and less human. But more importantly, they missed a key opportunity to simplify and scale.

Mixture of Experts Explained

So, what exactly is a MoE? In the context of transformer models, a MoE consists of two main elements: sparse MoE layers, and a gate network or router. Source: Mixture of Experts Explained
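A minimal NumPy sketch of those two elements (toy sizes, random weights — purely illustrative, not code from the post): a linear router scores the experts for each token, and each token is processed only by its top-k experts, with their outputs mixed by the softmaxed router scores.

```python
import numpy as np

rng = np.random.default_rng(0)
d_model, n_experts, top_k = 8, 4, 2

# Each "expert" is a tiny feed-forward layer (random weights here).
experts = [rng.normal(size=(d_model, d_model)) for _ in range(n_experts)]
# The router (gate network) is a linear layer scoring each expert per token.
router = rng.normal(size=(d_model, n_experts))

def moe_layer(x):
    """x: (n_tokens, d_model). Routes each token to its top-k experts."""
    logits = x @ router                             # (n_tokens, n_experts)
    top = np.argsort(logits, axis=-1)[:, -top_k:]   # indices of top-k experts
    sel = np.take_along_axis(logits, top, axis=-1)  # their logits only
    w = np.exp(sel - sel.max(axis=-1, keepdims=True))
    w /= w.sum(axis=-1, keepdims=True)              # softmax over selected
    out = np.zeros_like(x)
    for t in range(x.shape[0]):                     # sparse: only top-k run
        for j, e in enumerate(top[t]):
            out[t] += w[t, j] * (x[t] @ experts[e])
    return out

y = moe_layer(rng.normal(size=(3, d_model)))
print(y.shape)  # (3, 8)
```

The sparsity is the point: with top_k=2 of 4 experts, half the expert weights never touch a given token.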

Mixtral of experts | Mistral AI | Open source models

This technique increases the number of parameters of a model while controlling cost and latency, as the model only uses a fraction of the total set of parameters per token. Concretely, Mixtral has 46.7B total parameters but only uses 12.9B parameters per token. It, therefore, processes input and generates output at the same speed and […]
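The arithmetic behind that gap can be sketched as follows (Mixtral routes each token to 2 of its 8 experts; the breakdown below is back-of-the-envelope, not the announcement’s own accounting):

```python
# Rough arithmetic for sparse activation (illustrative, not Mixtral's
# exact architecture breakdown).
total_params = 46.7e9   # all parameters stored in memory
active_params = 12.9e9  # parameters actually used per token

n_experts, top_k = 8, 2
# If every parameter lived in the experts, top-2-of-8 routing would
# activate exactly 2/8 of the total:
naive_active = total_params * top_k / n_experts
print(f"{naive_active / 1e9:.1f}B")  # 11.7B

# The real figure (12.9B) is a bit higher because attention layers,
# embeddings, and the routers themselves are shared and always active.
```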

mudler/LocalAI: The free, Open Source OpenAI alternative. Self-hosted, community-driven and local-first. Drop-in replacement for OpenAI running on consumer-grade hardware. No GPU required.

LocalAI is the free, Open Source OpenAI alternative. LocalAI acts as a drop-in replacement REST API compatible with the OpenAI API specification for local inferencing. It allows you to run LLMs and generate images, audio (and more) locally or on-prem with consumer-grade hardware, supporting multiple model families. It does not require a GPU. Source: mudler/LocalAI […]

GPT in 60 Lines of NumPy | Jay Mody

In this post, we’ll implement a GPT from scratch in just 60 lines of numpy. We’ll then load the trained GPT-2 model weights released by OpenAI into our implementation and generate some text. Source: GPT in 60 Lines of NumPy | Jay Mody
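To give a taste of the style, here is single-head causal self-attention in plain NumPy (toy dimensions, random weights — an illustration of the kind of code the post builds up, not its actual implementation):

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))  # numerically stable
    return e / e.sum(axis=-1, keepdims=True)

def causal_self_attention(x, w_qkv, w_out):
    """Single-head causal self-attention; x: (seq_len, d_model)."""
    n, d = x.shape
    q, k, v = np.split(x @ w_qkv, 3, axis=-1)     # project to Q, K, V
    scores = q @ k.T / np.sqrt(d)                 # scaled dot products
    mask = np.triu(np.ones((n, n)), k=1) * -1e10  # block attention to future tokens
    return softmax(scores + mask) @ v @ w_out     # weighted sum of values

rng = np.random.default_rng(0)
x = rng.normal(size=(5, 16))
out = causal_self_attention(x, rng.normal(size=(16, 48)),
                            rng.normal(size=(16, 16)))
print(out.shape)  # (5, 16)
```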

Text Editor Data Structures – invoke::thought()

A rope can be a very attractive data structure for text editors. A rope has a lot of nice properties for editing in particular because it splits the file up into several smaller allocations which allow for very fast amortized insertions or deletions at any point in the file, O(lg n). At first glance, it […]
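A bare-bones rope sketch in Python (illustrative, not from the article): leaves hold short strings, internal nodes store the length of their left subtree as a weight, and a lookup walks the tree by comparing the index against that weight.

```python
class Leaf:
    def __init__(self, text):
        self.text = text
        self.weight = len(text)       # weight of a leaf = its length

class Node:
    def __init__(self, left, right):
        self.left, self.right = left, right
        self.weight = total_len(left) # weight = length of left subtree

def total_len(r):
    return r.weight if isinstance(r, Leaf) else r.weight + total_len(r.right)

def char_at(r, i):
    """Walk down the tree using weights -- O(depth) per lookup."""
    while isinstance(r, Node):
        if i < r.weight:
            r = r.left
        else:
            i, r = i - r.weight, r.right
    return r.text[i]

# "Hello, world" split across three small leaf allocations:
rope = Node(Node(Leaf("Hel"), Leaf("lo, ")), Leaf("world"))
s = "".join(char_at(rope, i) for i in range(total_len(rope)))
print(s)  # Hello, world
```

Insertion and deletion follow the same pattern: split a leaf at the edit point and splice in new nodes, touching only O(lg n) of the tree rather than shifting the whole buffer.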

What does vectorDB with langchain solve? : r/LangChain

As for how it works? Here’s a simple example. Imagine a Cartesian plane with x and y axes. Every point on that grid represents a different word (actually a token). You want to load The Lord of the Rings Trilogy into the grid. So you split the books up into pieces, let’s say 1000 words per chunk. […]
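The chunk-embed-retrieve loop described above can be sketched in Python (the hash-bucket “embedding” here is a stand-in assumption; real vector DBs use a learned embedding model so similar *meanings*, not just identical words, land near each other):

```python
import numpy as np

def chunk(text, size=5):
    """Split a document into fixed-size word chunks (a toy version of the
    ~1000-words-per-chunk split described above)."""
    words = text.split()
    return [" ".join(words[i:i + size]) for i in range(0, len(words), size)]

def embed(text, dim=64):
    """Stand-in embedding: deterministically hash each word into a bucket
    of a vector, then normalize to unit length."""
    v = np.zeros(dim)
    for w in text.lower().split():
        v[sum(ord(c) for c in w) % dim] += 1.0
    return v / np.linalg.norm(v)

doc = ("Frodo carried the ring to Mordor Sam helped Frodo "
       "on the journey The elves sailed west")
chunks = chunk(doc)
index = np.stack([embed(c) for c in chunks])   # one row per chunk

query = embed("who carried the ring")
best = chunks[int(np.argmax(index @ query))]   # dot product = cosine sim on unit vectors
print(best)  # Frodo carried the ring to
```

The query lands nearest the chunk it shares the most vocabulary with; swap the toy `embed` for a real model and add a nearest-neighbor index, and you have the core of what a vector DB gives LangChain.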