GitHub – richards199999/Thinking-Claude: Let your Claude able to think
In your daily tasks, you will find Claude’s inner monologue (thinking process) very fun and interesting. Source: GitHub – richards199999/Thinking-Claude: Let your Claude able to think
The vagus nerve orchestrates the mind-body connection | Hacker News
Human anatomy, at first glance, often seems wrongly engineered. After you’ve worked inside dozens of people, you begin to realize that everybody is unique — and nobody is “textbook” anatomy. Who knows what all this goop even does?! Source: The vagus nerve orchestrates the mind-body connection | Hacker News
The Evolutionary History of Powerful Clans – Zhihu
In the “Biographies of the Money-Makers” chapter of the Records of the Grand Historian, Sima Qian recorded the deeds of many wealthy men, among them Qin Yang, who became the richest man in his region by investing in land. The method these magnates used was simple: lease large tracts of land from the government, hire people to build irrigation works, turn wasteland into fertile fields, then hire the poor to farm it and collect rent. It is essentially the same playbook as today’s real-estate developers; the only difference is that developers borrow money from banks, buy land from the state to build housing, and then sell the homes to ordinary people or lease them to shops. The method is not complicated, but it required connections with the government to obtain land-development rights. The poor they hired were drifters who had lost their own land, called “ke” (retainers) in the old texts. At first these ke were no different from ordinary commoners, but gradually they disappeared from the state’s household registers and became bondsmen attached to the powerful landlord clans. Source: The Evolutionary History of Powerful Clans – Zhihu
Computational lithography: Driving nanometer precision in microchip manufacturing | ASML – YouTube
Our computational lithography software is powered by physical models and algorithms enabled by cutting-edge machine learning and data science techniques. From early design to high-volume manufacturing, it lets us use our unique knowledge about the inner workings of lithography systems to optimize the chip patterning process. Layer by layer, chip by chip, wafer by wafer, […]
A quote from Steven Johnson
The emergence of long context models is, I believe, the single most unappreciated AI development of the past two years, at least among the general public. It radically transforms the utility of these models in terms of actual, practical applications. Source: A quote from Steven Johnson
Copy Exactly! – Wikipedia
Copy Exactly! is a factory strategy model developed by the computer chip manufacturer Intel to build new manufacturing facilities with high-capacity practices already in place.[1] The Copy Exactly! model allows factories that successfully design and manufacture chips to be replicated in locations globally.[2] Source: Copy Exactly! – Wikipedia
Generalizing an LLM from 8k to 1M Context using Qwen-Agent | Qwen
In this blog, we have introduced how to build an agent capable of handling a 1M-token context with an 8k-context model. It then becomes obvious how to synthesize the data once the agent is prepared. For instance, we could enlist volunteers to interact with the agents and record the outcomes to construct the fine-tuning dataset. […]
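The core idea behind such an agent can be sketched in a few lines: split the oversized document into chunks that fit the model’s window, score each chunk’s relevance to the query, and answer using only the top-scoring chunks. This is a minimal illustration, not Qwen-Agent’s actual implementation; `score()` here is a keyword-overlap stand-in for the LLM relevance call the blog describes.

```python
# Chunk-then-search sketch for handling a long context with a
# short-context model. The relevance scorer is a toy keyword
# overlap; a real agent would ask the LLM itself to judge each chunk.

def chunk(text, max_words=100):
    """Split text into chunks of at most max_words words."""
    words = text.split()
    return [" ".join(words[i:i + max_words])
            for i in range(0, len(words), max_words)]

def score(query, passage):
    """Toy relevance score: number of query words present in the passage."""
    q = set(query.lower().split())
    return len(q & set(passage.lower().split()))

def select_context(query, text, max_words=100, top_k=2):
    """Keep only the top_k chunks most relevant to the query."""
    ranked = sorted(chunk(text, max_words),
                    key=lambda c: score(query, c), reverse=True)
    return ranked[:top_k]
```

The selected chunks are then concatenated and passed to the 8k-context model as its entire prompt, so the model never sees the full 1M tokens at once.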
The Crash and Rebirth of a Six-Year-Old Open Source Project – DIYgod
The project was developed six years ago, and many trendy Node.js technologies and dependencies that were touted as the “Next Generation” at that time have become outdated. Many popular new technologies nowadays cannot be applied, such as JSX, TypeScript, Serverless, etc. Its architecture is also very unreasonable, with information about each route scattered in multiple […]
Anthropic’s Prompt Engineering Interactive Tutorial
Claude is sensitive to patterns (in its early years, before finetuning, it was a raw text-prediction tool), and it’s more likely to make mistakes when you make mistakes, smarter when you sound smart, sillier when you sound silly, and so on. Source: Anthropic’s Prompt Engineering Interactive Tutorial
Bytes Are All You Need: Transformers Operating Directly On File Bytes – Apple Machine Learning Research
our model requires absolutely no modality-specific processing at inference time, and uses an order of magnitude fewer parameters at equivalent accuracy on ImageNet. We demonstrate that the same ByteFormer architecture can perform audio classification without modifications or modality-specific preprocessing Source: Bytes Are All You Need: Transformers Operating Directly On File Bytes – Apple Machine Learning […]
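The bytes-as-tokens idea above can be sketched very simply: any file, regardless of modality, is read as a sequence of integers in [0, 255] and looked up in a single 256-entry embedding table — no image decoder, no audio front-end. This is a toy illustration of the input pipeline only, not the ByteFormer architecture itself; the random embedding table stands in for a learned one.

```python
# Byte-level tokenization sketch: every possible byte value is a
# token id, so one 256-row embedding table covers images, audio,
# or any other file format identically.
import random

VOCAB = 256   # one token id per possible byte value
DIM = 4       # toy embedding width

random.seed(0)
EMBED = [[random.random() for _ in range(DIM)] for _ in range(VOCAB)]

def embed_bytes(data: bytes):
    """Map raw file bytes to their embedding vectors (the model input)."""
    return [EMBED[b] for b in data]

tokens = embed_bytes(b"\x89PNG")  # works on any byte string, any modality
```

Because the vocabulary is fixed at 256 regardless of modality, the same front end serves image and audio classification without modality-specific preprocessing, which is the property the excerpt highlights.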