SVRGN Weekly Digest #43 💫
Наступ Машин 2.0, Art/engineering, manufacturing wars and ELO ratings
This morning I gave a short address to the participants of Наступ Машин 2.0 (Machine Offensive 2.0), an ongoing hackathon in Kyiv focused on defense against some of the latest Russian tactics, for example the cheap guided aerial bombs that are used indiscriminately. Here are my notes for the talk.
🤼 People
Adrian Cipriani - artist/engineer at Cipriani Studio
I met Adrian through the brilliant research community series hosted by Alexandria and Raphael Volpert. This week was about simulating physics, but he’s run events on in-space manufacturing and robotic transformers before. The crowd was made up of young, hungry students and builders who want to spend an evening talking about how neural operators work, how V-Sim parallelizes algorithms that are very hard to parallelize, and more. Adrian, as an artist and engineer, is running a “Renaissance Weekend” in January 2025 in the Alps for select interesting people from either or both professions (artists/engineers). There are still one or two spots open, apply here.
💼 Portfolio jobs board
This week’s selection of opportunities from the portfolio:
🚀 Companies (hackathon projects from Paris)
Team 1 - neuromorphic cameras for Shahed interception
Status: there’s a company
Source: EDTH
Founders: [redacted] but go sleuth around
Why it’s cool:
The winning team used event cameras, aka neuromorphic cameras, to detect moving objects in the sky and track them with CV. Event cameras don’t work like normal digital camera sensors: instead of capturing full frames, each pixel fires on a change in incoming photons. This makes any movement easily detectable, even in very bad lighting conditions. One defense use case is tracking incoming drones, e.g., Shaheds (still plaguing Ukraine, though some were spoofed into turning around and flying into Belarus).
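Not their pipeline, but a minimal sketch of the idea, assuming the camera streams (x, y, timestamp, polarity) events: accumulate the most recent events into a per-pixel count and threshold it. Moving objects light up while the static background stays dark, and a tracker or detector can then run on the resulting mask.

```python
import numpy as np

def events_to_motion_mask(events, width, height, window_us=10_000, min_events=3):
    """Accumulate recent events into a per-pixel count and threshold it.

    `events` is assumed to be an array of (x, y, t_us, polarity) rows, as
    typically streamed by an event (neuromorphic) camera. Pixels only fire
    when brightness changes, so a simple count over a short time window
    already highlights moving objects, even in poor lighting.
    """
    t_latest = events[:, 2].max()
    recent = events[events[:, 2] >= t_latest - window_us]

    counts = np.zeros((height, width), dtype=np.int32)
    np.add.at(counts, (recent[:, 1].astype(int), recent[:, 0].astype(int)), 1)
    return counts >= min_events  # boolean motion mask

# Toy usage: sparse noise events plus a dense cluster standing in for a drone.
rng = np.random.default_rng(0)
noise = np.column_stack([rng.integers(0, 640, 200), rng.integers(0, 480, 200),
                         rng.integers(0, 10_000, 200), rng.integers(0, 2, 200)])
blob = np.column_stack([rng.integers(300, 305, 100), rng.integers(200, 205, 100),
                        rng.integers(9_000, 10_000, 100), rng.integers(0, 2, 100)])
mask = events_to_motion_mask(np.vstack([noise, blob]), 640, 480)
print("pixels flagged as moving:", mask.sum())
```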
Team 16 - Magnetic navigation for underwater applications
Status: loose band of geniuses
Source: EDTH
Founders: loose band of geniuses
Why it’s cool:
Water is very annoying: it gets in everywhere and doesn’t let through most of the useful frequencies of the EM spectrum. That makes GNSS and communications very hard underwater. This team mapped the hackathon venue using the magnetometer in their phones and then built a geolocation algorithm on top of that map. They also figured out a way to scale the approach to the Black Sea.
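Their algorithm isn’t public, so here is only a toy sketch of the map-matching principle it relies on, assuming a pre-surveyed grid of magnetic field magnitudes and a short magnetometer profile recorded while moving along one axis: slide the profile over the map and take the best fit as the position estimate.

```python
import numpy as np

def localize_by_magnetic_profile(mag_map, readings):
    """Match a 1-D magnetometer profile against rows of a magnetic anomaly map.

    `mag_map` is an (H, W) grid of field magnitudes (in nT) surveyed in
    advance; `readings` is a short sequence measured while moving roughly
    along the x-axis. Returns the (row, col) with the lowest sum of squared
    differences. A real system would use a particle filter and full 3-axis
    data, but the matching idea is the same.
    """
    n = len(readings)
    best, best_score = None, np.inf
    for r in range(mag_map.shape[0]):
        for c in range(mag_map.shape[1] - n + 1):
            score = np.sum((mag_map[r, c:c + n] - readings) ** 2)
            if score < best_score:
                best, best_score = (r, c), score
    return best

# Toy usage: fabricate a bumpy magnetic map and "measure" a slice of it with noise.
rng = np.random.default_rng(1)
mag_map = rng.normal(50_000, 200, size=(50, 80))     # ambient field ~50 µT, in nT
true_r, true_c = 23, 31
readings = mag_map[true_r, true_c:true_c + 12] + rng.normal(0, 20, 12)
print(localize_by_magnetic_profile(mag_map, readings))  # recovers (23, 31)
```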
Team 23 - Adversarial ML to counter autonomy
Status: loose band of geniuses
Source: EDTH
Founders: loose band of geniuses, at least one founder
Why it’s cool:
Everything is jammed, so drones need autonomy. Surveillance towers need autonomy to sift through all the data, and it’s likely something along the lines of YOLO running in the background. Even if it’s not, there are techniques for tricking such algorithms, aka adversarial machine learning. This team managed to use a Carlini-Wagner adversarial attack pattern to make a drone-detection model classify a drone as a bird.
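For illustration only, here is a minimal sketch in the Carlini-Wagner spirit: minimize an L2 penalty on the perturbation plus a margin loss that pushes the classifier toward the “bird” class. The model below is a placeholder CNN, not the team’s detector, and the loss omits the tanh change of variables from the original C&W formulation.

```python
import torch
import torch.nn as nn

# Placeholder "drone vs. bird" classifier, standing in for the attacked detector.
model = nn.Sequential(nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
                      nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(8, 2))
model.eval()

def cw_style_attack(x, target_class, c=1.0, kappa=0.0, steps=200, lr=0.01):
    """Find a small perturbation that pushes `x` toward `target_class`.

    Minimizes ||delta||_2^2 + c * max(logit_other - logit_target, -kappa),
    the margin-style objective used by Carlini & Wagner (without their tanh
    change of variables, for brevity). Returns the adversarial image.
    """
    delta = torch.zeros_like(x, requires_grad=True)
    opt = torch.optim.Adam([delta], lr=lr)
    for _ in range(steps):
        logits = model((x + delta).clamp(0, 1))
        other, target = logits[0, 1 - target_class], logits[0, target_class]
        loss = (delta ** 2).sum() + c * torch.clamp(other - target, min=-kappa)
        opt.zero_grad()
        loss.backward()
        opt.step()
    return (x + delta).detach().clamp(0, 1)

x = torch.rand(1, 3, 64, 64)               # stand-in "drone" image
adv = cw_style_attack(x, target_class=1)   # class 1 = "bird" in this toy setup
print(model(x).argmax(1).item(), "->", model(adv).argmax(1).item())
print("L2 size of the perturbation:", (adv - x).norm().item())
```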
💡 Ideas & Science
Control the metal, control the world
An older read, but more relevant than ever. Even small quantities of certain metals can have critical implications for the productive capacity of industrial nations. As an example, China chose to ramp up production of less technologically advanced batteries (LFPs are safer but lower capacity), thereby undercutting any Western producers on cost, while maintaining control over the manganese used in NMC batteries. As another confirmation of de-globalization and trade wars, China chose to ban exports of critical minerals to the US this week.
Noah Smith strikes again. This is not a surprise to anyone, but the framing is extremely good. Mostly sad numbers on how we (democracies) have lost to state-driven mass manufacturing in China. Germany did great for a while selling high-end machinery to China, until China stole most of the know-how and designs and can now outproduce Germany by orders of magnitude.
There’s a great paper by Andy Jones, Scaling Scaling Laws with Board Games, which shows a super neat graph of the trade-off between test-time compute and train-time compute needed to reach a certain ELO rating. ELO is kind of like the TrueSkill rating system, but for the two-player setting. TrueSkill is like the Glicko rating system, which is kind of like the Glicko-2 rating system… Sorry. ELO is a number that tells you how good someone is at a game. And the relationship between test-time and train-time compute is log-linear! I.e., to reach a certain ELO rating you can either spend 10x more train-time compute or roughly 15x more test-time compute. Hence, o1 by OpenAI is basically a logical arbitrage of that: instead of inventing new architectures or investing more training compute, it makes sense to just apply more inference compute.
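For reference, the ELO model behind that number is just a logistic win probability plus an incremental update after each game; a quick sketch:

```python
def elo_expected_score(r_a, r_b):
    """Probability that player A beats player B under the Elo model."""
    return 1.0 / (1.0 + 10 ** ((r_b - r_a) / 400))

def elo_update(r_a, r_b, score_a, k=32):
    """Update A's rating after one game (score_a: 1 win, 0.5 draw, 0 loss)."""
    return r_a + k * (score_a - elo_expected_score(r_a, r_b))

print(round(elo_expected_score(1600, 1500), 2))  # a 100-point gap ≈ 64% expected score
print(round(elo_update(1500, 1600, 1.0), 1))     # underdog wins and gains ~20 points

# Andy Jones' trade-off, roughly: along a curve of constant ELO, a 10x cut in
# train-time compute can be bought back with ~15x more test-time compute, i.e.
# log(train compute) and log(test compute) trade off linearly.
```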