Inverting the cost curve of air defense using a swarm of cheap autonomous interceptors.
How I got here
I've been following active conflicts through open-source intelligence since I was 14. Not casually — seriously. About 90 Telegram channels: frontline journalists, OSINT analysts, AIS vessel trackers, satellite imagery accounts, intercepted radio feeds. I started doing this because I realized that most of what people know about active conflicts arrives 48 hours late, filtered through outlets that compress a complex situation into a headline.
To make sense of all of it, I built NEXUS — an OSINT correlation engine. It pulls from 35+ real-time feeds, clusters events geographically and temporally across six dimensions, and lets me watch conflicts unfold in near-real-time. NEXUS is how I actually understand what's happening in Ukraine, Sudan, the Red Sea, Gaza. It's a personal tool, not a product, but it works.
After a year of watching through NEXUS, a pattern became impossible to ignore. It wasn't about any particular conflict. It was about the economics underneath all of them.
The math is broken
“Armies are firing million-dollar missiles at twenty-dollar drones. The asymmetry doesn’t favor the defender. It never will, at current price points.”
A Shahed-136 costs around $20,000 to manufacture. Iran produces them by the thousands. Every country facing mass drone attacks — Ukraine, Israel, Saudi Arabia — is responding with interceptors that cost 10 to 200 times more per shot.
Iron Dome interceptors: $40,000–$100,000 each. PAC-3 missiles: $4,000,000 per round. Each time a Shahed-136 is destroyed, the defender has spent more than the attacker did to produce the threat in the first place.
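The exchange ratio is simple arithmetic, but it is worth making explicit. A minimal sketch using the figures quoted above (the one-interceptor-per-kill assumption is mine, for illustration; real engagements often fire more than one):

```python
# Cost-exchange ratio: defender dollars spent per threat destroyed,
# expressed as a multiple of the threat's own cost.
# Assumes one interceptor per kill — an optimistic simplification.
def exchange_ratio(interceptor_cost: float, threat_cost: float) -> float:
    return interceptor_cost / threat_cost

SHAHED_COST = 20_000  # USD, approximate manufacturing cost

tamir = exchange_ratio(40_000, SHAHED_COST)    # low end of the Iron Dome range -> 2.0
pac3 = exchange_ratio(4_000_000, SHAHED_COST)  # PAC-3 -> 200.0
```

Even at the cheapest end of the quoted range, the defender pays a multiple of the attacker's cost on every shot.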
The attacker can sustain this indefinitely. A state actor producing drones at scale can absorb these loss rates easily. The defender cannot. And the problem compounds: as drone swarms grow larger and more sophisticated, the cost disparity gets worse, not better.
This is not a technology failure. No one has failed to build a capable interceptor. The failures are economic and structural. The question nobody seemed to be seriously asking was: what would a fundamentally cheaper response look like?
Not cheaper at the margins. Cheaper by an order of magnitude.
What the data showed me
NEXUS didn't tell me to build AEGIS. It showed me the scale of the problem clearly enough that I wanted to. There's a difference.
Watch hundreds of Shahed intercept events accumulate in the NEXUS feed — time, location, interceptor type, estimated cost per shot — and the numbers tell a story that no individual report captures. The aggregate burn rate of interceptor stockpiles across conflict zones, measured against drone production rates on the other side, produces a trajectory that only goes one way.
I started thinking about what an honest answer to this problem would look like. Not a clever modification to an existing system. Something built from the problem up, with cost as the primary design constraint from day one.
“If the threat costs $20,000, the defense needs to cost $4,000. Not $400,000. Not even $40,000.”
The insight was simple, even if the engineering isn't: use swarms. Five hundred cheap autonomous units covering the same airspace as one expensive missile battery. No single point of failure. No reload delay. Replaceable at a fraction of the cost.
I spent about three months working through whether this was actually feasible before writing a line of code. The physics checks out. The economics check out. The software engineering is hard but tractable.
The design goals
I had specific things I wanted to be true about this project before I started. Not aspirations — constraints. Hard ones.
- Physics-real constants. Every number in the codebase had to come from a verifiable source: ISA atmosphere model, ITU-R rain attenuation, real motor specs from manufacturer datasheets, MIL-STD-1522A structural factors. No made-up specs.
- A pipeline that actually runs. Not a proof-of-concept sketch. A real 50 Hz control loop, with real sensor fusion, real Kalman filtering, real formation physics, running in real time on standard hardware.
- Honest safety architecture. The weapon had to be locked by default. Hardware failure means LOCKED, not ARMED. Human veto preserved at all times. The ProximityLock had to actually prevent fratricide, not just check a box.
- Provable Byzantine fault tolerance. Drones get jammed and spoofed in real conflict. The system had to keep working with up to 1/3 of sensors compromised, using the same mathematics Lamport proved in 1982 — not a workaround.
- Tractable economics. The bill of materials had to use real parts with real prices from real suppliers (DigiKey, Mouser). The $4,200 unit cost isn't a guess.
What this isn’t
There are several things I deliberately refused to do, because they would have made the project dishonest.
- I didn't make up physics. The V_patrol figure went through multiple corrections — it's 99 km/h, not 114, not 155. The self-test in oc_types.py rejects any value outside the valid range at import time.
- I didn't hide what doesn't work. Wave 2 of the Nevada simulation (three Lancet-3s in formation) took multiple debugging cycles. The cluster explosion bug, the Q matrix structure, the ASSOC_RADIUS problem — all documented, all fixed, all with the original broken version still visible in the commit history.
- I didn't build a toy. A toy simulation uses fake numbers and makes everything intercept. The real simulation uses actual Shahed-136 kinematics, real Doppler noise models, real energy budgets. If a drone can't afford the intercept, it RTBs. That's what the code does.
- I didn't pretend the hardware exists. Tessera MK.II is a design specification and a bill of materials. The CFRP airframe, the Cesaroni booster, the 60GHz mesh radio — none of it has been manufactured. I'm 16 and I'm in Morocco. I'm honest about that.
“The most important engineering decision I made was to be honest about what doesn’t exist yet. A lie in a spec sheet is harder to fix than a bug in code.”
The actual system
AEGIS is a control pipeline. It runs at 50 Hz. One full tick for 50 drones takes 6 milliseconds on a laptop. The architecture has four modules:
M1 — SpectralFusion. Aggregates sensor readings from all drones, filters out Byzantine nodes using Median Absolute Deviation (Lamport 1982 BFT bound: <1/3 of swarm), detects lures using five physical rules derived from real infrared signatures. A flare is above 700 K. A cold balloon has contrast below 5 K with pixel area above 10. Jettisoned lures decelerate above 15 m/s². These aren't heuristics — they come from the physics of what different objects look like in LWIR.
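The MAD filter at the heart of M1 fits in a few lines. This is a generic illustration, not the project's code: it assumes each drone reports one scalar measurement (say, estimated target range), and the `k=3.0` cutoff is a textbook convention, not AEGIS's actual threshold:

```python
import numpy as np

def filter_byzantine(readings: np.ndarray, k: float = 3.0) -> np.ndarray:
    """Reject readings whose deviation from the swarm median exceeds
    k times the Median Absolute Deviation (MAD).

    The median and MAD are robust as long as fewer than half the
    readings are corrupted; the <1/3 Byzantine bound cited above
    applies to the consensus layer, not this statistical pre-filter.
    """
    median = np.median(readings)
    mad = np.median(np.abs(readings - median))
    if mad == 0:  # all surviving readings identical
        return readings[readings == median]
    return readings[np.abs(readings - median) <= k * mad]

# Four honest drones agree near 1000 m; two jammed nodes report garbage.
readings = np.array([1000.0, 1001.0, 999.0, 1002.0, 5000.0, -3000.0])
clean = filter_byzantine(readings)  # keeps only the four honest readings
```

Because the median ignores extreme values entirely, a spoofed node cannot drag the estimate no matter how large its lie is — unlike a mean-based filter, where one reading of 5,000 m shifts everything.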
M2 — MultiTargetUKF. An Unscented Kalman Filter tracking position, velocity, and acceleration in all three axes. The intercept predictor scans 12 seconds forward in 0.1-second windows to find the booster ignition timestamp. Before the Cholesky optimization, this took 37ms. After, it takes 0.9ms. The bug was running a full decomposition every scan step instead of propagating a cheap linear estimate.
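The forward scan itself is easy to sketch. Everything below is simplified for illustration: straight-line target propagation (the "cheap linear estimate" mentioned above), a constant sprint speed, and hypothetical names and values — not the UKF-backed predictor:

```python
import numpy as np

def find_ignition_time(target_pos, target_vel, drone_pos,
                       sprint_speed=60.0, horizon=12.0, dt=0.1):
    """Scan the 12-second horizon in 0.1 s windows and return the
    earliest time at which the interceptor can reach the target's
    predicted position. Constant-velocity propagation per step, so
    each window costs one vector add and one norm — no decomposition.
    Returns None if no window within the horizon is feasible."""
    for t in np.arange(dt, horizon + dt, dt):
        predicted = target_pos + target_vel * t    # linear propagation
        distance = np.linalg.norm(predicted - drone_pos)
        if distance <= sprint_speed * t:           # reachable by time t
            return round(float(t), 1)
    return None

# Incoming target 1 km out, descending; interceptor at the origin.
t_ignite = find_ignition_time(np.array([1000.0, 0.0, 500.0]),
                              np.array([-50.0, 0.0, -10.0]),
                              np.zeros(3))
```

The speedup described above comes from exactly this structure: the expensive covariance machinery runs once per tick, while the per-window scan stays down at vector arithmetic.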
M3 — ElasticNet. Spring-physics formation management. Each drone computes forces from its 6 nearest neighbors only — O(6N) instead of O(N²). At 500 drones, that's 83x fewer operations. The swarm holds a hexagonal patrol grid, contracts into an engagement cone when a target is confirmed, and self-heals if nodes drop out.
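The neighbor-limited spring model can be sketched as follows. The constants (`rest_length=22.0` m, matching the formation spacing mentioned later, and `k_spring=0.8`) are placeholders, and the brute-force neighbor search is shown for clarity — at 500 drones a spatial index would replace it:

```python
import numpy as np

def formation_forces(positions: np.ndarray, rest_length: float = 22.0,
                     k_spring: float = 0.8, n_neighbors: int = 6) -> np.ndarray:
    """Hooke's-law forces from each drone's n nearest neighbors only.

    positions: (N, 3) float array. Returns an (N, 3) force array.
    Force evaluation is O(N * n_neighbors); the argsort neighbor
    search here is O(N log N) per drone and exists only to keep the
    sketch self-contained.
    """
    n = len(positions)
    forces = np.zeros_like(positions)
    for i in range(n):
        deltas = positions - positions[i]                  # vectors to all drones
        dists = np.linalg.norm(deltas, axis=1)
        neighbors = np.argsort(dists)[1:n_neighbors + 1]   # skip self (index 0)
        for j in neighbors:
            direction = deltas[j] / dists[j]
            # stretched past rest_length -> attract; compressed -> repel
            forces[i] += k_spring * (dists[j] - rest_length) * direction
    return forces

# Two drones 30 m apart: each is pulled toward the other until
# the spacing relaxes back to the 22 m rest length.
f = formation_forces(np.array([[0.0, 0.0, 0.0], [30.0, 0.0, 0.0]]))
```

Capping each drone at six neighbors is what makes the grid self-healing: when a node drops out, only its immediate neighbors recompute their force balance, and the hexagonal lattice closes the gap locally.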
M4 — EnergyBudget. Three inviolable reserves per drone. 10 Wh for parachute deployment. 40 Wh for return to base. 60 Wh for one combat sprint. The threat worth score combines mass, velocity, and collateral risk into a value between 0 and 100. Below 60: RTB without engaging. 60–80: authorize automatically in sim mode, request Colonel approval in live mode. Above 80: explicit human authorization required regardless.
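The tiered authorization logic reads naturally as a small decision function. The threshold values are the ones stated above; the function and enum names are hypothetical:

```python
from enum import Enum

class Decision(Enum):
    RTB = "return to base without engaging"
    AUTO = "authorize in sim mode / request Colonel approval in live mode"
    HUMAN = "explicit human authorization required"

def engagement_gate(threat_score: float, remaining_wh: float,
                    sprint_cost_wh: float = 60.0,
                    reserve_wh: float = 50.0) -> Decision:
    """Apply the tiers described above: below 60 -> RTB, 60-80 -> the
    auto/approval tier, above 80 -> human authorization regardless.
    A drone that cannot afford one combat sprint on top of its
    parachute + RTB reserves (10 + 40 Wh) returns to base no matter
    how valuable the target is — the energy budget is inviolable."""
    if remaining_wh < sprint_cost_wh + reserve_wh:
        return Decision.RTB
    if threat_score < 60:
        return Decision.RTB
    if threat_score <= 80:
        return Decision.AUTO
    return Decision.HUMAN
```

Putting the energy check before the threat check is the design choice that matters: a high-value target can never talk a drone out of keeping its parachute and return reserves.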
The honest gap list
There are things missing from this project. Some I know how to fix. Others I haven't figured out yet.
- Physical hardware. Tessera MK.II doesn't exist. The BOM is real, the specs are physics-derived, but no unit has been built or tested. This is the biggest gap.
- Wave 2 convergence. The three Lancet-3 scenario in the Nevada simulation is close but not fully converged. The 22m formation spacing makes track association at 18m ASSOC_RADIUS marginal. I know what the fix is; I haven't implemented the full multi-hypothesis tracker yet.
- Real sensor integration. The sensor fusion module uses simulated detections with Gaussian noise. Real IMX678, FLIR Lepton 3.5, and Inxpect Doppler radar have specific noise models, latencies, and failure modes that aren't fully modeled yet.
- 60GHz mesh implementation. The networking protocol exists on paper. The Sivers IQ EVK06002 integration isn't coded; only simulated latency and packet loss are modeled.
- Hardware-in-the-loop testing. Everything runs on a simulated clock. Real-time OS constraints, interrupt latency, and Jetson Orin power management haven't been validated.
- Regulatory and legal framework. Autonomous weapons systems exist in a complicated legal space. I've thought about it; I haven't solved it. The Human Loop Gate is a partial answer — not a complete one.
Where this actually is
This is a serious software project built by a 16-year-old in Morocco with no institutional affiliation, no funding, and no hardware.
The things that are real: the physics, the code, the test suite, the benchmark results, the mathematical foundations. All of it is verifiable. The self-test in oc_types.py validates every physical constant at import time against ISA atmosphere and MIL-STD-1522A bounds. If a number is wrong, the system tells you immediately.
The things that are not real: the hardware, the full operational scenario, the regulatory pathway. I'm honest about this in the README and in every public description of the project.
What this project demonstrates is that the control architecture for a swarm-based air defense system is tractable with current algorithms and commercial hardware. The software engineering is solved. The physics is validated. The economics work. What's needed next is fabrication, testing, and institutional support — none of which I can provide from my position right now.
“The math is real. The code runs. The hardware doesn’t exist yet. I think that’s worth being honest about.”
I'm applying to EPFL (2027), ETH Zürich (2027), and MIT (2027). AEGIS is the primary project in my portfolio. I'm not trying to impress anyone with it — I'm trying to show that I understand a real problem and built a real attempt at solving it. Whether the attempt is good enough is for other people to judge.