
Initial release of a fully local, zero-API-key multi-agent system for Apple Silicon. Two LLM agents (Coder and Sheriff) collaborate through a shared `mlx_lm.server` instance to write, test, and debug Python scripts autonomously.

**Key features:**
- Single-server, two-persona architecture optimized for 16 GB Apple Silicon Macs
- Self-correcting feedback loop with bounded autonomy (attempt cap, stagnation detection, token budget)
- Orchestrator-driven code execution with markdown extraction fallback
- Three showcase scenarios: calendar generation, CSV analytics pipeline, gradient descent from scratch
- CLI via Typer with automatic server lifecycle management

**Tech stack:** MLX, mlx-lm, Agno, Typer, Rich, Pydantic

**Model:** Qwen2.5-Coder-7B-Instruct-4bit (auto-downloaded, ~4 GB)
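The self-correcting loop with bounded autonomy can be sketched roughly as follows. This is an illustrative outline, not the project's actual orchestrator: `generate` and `run_tests` are hypothetical stand-ins for the Coder and Sheriff agents, and the stop conditions (attempt cap, stagnation detection, token budget) mirror the bounds listed above.

```python
def bounded_loop(generate, run_tests, max_attempts=5, token_budget=20_000):
    """Self-correcting loop that stops on success, attempt cap,
    stagnation (Coder repeats an identical candidate), or an
    exhausted token budget. Names and defaults are illustrative."""
    seen = set()          # candidates already tried, for stagnation detection
    tokens_used = 0
    feedback = ""
    for attempt in range(1, max_attempts + 1):
        code, tokens = generate(feedback)   # Coder produces a candidate script
        tokens_used += tokens
        if code in seen:                    # stagnation: no new candidate
            return "stagnated", attempt
        seen.add(code)
        ok, feedback = run_tests(code)      # Sheriff verifies and critiques
        if ok:
            return "success", attempt
        if tokens_used >= token_budget:     # budget exhausted mid-run
            return "budget_exhausted", attempt
    return "max_attempts", max_attempts
```

Returning a status string plus the attempt count keeps the orchestrator's exit reason observable, which is useful when tuning the bounds.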
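The markdown extraction fallback mentioned above can be sketched in a few lines: pull the first fenced code block out of the model's reply, and fall back to the raw text when no fence is present. The function name and regex are assumptions for illustration, not the project's actual code.

```python
import re

def extract_code(reply: str) -> str:
    """Return the first fenced code block in an LLM reply,
    or the stripped raw reply when no fence is found."""
    match = re.search(r"```(?:python)?\s*\n(.*?)```", reply, re.DOTALL)
    return match.group(1).strip() if match else reply.strip()
```

The fallback matters with small quantized models, which occasionally emit bare code without fences.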
