Generative AI & LLMs
Explore Dev Corner articles and subtopics in Generative AI & LLMs. This hub page collects practical builds, tools, and engineering guides for sovereign local AI.
Subtopics
LLM Foundations
How large language models work: tokens, context windows, attention, sampling parameters, and the sovereignty distinction between open-weight models and proprietary API-dependent ones.
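Token counts and context windows come up in almost every build in this subtopic. As a rough illustration, here is a stdlib-only sketch of context-window accounting; the 4-characters-per-token ratio is a common rule of thumb for English prose, not a real tokenizer, and the window and reservation sizes are hypothetical defaults. Exact counts require the model's own tokenizer.

```python
def estimate_tokens(text: str) -> int:
    """Crude token estimate: roughly 4 characters per token for English prose."""
    return max(1, len(text) // 4)


def fits_context(prompt: str, context_window: int = 8192,
                 reserved_for_output: int = 1024) -> bool:
    """Check whether a prompt leaves room for the reply inside the window."""
    return estimate_tokens(prompt) + reserved_for_output <= context_window


prompt = "Explain attention in transformers. " * 50
room = fits_context(prompt)
```

In practice you would swap `estimate_tokens` for the deployed model's tokenizer and keep the same budget check.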
Prompt Engineering
Craft effective prompts for sovereign LLM deployments: chain-of-thought, few-shot, system prompt design, prompt versioning, and structured output with JSON mode and Pydantic.
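The structured-output pattern mentioned above is: constrain the model to emit JSON, then validate it before use. In production Pydantic's `BaseModel.model_validate_json` handles the validation step; this stdlib-only sketch shows the same validate-then-use shape with a hypothetical `Answer` schema.

```python
import json
from dataclasses import dataclass


@dataclass
class Answer:
    title: str
    confidence: float


def parse_answer(raw: str) -> Answer:
    """Parse and validate a model's JSON reply; raise on schema violations."""
    data = json.loads(raw)
    if not isinstance(data.get("title"), str):
        raise ValueError("title must be a string")
    if not isinstance(data.get("confidence"), (int, float)):
        raise ValueError("confidence must be numeric")
    return Answer(title=data["title"], confidence=float(data["confidence"]))


reply = '{"title": "Local RAG pipeline", "confidence": 0.9}'
ans = parse_answer(reply)
```

Rejecting malformed output at this boundary is what makes prompt versioning safe: a schema break surfaces as an exception, not as silent downstream corruption.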
Open-Weight Models
Sovereign open-weight model selection and deployment: Llama 4, Mistral, Qwen 3, Gemma 3, and Phi-4. Covers licensing analysis, quantisation formats (GGUF/AWQ), and model capability benchmarks.
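A quick way to reason about quantisation formats is a back-of-envelope VRAM estimate. The bits-per-parameter figures below are approximations (GGUF Q4_K_M averages around 4.5 bits per weight, Q8_0 around 8.5), and real deployments add KV cache, context buffers, and metadata on top of the weights.

```python
# Approximate average bits per weight for common formats (rule-of-thumb values).
BITS_PER_PARAM = {"fp16": 16.0, "q8_0": 8.5, "q4_k_m": 4.5}


def weight_gib(params_billion: float, quant: str) -> float:
    """Approximate weight size in GiB for a parameter count and quant format."""
    bits = BITS_PER_PARAM[quant]
    return params_billion * 1e9 * bits / 8 / 2**30


# Weights for a 7B model at Q4_K_M come to roughly 3.7 GiB,
# which is why such models run on consumer 8 GB GPUs.
size = weight_gib(7, "q4_k_m")
```

The same arithmetic explains why an fp16 70B model (~130 GiB of weights) is out of reach for single-GPU sovereign setups without aggressive quantisation.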
Multimodal Builds
Build sovereign multimodal AI pipelines: vision-language models (LLaVA, Qwen2-VL, Moondream), local image+text reasoning, audio transcription with Whisper, and zero-cloud multimodal stacks.
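Local vision-language servers that expose OpenAI-compatible endpoints generally accept images as base64 data URLs inside the message content list. This sketch builds one such user message; the field names follow the OpenAI Chat Completions format, and the PNG bytes are a placeholder, so check your server's documentation for exactly which content types it supports.

```python
import base64


def image_message(image_bytes: bytes, question: str) -> dict:
    """Build one user message pairing an image with a text question."""
    b64 = base64.b64encode(image_bytes).decode("ascii")
    return {
        "role": "user",
        "content": [
            {"type": "text", "text": question},
            {"type": "image_url",
             "image_url": {"url": f"data:image/png;base64,{b64}"}},
        ],
    }


# Placeholder bytes; in practice read a real image file.
msg = image_message(b"\x89PNG fake bytes", "What is in this image?")
```

Because the payload stays in this standard shape, the same client code can target LLaVA, Qwen2-VL, or another local backend without changes.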
LLM APIs & SDKs
Integrate LLMs into sovereign applications via APIs and SDKs: OpenAI-compatible APIs (Ollama, vLLM), the Anthropic Python SDK, streaming responses, token counting, and API cost control.
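OpenAI-compatible servers such as Ollama and vLLM stream completions as server-sent events: lines of `data: {json}` terminated by `data: [DONE]`. This stdlib sketch reassembles the text from such a stream; the field paths follow the OpenAI Chat Completions chunk format, and the sample stream is hand-written for illustration rather than captured from a real server.

```python
import json


def collect_stream(lines):
    """Concatenate content deltas from an OpenAI-style SSE stream."""
    out = []
    for line in lines:
        if not line.startswith("data: "):
            continue  # skip blank lines and comments used as keep-alives
        payload = line[len("data: "):]
        if payload == "[DONE]":
            break  # sentinel marking the end of the stream
        chunk = json.loads(payload)
        delta = chunk["choices"][0]["delta"].get("content", "")
        out.append(delta)
    return "".join(out)


stream = [
    'data: {"choices": [{"delta": {"content": "Hel"}}]}',
    "",
    'data: {"choices": [{"delta": {"content": "lo"}}]}',
    "data: [DONE]",
]
text = collect_stream(stream)
```

The official SDKs hide this parsing behind an iterator, but seeing the wire format makes it clear why the same client works against any OpenAI-compatible backend.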