// SKUNKWORKS_RESEARCH_HUB

Raw data, experimental workflows, and hardware telemetry from the ReactAI R&D Lab.

2026-01-28 · STRATEGIC_INSIGHT

AI Adoption in South Africa: Designing for the Grid

In 2026, the biggest constraint for AI in South Africa isn’t compute—it’s kilowatts. While the global North talks about parameter counts, South African founders are talking about Inverter-to-GPU ratios.

The Adoption Gap

94% of SA businesses want AI integration, but "Proof-of-Concept Fatigue" is real. Companies are tired of black-box APIs that send sensitive data to the US and fail during load-shedding.

The ReactAI Take

Adoption in SA requires Sovereign Intelligence. By hosting our "Neural Core" at Teraco and running our R&D hub from a 100% solar-redundant Skunkworks lab, we insulate ourselves from grid instability. For a South African enterprise, a "Smart" system is only smart if it's Online. We aren't just building AI; we are building infrastructure that survives the local reality.

2026-01-24 · EXPERIMENTAL

Image Generation on the Edge: Optimizing FLUX & SDXL

You don't need an H100 cluster to produce world-class synthetic media. In this log, we break down how we run FLUX.1 [dev] on 8GB and 12GB VRAM cards without the "Out of Memory" (OOM) death loop.

The Stack:

  • Quantization is King: We’ve shifted entirely to 4-bit and 8-bit GGUF/NF4 formats. This reduces memory footprint by ~70% with negligible loss in prompt adherence.
  • T5 Offloading: By loading the T5 text encoder into System RAM instead of VRAM, we save crucial gigabytes for the actual diffusion process.
  • The "Clean Cache" Routine: Our custom ComfyUI nodes now trigger torch.cuda.empty_cache() after every 5th generation to prevent fragmented memory from building up.
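The cache routine above can be sketched as a generic wrapper. This is a minimal illustration, not our actual ComfyUI node code: `make_generator` and the stand-in callables are hypothetical, and on a real CUDA build you would pass `torch.cuda.empty_cache` as the cleanup function.

```python
import gc

def make_generator(generate_fn, cleanup_fn, every=5):
    """Wrap a generation callable so cleanup_fn fires after every
    `every`-th call (e.g. torch.cuda.empty_cache on a CUDA build)."""
    count = 0
    def wrapped(*args, **kwargs):
        nonlocal count
        result = generate_fn(*args, **kwargs)
        count += 1
        if count % every == 0:
            gc.collect()   # drop Python-side references first
            cleanup_fn()   # then release cached allocator blocks
        return result
    return wrapped

# Usage with a stand-in cleanup so the logic is visible without a GPU:
flushes = []
generate = make_generator(lambda prompt: prompt.upper(),
                          lambda: flushes.append("flush"),
                          every=5)
for i in range(10):
    generate(f"portrait {i}")
print(len(flushes))  # 2 flushes across 10 generations
```

Flushing on a fixed cadence rather than after every render keeps the allocator's cache warm between calls, which is why the counter matters.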

Result: We’ve achieved sub-60s generation times on mid-range gaming laptops—proving that the "Persona Foundry" can scale anywhere, from our Sandton hub to a laptop in Tokyo.

2026-01-20 · STABLE

LoRA Complexity: Capturing 'Elena Novak' Across 1,000 Renders

Character consistency is the "Holy Grail" of AI marketing. If your influencer’s face changes by 5% between posts, the audience’s suspension of disbelief shatters.

The Technical Challenge

Standard LoRA (Low-Rank Adaptation) training often suffers from "Style Bleed"—where the model learns the lighting of your training photos but forgets how to handle new environments.

Our Skunkworks Solution:

  • Dataset Diversity: We use exactly 28 high-resolution images of Elena, but we vary the "Distance" (80% close-ups, 20% full-body) to ensure the model understands her facial geometry at every scale.
  • The "Caption Everything" Rule: We caption everything we don't want the LoRA to learn. By explicitly tagging "red lighting" or "oily workshop skin," we tell the model to focus only on the bone structure and eye shape.
  • Rank & Alpha Balancing: We found that a Rank of 32 and Alpha of 16 provides the best "Identity Lock" for FLUX-based characters, preventing the "Deepfake" uncanny valley effect.
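The recipe above can be summarized in a training-config sketch. Key names follow kohya sd-scripts conventions (`network_dim`, `network_alpha`); the dataset fields and caption strings are illustrative placeholders, not our exact Elena Novak recipe.

```python
# Hypothetical LoRA config mirroring the bullets above.
lora_config = {
    "network_dim": 32,     # Rank 32: capacity for facial geometry
    "network_alpha": 16,   # Alpha 16: scales updates to half the rank
    "dataset": {
        "num_images": 28,
        "close_up_ratio": 0.8,  # 80% close-ups, 20% full-body
    },
    # Caption the attributes we do NOT want baked into the identity,
    # so the model attributes them to the prompt, not the character:
    "caption_examples": [
        "elena_novak, red lighting, oily workshop skin, close-up",
        "elena_novak, overcast daylight, full-body, street background",
    ],
}

# Sanity-check the "Identity Lock" ratio (alpha / dim = 0.5):
assert lora_config["network_alpha"] / lora_config["network_dim"] == 0.5
```

The alpha-to-rank ratio of 0.5 effectively halves the learning contribution of the LoRA weights, which is one common way to damp overfitting on a small 28-image set.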