Interlude: Tech Notes
Dave Kreuzberg's engineering notes on the new tech he's supplied to help Laramie continue adapting to life without her head.
Case Report: Laramie Katarina Strong
Patient File: #LKS8932
Date: November 5, 2025
Engineer: David Kreuzberg
Overview
Subject Laramie Strong arrived at the Magic Guild Recovery Home on the above date in serious condition following a performance mishap. Ms. Strong lost her head when a transformation was interrupted by the untimely death of its sole witness. The incident was reported by the second-level Magus responsible for the transformation; a retrieval team was dispatched and brought Ms. Strong to the Recovery Home within 83 minutes of the report.
Subject Condition on Arrival
Upon admission to MGRH, Ms. Strong presented as a 23-year-old Caucasian woman, 5’4” tall post-incident, with a healthy torso and limbs but complete dimensional folding of the head, the neck terminating in a smooth stem three inches above her shoulders.
Skin at the transformation site appears normal and unblemished. Ms. Strong has sensation on the top of her stem, reporting that touches feel like “presses on the side of my neck” and seem to “flicker” from one side to the other without transition. Breathing and circulation are magically sustained.
Model TC-17 Neural Mapping Collar
To establish a baseline neural map for Ms. Strong’s loquette interface, we employed a Model TC-17 training collar. While an older model, the TC-17 is a robust device that should prove adequate for initial calibration.
The collar is lightweight titanium with a padded interior lining for subject comfort. It features a 720p camera, omnidirectional microphone, and a small speaker to facilitate AV input/output during the training phase. A retractable USB-C cable allows high-fidelity data transfer and charging of the unit’s lithium-polymer battery (12-hour runtime).
The TC-17’s neural mapping capabilities are particularly relevant in this case. A dense array of quantum-entangled microtubules, powered by a microscopic Casimir vacuum, establishes a high-bandwidth connection to the subject’s peripheral nervous system. This allows the collar to map somatosensory and motor functions comprehensively.
Engaging the neural mapping sequence initiates a series of preprogrammed stimuli designed to evoke distinct cortical responses. Audiovisual cues are interspersed with low-level electrical impulses to map proprioceptive and nociceptive pathways. An onboard QC coprocessor then analyzes the data to construct a personalized calibration matrix.
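To make the sequence concrete, here is a minimal Python sketch of the mapping loop. The stimulus schedule, channel count, and every function name are invented for illustration; the actual TC-17 firmware exposes no such API, and the evoked responses below are simulated.

```python
import random  # stands in for the collar's response telemetry

# Hypothetical stimulus schedule: audiovisual cues interleaved with
# low-level electrical impulses, per the sequence described above.
STIMULI = [
    {"kind": "visual", "pattern": "checkerboard", "duration_ms": 500},
    {"kind": "audio", "tone_hz": 440, "duration_ms": 300},
    {"kind": "impulse", "site": "left_stem", "microamps": 8},
    {"kind": "impulse", "site": "right_stem", "microamps": 8},
]

def present(stimulus):
    """Deliver one stimulus and return the evoked response vector.
    Simulated here; on hardware this would read the microtubule array."""
    return [random.gauss(0.0, 1.0) for _ in range(16)]

def build_calibration_matrix(passes=3):
    """Average evoked responses per stimulus into one row each,
    forming the raw matrix the QC coprocessor then refines."""
    matrix = []
    for stim in STIMULI:
        rows = [present(stim) for _ in range(passes)]
        matrix.append([sum(col) / passes for col in zip(*rows)])
    return matrix

if __name__ == "__main__":
    cal = build_calibration_matrix()
    print(f"{len(cal)} stimuli mapped, {len(cal[0])} channels each")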
Initial tests with Ms. Strong showed promising results, with rapid synchronization between the collar and her quantum-distributed nervous system. However, the lack of direct input from the head required significant interpolation to fill the perceptual gaps. Translation of non-linguistic vocalizations also pushed the TC-17 to its limits.
Despite these challenges, after 48 hours of intensive training, we created a workable neural map that served as the foundation for Ms. Strong’s loquette interface. The map will require extensive refinement, but it’s a start.
Model P-6 Loquette Interface
With the neural mapping complete, we can now focus on Ms. Strong’s primary user interface: the Model P-6 loquette. Spec-wise, the P-6 is a marvel of modern techno-magical engineering.
The loquette is constructed from a single shard of lab-grown obsidian, precision-cut into an elongated hexagonal prism. This provides a durable, thermally stable housing for the delicate internal components.
The P-6 boasts a state-of-the-art stereoscopic camera array featuring twin 4K lenses with variable focus servos. Flanking the cameras is an omnidirectional MEMS microphone capable of isolating speech in noisy environments. The lack of an onboard speaker is compensated for by a high-gain Bluetooth transmitter tuned to interface with a wide range of close-range audio devices.
Rounding out the external features is a charging plate covering an array of flux capacitors that tap into the Higgs field, drawing power from the quantum foam to recharge the internal spintronics battery. A full charge takes roughly two hours and provides 10–12 hours of continuous runtime.
Internally, the P-6 is powered by a 7nm QC tensor core delivering a blistering 2.8 petaFLOPS. This partners with an array of peripheral microcontrollers to handle sensorimotor processing, haptic feedback, and adaptive algorithm updating. Inertial tracking is provided by a cluster of 9-axis IMUs synchronized via a dedicated UKF coprocessor.
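For a sense of how the fusion stage works, the sketch below collapses the UKF down to a one-dimensional linear Kalman update on heading, folding redundant IMU readings into a single estimate. The full unscented filter on the coprocessor is far more involved; all numbers here are illustrative.

```python
def kalman_update(estimate, variance, measurement, meas_variance):
    """Blend a new heading measurement into the running estimate,
    weighting by relative uncertainty."""
    gain = variance / (variance + meas_variance)
    estimate = estimate + gain * (measurement - estimate)
    variance = (1.0 - gain) * variance
    return estimate, variance

def fuse_imu_cluster(readings, meas_variance=4.0):
    """Fold each redundant IMU heading reading (degrees) into one
    estimate, starting from an uninformative prior."""
    estimate, variance = readings[0], 100.0
    for r in readings[1:]:
        estimate, variance = kalman_update(estimate, variance, r, meas_variance)
    return estimate, variance

if __name__ == "__main__":
    heading, var = fuse_imu_cluster([87.2, 88.1, 86.7, 87.9])
    print(f"fused heading ≈ {heading:.1f}°, variance {var:.2f}")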
The real magic happens in the P-6’s bespoke neural interface. Building on the map provided by the TC-17 collar, a web of superconducting quantum interference devices (SQUIDs) establishes a direct link to Ms. Strong’s residual nervous system. Signals are parsed by a pre-linguistic translation matrix and rendered into an intuitive AR/VR control schema.
Thoughts are spatialized and projected onto a 180° foveated HUD. Directed by thought and micro-gestures, the P-6 facilitates rapid, hands-free interaction. It’s an entirely new way of engaging with the world.
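A toy dispatcher illustrates the last hop, from decoded intent token to HUD action. The token names and actions here are placeholders of my own; the production control schema is considerably richer.

```python
from typing import Callable

# Hypothetical intent tokens emitted by the translation matrix,
# mapped to equally hypothetical HUD actions.
ACTIONS: dict[str, Callable[[], str]] = {
    "focus_left": lambda: "pan HUD panel left",
    "focus_right": lambda: "pan HUD panel right",
    "select": lambda: "activate focused element",
    "dismiss": lambda: "close focused element",
}

def dispatch(intent_token: str) -> str:
    """Route one decoded intent to its HUD action; unknown tokens
    fall through harmlessly rather than guessing."""
    action = ACTIONS.get(intent_token)
    return action() if action else f"ignored: {intent_token!r}"

if __name__ == "__main__":
    for token in ["focus_left", "select", "sneeze"]:
        print(dispatch(token))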
Initial tests have been promising, with Ms. Strong adapting quickly to the thought-driven interface. There have been the expected hiccups and frustrations, but overall, she’s taken to the system with admirable aplomb. Further refinements to the UX will be implemented as additional training data is gathered.
PGS-8 Portable Gustatory Simulator: ‘Eating Box’
The PGS-8 pairs with the P-6 loquette to restore near-normal eating and drinking capabilities. Roughly the size of a ring box, the PGS generates a short-range dimensional fold between itself and the user’s esophagus implant.
Food and beverages placed inside the PGS are flash-scanned by a terahertz emitter array. This produces a high-fidelity spatial map, which is compressed and beamed into the subject’s throat via an encrypted UWB link. The loquette stimulates the subject’s swallow reflex, allowing Ms. Strong to “eat” without a mouth.
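End to end, one “bite” looks roughly like the sketch below. Every function is a stand-in: the terahertz scan is faked with a hash, the encrypted UWB link just counts bytes, and the real pipeline lives entirely in firmware.

```python
import hashlib
import zlib

def thz_scan(food: str) -> bytes:
    """Pretend terahertz scan: derive a deterministic 'spatial map'
    from the food description (2 KiB here, purely illustrative)."""
    return hashlib.sha256(food.encode()).digest() * 64

def compress(spatial_map: bytes) -> bytes:
    """Compress the map before transmission, as described above."""
    return zlib.compress(spatial_map)

def uwb_send(payload: bytes) -> int:
    """Stand-in for the encrypted UWB link; returns bytes 'sent'."""
    return len(payload)

def serve(food: str) -> None:
    """Scan, compress, beam, then cue the loquette's swallow trigger."""
    sent = uwb_send(compress(thz_scan(food)))
    print(f"{food}: {sent} bytes beamed; swallow reflex triggered")

if __name__ == "__main__":
    serve("toasted bagel")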
Textures and flavors are replicated through careful modulation of the loquette’s neurostimulators. Temperature is conveyed by regulating the phase variance of the dimensional fold. Though not a perfect recreation of the real thing, it provides a close analog of normal eating.
Liquids are handled by a dedicated reservoirless pump embedded in the PGS’s hinge. An omniphobic nanostructured conduit extends from the box, connecting to a flexible hose for mouth-free drinking. A MEMS pressure sensor monitors the flow rate and dynamically adjusts it to match Ms. Strong’s thirst response.
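A minimal proportional controller gives the flavor of that feedback loop. The setpoint, gain, and crude pump model are invented for illustration.

```python
def adjust_flow(target_ml_s: float, measured_ml_s: float,
                duty: float, gain: float = 0.05) -> float:
    """Nudge pump duty cycle toward the thirst-derived target flow,
    clamped to the valid 0..1 range."""
    duty += gain * (target_ml_s - measured_ml_s)
    return min(1.0, max(0.0, duty))

if __name__ == "__main__":
    duty, measured = 0.30, 4.0
    for _ in range(5):
        duty = adjust_flow(target_ml_s=6.0, measured_ml_s=measured, duty=duty)
        measured += (duty * 20.0 - measured) * 0.5  # crude pump model
        print(f"duty={duty:.2f} flow≈{measured:.1f} ml/s")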
Cleanup is fully automated. Quantum vacuum fluctuations are harnessed to disintegrate any crumbs or residue left in the PGS after each meal. The PGS interior and drinking tube self-sanitize by irradiating their inner surfaces with pulsed UV-C LEDs.
Battery life is not an issue. The PGS sips microwatts from an RTG power cell the size of a sugar cube. Annual replacement is recommended but likely unnecessary. For all practical purposes, it will last forever.
Ms. Strong has expressed amazement at the seamlessness of the eating experience. We’re still working to capture subtleties of taste and consistency, but overall, the PGS has drastically improved her quality of life. Future iterations will introduce auto-refilling and in-silico flavor crafting.
Avatar Creation Process
To facilitate natural interaction in virtual environments, we have crafted a digital avatar that mirrors Ms. Strong’s pre-accident appearance. The process began with compositing 2D archival images into a reference likeness.
This raw data was fed into a QC-powered neural network trained on a vast library of human faces. The network’s output was a photorealistic 3D model that reproduces Ms. Strong’s likeness down to the finest skin-texture and hair detail.
Animation was achieved through a novel application of the dimensional-fold interface. By correlating facial muscle movements to specific thought patterns, we enabled the avatar to mimic Ms. Strong’s facial expressions in real time. A pre-linguistic translation layer ensures seamless operation across all supported languages.
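The correlation step can be pictured as a small linear map from thought-pattern features to blendshape weights, as in this sketch. The feature names, blendshape names, and weight matrix are all hypothetical.

```python
FEATURES = ["amusement", "surprise", "effort"]
BLENDSHAPES = ["smile", "brow_raise", "jaw_clench"]

# Row = thought feature, column = blendshape contribution.
WEIGHTS = [
    [0.9, 0.1, 0.0],  # amusement mostly drives "smile"
    [0.1, 0.8, 0.0],  # surprise mostly drives "brow_raise"
    [0.0, 0.0, 0.7],  # effort mostly drives "jaw_clench"
]

def to_blendshapes(features: dict[str, float]) -> dict[str, float]:
    """Project feature activations through the correlation matrix,
    clamping each blendshape weight to 0..1."""
    vec = [features.get(name, 0.0) for name in FEATURES]
    out = {}
    for j, shape in enumerate(BLENDSHAPES):
        raw = sum(vec[i] * WEIGHTS[i][j] for i in range(len(FEATURES)))
        out[shape] = min(1.0, max(0.0, raw))
    return out

if __name__ == "__main__":
    print(to_blendshapes({"amusement": 0.8, "surprise": 0.3}))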
The avatar is hosted on a distributed computing platform that leverages the Strange-Matter™ storage medium. This allows for near-instantaneous rendering and projection, even on low-powered devices. Bandwidth optimization is handled by an adaptive compression algorithm that prioritizes crucial facial features.
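The prioritization scheme amounts to weighted division of the per-frame bit budget, along these lines. Region names and priority values are illustrative only.

```python
REGION_PRIORITY = {"eyes": 5.0, "mouth": 4.0, "brow": 2.0, "background": 0.5}

def allocate_budget(total_kbits: float) -> dict[str, float]:
    """Split the per-frame bit budget proportionally to priority,
    so crucial facial regions get the larger share."""
    total_priority = sum(REGION_PRIORITY.values())
    return {region: total_kbits * p / total_priority
            for region, p in REGION_PRIORITY.items()}

if __name__ == "__main__":
    for region, kbits in allocate_budget(256.0).items():
        print(f"{region:>10}: {kbits:6.1f} kbit/frame")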
When connected to online video and audio channels through the loquette, the avatar appears to occupy the same physical space as Ms. Strong.
Informal testing has shown that the avatar effectively conveys Ms. Strong’s personality and emotional state. Colleagues have remarked on the uncanny realism of her digital presence, with some expressing discomfort at the level of detail. We are investigating ways to make the avatar more approachable without sacrificing fidelity.
Potential future enhancements include full-body rendering, dynamic clothing and accessory support, and integration with popular VR/AR platforms. We are also exploring projection of the avatar onto flexible fabric displays for a more tangible interaction experience.
On a personal note, I must confess to being somewhat enchanted by Ms. Strong’s digital likeness. Her fiery red hair and piercing blue eyes are rendered with a captivating intensity that I find difficult to ignore. I have caught myself staring at her avatar multiple times, lost in thought. While maintaining professionalism in our working relationship, I cannot help but feel a growing attraction to this remarkable woman.
User Feedback and Future Directions
Ms. Strong has adapted to the P-6 with admirable speed and determination. She reports high satisfaction with the device’s performance, noting that it has “given me back a piece of myself I thought was lost forever.”
That said, there are still areas where we can improve. The avatar, while impressive, lacks subtle nuances of human expression that are crucial for conveying complex emotions. We are working on a machine-learning algorithm to allow the avatar to generate novel expressions based on Ms. Strong’s unique facial patterns.
Ergonomics is another area of focus. The current loquette design, while functional, can be cumbersome to wear for extended periods. We are experimenting with lighter materials and more compact form factors to improve comfort and wearability.
Perhaps the most exciting development is the NECKLESS-360 (Neuro-Enhanced Collar with Kinetic Locomotion and Environmental Sensing Suite). This experimental device builds on the foundation of the P-6, adding a host of new features and capabilities.
The NECKLESS-360 will include a full suite of biometric sensors to monitor Ms. Strong’s vital signs and physical state. An array of micro-cameras will provide 360-degree vision, while advanced haptic feedback will allow her to “feel” her surroundings in a way that approximates natural sensation.
The collar will also feature a kinetic locomotion system that will give Ms. Strong unprecedented control over her movements. By translating her thoughts into precise motor commands, the NECKLESS-360 will allow her to navigate her environment gracefully and effortlessly.
We are still in the early stages of development, but initial prototypes have shown great promise. Ms. Strong has expressed keen interest in testing the device as soon as it is ready for human trials.
In conclusion, the P-6 and its companion systems represent a significant leap forward in assistive technology for individuals with unique physiological needs. They have transformed Ms. Strong’s life in ways that would have been unimaginable just a few short years ago. As we continue to refine and enhance these devices, we remain committed to pushing the boundaries of what is possible and helping Ms. Strong reach her full potential.