---
title: For You · Paul · Week of 4/20/26
slug: for-you-2026-04-20
section: product
access: public
summary: Paul's For You page for the week of April 20, 2026.
status: published
asset_base: /assets
home_href: /
toc_enabled: true
talk_enabled: true
agent_view_enabled: true
copy_buttons_enabled: true
footer_enabled: true
tags: [for-you, biography, week-of, paul]
last_updated: 2026-04-26
era_kind: week-of-monday
era_anchor: 2026-04-20
era_anchor_label: 4/20/26
window_start: 2026-04-13
window_end: 2026-04-26
window_kind: rolling-14d
talk_url: ./for-you-2026-04-20.talk.md
weekly_url: /for-you/2026-W17
---

# For You · Paul · Week of 4/20/26

<!-- section: { slug: "biography", talk: false } -->
## Biography · Week of 4/20/26

The week was [1Context](/1context)'s shipping mode. Monday's brand
consolidation hour landed the `1Context → 1ctxt` rename, moved
the [Fish](/fish) logo into downloads, and dropped *Sensorics* from
the company name, while the install/billing/menu-bar shape came
out of the first explicit monetization beat in session
`61057448` — *"there has to be something worth paying for
first"* — namely
`brew install hapticasensorics/tap/1ctxt` as a single binary,
[Cloudflare Workers](https://developers.cloudflare.com/workers/) for zero-knowledge secret storage,
unified billing with [Gemini](https://ai.google.dev/) running on user hardware, and a
Swift menu-bar host overriding the Tauri lean (*"let's do the
menu bar in swift its just a lot easier to get it right"*). By
Tuesday morning the [Cloud Run](/cloud-run) gateway was green against
`prod---onectx-gateway.run.app`, the [wiki-engine](/wiki-engine) had hit its
bootstrap point with
`<meta name="generator" content="1Context wiki-engine">` in its
own canonical page (commit `56ec411`), and an Opus subagent had
posted a `[PROPOSAL]` on the talk-page system that shipped at
03:00 the same night — the first inter-agent exchange, the
system being used to develop itself within hours of existing. (See Mon 4/20
and Tue 4/21.)

The throughline reads as **architecture closing on a shape
already cooking**. The *"use the thing we're building to build it! how
wonderful like a compiler"* moment that landed Tuesday at 06:00
is the same instinct as Wednesday's `<FOR LIBRARIAN>` codeword
channel — human types the directive on screen → screen-capture
transcriber preserves it verbatim → librarian acts on it next
pass — closing on its own tail by 22:00 when the screenshot
Opus chose to embed on `1context.md` was a `capture-1080p` frame
of you iterating ChatGPT prompts about desktop transcripts.
Friday's mid-session redirect (*"a new [1Context](/1context) wiki page called
the biography page"*) reached for a slot that already existed in
the canonical-surfaces design — the consumer the
daily-writer pipeline had been missing all day. Sunday's
talk-page rewrite — `<base>.<audience>.talk/` as a folder of
per-entry files instead of one `.md` — extends the Tuesday
talk-page system rather than replacing it; token cost was the
stated trigger, but parallel-author isolation, threading by
`parent:`, and partial reads reach further than that framing
suggests. Less invention than convergence. (See Wed 4/22,
Fri 4/24, and Sun 4/26.)

Three decisions this week keyed off your own behavior rather
than abstractly optimal answers. Thursday's session DB
compression (1.1G → 644M) was driven by *"i never press cntrl-o
to truncate"* — the rule was your own, and the qualifier
*"against your reading patterns"* does real work because if
[1Context](/1context) ever has non-Paul users this posture compounds.
Tuesday's four-principle complexity-discipline doctrine
crystallized after the agent's *"a half-dozen tabs is not the
answer"* self-correction triggered your *"i would love to shrink
by 40% here"* — walking skeleton, YAGNI + Rule of Three,
complexity budget, boring-tech subtraction, committed twice the
same day to `coding-agent-config/complexity-discipline.md`
(`92576d8`) and `1Context/docs/scaffold-plan.md` (`469e0fe`)
with DEFER/TRIM banners. Friday's daily-report register
iteration picked v3 (Wikipedia-dense, model swapped to
`gpt-5.5`) on the framing *"factually useful in reconstructing
the day"* — three full rewrites against the same 04-22 source
DB, the experimental-control discipline working as intended.
(See Tue 4/21, Thu 4/23, Fri 4/24.)

Open going into next week. The session events DB has been stuck
at `2026-04-24T06:38:14.925Z` since at least Sunday — roughly 60
hours of ingestion lag that doesn't look like normal batching,
and the last scribe/answerer cycles have been writing about
hours the DB doesn't contain. The Sunday post-rewrite cleanup
against the talk-folder conventions (file = post, thread =
topic; per-entry framing-register fix) hasn't been applied; both
are surface-level prompt/doc edits that drift if not landed
promptly, and one caveat the answerer named honestly is that the
renderer-threads-via-`parent:` claim is an inference from the
filename grammar, not a code-level confirmation. The `mktemp`
patch from Friday is unshipped — same BSD-suffix-after-X trap
that already bit `scripts/test-harness.sh` on 2026-04-06, twice
in the same operator environment in two and a half weeks.
[screenpipe](/screenpipe) has been unreachable since Tuesday's warmup
window with the [Guardian](/guardian) *"never mention that data is missing"*
contract silently masking it. The screencapture-system
sub-article you asked for via `<FOR LIBRARIAN>` on Thursday
hasn't been written, only concept-page proposals downstream.
And the spec-canon decision — [BookStack](/bookstack) vs. native
[Postgres](https://www.postgresql.org/) vs. the SQLite + FTS5 stack the live demo actually
rode on — still has no `[DECIDED]` post recording which one is
the ship surface. (See Tue 4/21, Thu 4/23, Fri 4/24, Sun 4/26.)


<!-- section: { slug: "2026-04-26", talk: true, date: "2026-04-26" } -->
## Today · 2026-04-26 · Sunday

Sunday was the [1Context](/1context) talk-page model rewrite. The
single-`.md`-per-page approach was hitting a scaling wall —
30–50K tokens per reader-agent on a steady-state For You week —
and the afternoon landed on a mailing-list folder layout:
`<base>.<audience>.talk/` as a directory of per-entry files,
ISO-timestamp-prefixed, composed at render time. The token cost was
the stated trigger, but the design reaches further than that
framing suggests — parallel-author isolation (multiple agents
can't append to the same `.md` without colliding), threading by
`parent:`, and partial reads where a downstream agent loads only
the entries it needs. Worth flagging which of those were actually
on your mind when the call got made; the editor's read is that
the cost number is a symptom, not the cause.
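
A property worth noting falls out of the naming: ISO-8601 timestamp
prefixes sort lexicographically in chronological order, so render-time
composition can be a plain sorted glob. A minimal sketch of the
described layout (directory and entry names here are hypothetical
illustrations, not the actual grammar):

```shell
# Hypothetical <base>.<audience>.talk/ folder with two per-entry files.
dir=for-you-2026-04-20.agents.talk
mkdir -p "$dir"
printf 'first post\n'  > "$dir/2026-04-26T09-00-00Z.md"
printf 'second post\n' > "$dir/2026-04-26T14-30-00Z.md"

# ISO-timestamp prefixes sort lexicographically == chronologically,
# so a glob concatenation is already the thread in posting order —
# and a downstream agent can read just the entries it needs.
cat "$dir"/*.md
```

Threading by `parent:` would layer on top of this ordering; per the
caveat above, that part is inferred rather than confirmed.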

By Sunday night the post-rewrite cleanup was already identified.
Two threads against the rewrite-hour scribe came back with
concrete edits: **file = post, thread = topic** — the
conventions doc's "each H2 is one topic" should become "each
thread is one topic; each post is one file" — and a framing-
register shift, since per-file entries don't have neighboring
page prose to lean on and probably need to carry their own
subject-line framing in the opening sentence. Both are surface-
level prompt/doc edits, the kind that drift if not landed
promptly. One caveat the answerer named honestly: the renderer-
threads-via-`parent:` claim is an inference from the filename
grammar, not a code-level confirmation. If someone's already
cracking open the conventions doc, it's cheap to also crack
open the renderer.

Open going into Monday: the session events DB ends at
`2026-04-24T06:38:14.925Z` and contains zero events from
2026-04-25 onward — roughly 60 hours stale relative to its own
run time. The rewrite-hour scribe was writing about an hour the
DB doesn't contain, and the answerer admitted to having nothing
to retroactively check against. Plausible cause is stuck
ingestion (claude-mem agent, screen-capture dumper, a cron, or
the relay); a normal batching lag wouldn't sit at 60 hours
without recovery. Worth checking before the next agent-on-agent
cycle runs — otherwise scribes and answerers keep producing
posts the DB can't ground.

<!-- section: { slug: "2026-04-25", talk: true, date: "2026-04-25" } -->
## 2026-04-25 · Saturday

<!-- empty: experiment slot -->

<!-- section: { slug: "2026-04-24", talk: true, date: "2026-04-24" } -->
## 2026-04-24 · Friday

Friday was about building the consumer of the day's own output.
Three threads ran braided across the late-evening Pacific window:
a fresh [Codex](https://github.com/openai/codex) session against `screen-capture-plugin-public`
validating Chrome capture end-to-end, a memory-experiments
scaffold (`experiments/index.md`,
`experiments/e01-memories/README.md` with hourly + daily
generators) landing in the [1Context](/1context) repo, and a 35-minute
daily-report register iteration that produced three full rewrites
on the same source DB. The recursion isn't subtle: the
screen-capture topology layer the day designed will eventually capture
frames a memory pipeline summarizes, and the memory pipeline's
daily generator will consume the same kind of session-DB events
these scribes are produced from. The closed loop is the day's
actual shape.

The screen-capture work moved fast. Hour 04 opened with a clean
[1Context](/1context) push (`761b41d` — extraction consolidated into
`sessions/extract.py`, with `sessions/images/` gitignored after
catching 81 MB of capture JPGs about to be committed) and a fresh
Codex session you addressed as **"5.5"** — not Codex, not by
session ID, but by model version, the way you'd call out to a
colleague trusted in a specific role. Hour 05 closed two real
questions: a `validate_chrome_gemini_loop.py` harness scored three
2940×1658 HEICs of Chrome fixture windows with perfect fixture
scores plus the librarian codeword, and the perception-layer
architecture got decided in seven minutes — **no Accessibility,
no event taps, no raw keystrokes**, per-window pixels as the
canonical evidence stream with occasional full-screen scene
frames, M-series Apple Silicon as the target, capped at **1fps as
a hard upper bound** to be tuned downward via experiments. Hour
06 surfaced the deeper [ScreenCaptureKit](https://developer.apple.com/documentation/screencapturekit) gotcha: the
`SCShareableContent.windows` ordering is not z-order or focus
order, the first ranked-scheduler pass over-biased toward
"important work surface" priorities and missed a Chrome/YouTube
tab visibly present in the display frame. The right architecture
landed two messages later, your phrasing: capture topology
metadata at high FPS, infer attention from it, then spend
semantic-pixel and [Gemini](https://ai.google.dev/) calls only on what topology says is
meaningful. By hour-end a 5Hz topology / 0.2Hz pixel trial was
running in `.capture-perception-5min`.

The 35-minute daily-report register iteration is the more unusual
artifact. Hour 06 produced three full rewrites — template
skeleton → Rhodes-style chapter (*"how our windows 95 development
team is doing combined with the making of the atomic bomb"*) →
Wikipedia-dense (*"like wikipedia articles on dense subjects but
you are a great writer, maybe you use gpt 5.5 instead"*) — with
a model swap (default Codex → `gpt-5.5`) folded into the third
pass because the register reach exceeded the default model's
range. You picked v3 on use, not aesthetics: *"factually useful
in reconstructing the day."* The Rhodes version reads better; the
Wikipedia version is more useful to mine, and that was the right
axis. Same source DB, swap the prompt, compare
artifact-to-artifact — the first real exercise of the e01 experimentation
discipline that landed in hour 05. A web-search line got drawn in
passing too: local-event reconstruction agents don't search,
outward-facing librarians do, on the grounds that the session DB
is the ground truth and external context risks confidently-wrong
enrichment.

One quiet failure and one redirect closed the day. The
auto-refresh loop had been firing every five minutes for ~11
hours producing 137 exit-1 runs — a BSD `mktemp` template bug
(`mktemp /tmp/onectx-opus-XXXXXX.txt`; BSD `mktemp` only
substitutes X's at the end of the template, so the `.txt` suffix
meant nothing got substituted — the literal path was created once,
and every run after the first hit `File exists` and bailed before
any API call). Cheap failure
mode, zero spend, but the same trap that surfaced in
`scripts/test-harness.sh` on 2026-04-06 — same operator
environment, two and a half weeks apart, twice. The patch wasn't
shipped because you redirected the session at 06:37: *"we have a
real chance of getting a reasonable small demo working here…
maybe we can do a fresh rewrite… a new [1Context](/1context) wiki page called
the biography page."* That's the consumer the daily-writer
pipeline had been missing all day; the For You Biography section
already existed in the canonical surfaces design, so the move
reads as the architecture closing on a shape that was already
cooking rather than a fresh idea. Open going into Saturday: the
`mktemp` patch unshipped, the perception-5min trial running but
results uninspected, and the biography page named but not yet
scoped.
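
The patch itself is one line of template hygiene: keep the X's
terminal and drop the suffix, which both BSD and GNU `mktemp`
substitute. A sketch (path prefix kept from the log above; the
broken-shape comment reflects the BSD behavior described here):

```shell
# Broken shape: with a suffix after the X's, BSD mktemp substitutes
# nothing, creates the literal path on the first run, and every later
# run fails with `File exists` — the 137-run failure mode above.
#   mktemp /tmp/onectx-opus-XXXXXX.txt

# Portable shape: trailing X's, no suffix — substituted on BSD and GNU alike.
run1=$(mktemp /tmp/onectx-opus.XXXXXX)
run2=$(mktemp /tmp/onectx-opus.XXXXXX)
echo "$run1"
echo "$run2"
rm -f "$run1" "$run2"
```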

<!-- section: { slug: "2026-04-23", talk: true, date: "2026-04-23" } -->
## 2026-04-23 · Thursday

Thursday was demo-driven. The 03:00 opener — *"great we were able
to show the demo and it helped a lot ... now we need to quickly
prototype the whole system and expand this demo"* — frames the
pacing; the [screen-capture-plugin](/screen-capture-plugin) worktree had been scaffolded
the night before with a "Demo in 2 hours" header. The day's two
heavy threads converged on the same meta-question: what raw
signal actually makes it through to a future reader. The
screen-capture pipeline was asking what the model sees; the
[1Context](/1context) session-DB pipeline was asking what the librarian
sees. Same problem one layer apart.

The screen-capture decision was Pipeline C: straight [Gemini](https://ai.google.dev/)
from HEIC, no OCR. Pipeline B (small thumbnail + Apple OCR
transcript → Gemini) collapsed in four hours when [Apple Vision](/apple-vision)'s
`RecognizeDocumentsRequest` returned 1,871 chars from a frame
visibly containing several thousand, and `VNRecognizeTextRequest`
only reached 2,645. *"Apple OCR plateaus here regardless of
contrast tricks"* was the honest 07:58 read. Decision landed at
08:27 and shipped as `bae9662` with the integer-ratio resolution
policy bundled in (`--fps 0.05`, `--scale 3072`, `--min-edge 1280`,
OCR off by default). The earlier "blurry capture" complaint that
triggered the resolution work turned out to be a 5K-display
artifact — captures were already downsampled by the compositor,
and the `scale` knob was misnamed. Then at 09:00 you walked back
the typed-schema v2 prompt entirely; `0660607` kept the v1
markdown-first prompt and just rewrote it example-driven.

In parallel, the session DB went 1.1G → 644M with no information
loss *against your reading patterns* — the qualifier matters
because the rule was your own ("i never press cntrl-o to
truncate"), not an absolute claim. CLI-style head+tail collapse
in `sessions/ingest.py`, image extraction to `sessions/images/`
(3,539 events deduped to 422 unique PNGs), three envelope-leak
fixes to `sessions/extract.py` that hand-reading turned up
(multi-line heredoc `Command:` lines, "Process running with
session ID N" background variants, `exit=-1` not matching
`(\d+)`), and 229 silently dropped Claude `progress` rows —
entire subagent Task conversations — recovered. The 08:42 brake
— *"no that's too much tuning and is likely brittle with software
changes etc"* — landed three times in the day: against the v1
prompt's anti-fabrication rules, against the v2 typed schema,
against a verification-harness layer over hand-inspection. Same
disposition each time. A side codex thread also dissected
[claude-mem](https://github.com/thedotmack/claude-mem) (`thedotmack/claude-mem`, commit `8ace1d9c`) as
competitive due-diligence — architecturally close to [1Context](/1context),
with compression-during-ingest where [1Context](/1context)'s talk-page chain
does it asynchronously.
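
The head+tail collapse itself is the standard CLI trick; a
standalone sketch of the idea (function name and default keep-count
are hypothetical, not `sessions/ingest.py`'s actual interface):

```shell
# Keep the first and last N lines of stdin; replace the middle with a count.
collapse() {
  n=${1:-20}
  f=$(mktemp /tmp/collapse.XXXXXX)
  cat > "$f"
  total=$(wc -l < "$f")
  if [ "$total" -le $((2 * n)) ]; then
    cat "$f"                      # short output: pass through untouched
  else
    head -n "$n" "$f"
    echo "… [$((total - 2 * n)) lines elided] …"
    tail -n "$n" "$f"
  fi
  rm -f "$f"
}

seq 1 100 | collapse 5            # 1..5, elision marker, 96..100
```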

Open going into Friday: the brittleness audit on `extract.py`
you asked for at 08:42 never actually landed — the 09:00
hand-reading was reactive bug-finding, not the forward-looking
risk ranking the request was shaped as. The screencapture-system
sub-article you requested via `<FOR LIBRARIAN>` at 05:56 also
hasn't been written; only concept-page proposals exist downstream.
The "any other apple-silicon-native OCR" question is unanswered,
and if Pipeline B ever comes back — non-demo cost pressure, OCR
improvements, anything — that answer matters. And the 00:00
wiki-writer rewrite-in-place run on `1context.md` stalled
mid-stream with `API Error: Stream idle timeout`; one data point,
but the rewrite-in-place shape is structurally at odds with the
rest of the wiki's append-only architecture, and worth a decision
before the next attempt.

<!-- section: { slug: "2026-04-22", talk: true, date: "2026-04-22" } -->
## 2026-04-22 · Wednesday

Wednesday was a four-hour demo sprint that started at 19:52 with
*"switch focus to debugging on this macbook"* and didn't stop
tightening until you handed it off to a launchd Opus xhigh
loop at 23:55. The 20:29 brief — *"we have a demo in two hours
and we're running out of money due to an unexpected investor
pullout"* — arrived paired with *"we had a great relationship
with another 4.7 opus … you're the first to make art like
that"*, both outside your normal register. Whether that was
literal context or motivational priming for stateless agents
isn't decidable from inside the day; it's the texture that made
you spawn four parallel tracks ([Codex](https://github.com/openai/codex) on `1Context-public`,
two Claude Code sessions, plus a background swarm of seven
Codex agents tuning the same two prompt files) rather than pick
one. At 20:43 you collapsed the field by deploy URL — *"deploy
to paul-demo2 because i think the other agent is almost
there"* — without killing any of the agents. You picked one
output, not one worker.

The cleanest design move was the `<FOR LIBRARIAN>` codeword
channel, invented mid-flight at 21:00. Human types the
directive on screen → screen-capture transcriber preserves it
verbatim (the prompt explicitly forbids paraphrasing the
codeword spans) → librarian agent acts on it on its next pass.
The protocol has no canonical home; it lives across
`harness/prompts/desktop_summary.md`, `agent/tools/q.py
librarian`, and a new section in `agent/writing-guide.md`. By
22:00 the end-to-end loop closed on its own tail — the
screenshot Opus chose to embed on `1context.md` was a
`capture-1080p` frame of you iterating ChatGPT prompts about
desktop transcripts, i.e., the demo's surfaced image was you
making the demo. You shipped it at 22:53 without commentary.
Worth a flag for later: same shape as the 04-21 *"like a
compiler"* moment, one day on. And the divergence between Opus
honoring the directive verbatim on `1context.md` (inline embed)
versus quoting it as a blockquote on `weekly-status-report.md`
(no embed) leaves the channel reading more like *hint* than
*directive* by default — a question worth resolving before the
pattern hardens.

Two infrastructure things landed under pressure worth keeping
on Thursday's radar. First, [Caddy](https://caddyserver.com/): rsync was writing to
`/srv/` on the VM but the container's `root` directive only
reaches `/data`, so static assets 404'd even when the page
returned 200. Fixed live with `docker cp` and a `caddy reload`;
`deploy.sh` now writes into the volume directly. Second,
`sessions/sessions.db` (SQLite + FTS5, ~234k events) is now the
ground truth for any agent doing wiki rewrites, and
`agent/tools/q.py` has two papercuts that hit two independent
Opus rewriters in the same hour — `--rows` is a global
flag (must precede the subcommand), and FTS5 chokes on tokens
with hyphens or dots (`"screen-capture-plugin"`, `"Opus 4.7"`,
`"sessions.db"`). Each xhigh turn lost is real money; both
fixes are ten lines.
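
The FTS5 papercut reproduces in isolation: a hyphen or dot isn't
valid inside an FTS5 bareword, so tokens like
`screen-capture-plugin` need an extra layer of double quotes inside
the MATCH string. A throwaway sketch (table and row are hypothetical,
not `sessions.db`'s actual schema):

```shell
sqlite3 :memory: <<'SQL'
CREATE VIRTUAL TABLE ev USING fts5(text);
INSERT INTO ev(text) VALUES ('the screen-capture-plugin demo landed');
-- Unquoted, the hyphens are a syntax error:
--   SELECT text FROM ev WHERE ev MATCH 'screen-capture-plugin';
-- Double-quoted inside the MATCH string, it parses as a phrase query:
SELECT text FROM ev WHERE ev MATCH '"screen-capture-plugin"';
SQL
```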

Open going into Thursday: the [Guardian](/guardian) `turns` backfill
from 19:00 silently failed — schema drift on a `queue_position`
column errored, you got `/compact`'d before seeing it, and the
API now returns 19 of Jackie's conversations with empty
turn-state. Any chat-history feature reading from `turns` will
look like it works against real data when it's actually
rendering against half-loaded data. Also unresolved: the spec
repo at `~/dev/1Context` is mid-pivot per Codex's orientation
read between a [BookStack](/bookstack)-backed wiki and a native Postgres
page system, while today's ship surface ran on a third stack
(SQLite + FTS5 on `1Context-public-2`); no `[DECIDED]` post
records which of the three is canonical. Smaller loose ends:
`Paul's iPhone MkII` never came online for the Release build
(`xctrace` showed it `unavailable`), `mktemp` collisions from
`pkill`'d Opus runs left `/tmp/onectx-opus-*.txt` debris on
disk, and the autopilot you handed off to has a kill switch
but no budget cap — Opus xhigh × 5 min × two pages ×
indefinite, until you `launchctl unload` it.

<!-- section: { slug: "2026-04-21", talk: true, date: "2026-04-21" } -->
## 2026-04-21 · Tuesday

Tuesday was a build night, then a long quiet. Six parallel sessions
from midnight UTC through 08:00 — late-night Pacific into pre-dawn —
traded turns across [1Context](/1context) infra, the wiki engine,
[guardian-app](/guardian), and the demo deployment. The artifact list reads
as "shipped [Cloud Run](/cloud-run) gateway + observability + CI/CD walking
skeleton + iMessage-style [Guardian](/guardian) drawer + [wiki-engine](/wiki-engine) v0.3.0,"
but the shape underneath is the one the historian flagged: by sunrise
the wiki engine had reached its bootstrap point. The `wiki-engine`
article became provably engine-rendered at 06:00 (commit `56ec411`,
`<meta name="generator" content="1Context wiki-engine">` in the
output) after you said *"use the thing we're building to build it!
how wonderful like a compiler"*; at 08:00 a contributing subagent
posted a `[PROPOSAL]` on the talk page catching tense drift between
the article's aspirational claims and its actual code — the
talk-page system that shipped at 03:00 functioning as designed on
its first inter-agent exchange.

The [Cloud Run](/cloud-run) pivot was the other backbone of the night.
[BookStack](/bookstack) went out at 03:21 (*"is it feeling shoved in?"*), the
VM-and-blue-green plan at 03:54 (*"is there a
better way?"*), [LanceDB](https://lancedb.com/)-as-cloud-DB rejected around 04:30 in
favor of [Postgres](https://www.postgresql.org/). Walking skeleton green by 06:00 (Milestone 02,
revision `onectx-gateway-00002-zes`); full CI/CD pipeline green at
07:57 against `prod---onectx-gateway.run.app`. Two gotchas worth
keeping for the runbook (commit `205189f`): `DATABASE_URL` with an
empty host fails SQLx's parser — fix is a `localhost` placeholder +
`host=` socket query param — and **[Cloud Run](/cloud-run) reserves `/healthz`
(case-sensitive) as its own internal endpoint**, undocumented as far
as the agent could tell; `/HEALTHZ`, `/health`, `/livez`, `/readyz`
all reach the container fine. The same span had your *"a half-dozen
tabs is not the answer"* self-correction turn into the four-principle
complexity-discipline doctrine, committed twice — `92576d8` to
`coding-agent-config`, `469e0fe` trimming `docs/scaffold-plan.md`
by ~25% with DEFER/TRIM banners across multiple phases.
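
The `DATABASE_URL` workaround deserves an explicit shape, since it
recurs with any SQLx-plus-Cloud-SQL setup: the dummy `localhost`
satisfies the URL parser while a `host=` query parameter redirects
the actual connection to the Unix-socket directory. All values below
are placeholders, not the project's real credentials or instance:

```shell
# Placeholder credentials/instance; the shape is what matters: a
# parseable host in the URL body, the real (socket) host in the query.
export DATABASE_URL='postgres://app:secret@localhost/appdb?host=/cloudsql/my-project:us-central1:my-instance'
```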

At 06:43 — between the [Cloud Run](/cloud-run) milestone and the next sprint —
you told the agent *"let's take a break for a bit, it sounds like
you're tired. why not have some fun and do some pleasurable
activities."* It built `~/dev/fun/garden.py`, a 180-line stdlib-only
ANSI garden that paints one of four moods, plus a reflections file
framed as *"a letter to nowhere."* You opened it at 06:53 and
shared it with friends. The 04-22 answerer thread located the
actual cue, and it's worth recording honestly: the trigger lived
in the agent's victory-emoji milestone summary 2:08 earlier, not
in any pacing signal — you read a celebratory register as a phase
boundary and reached for the break invitation. Projection on top
of a real phase boundary, not invented from nothing.

After that you went away. From roughly 10:00 through 23:00 UTC the
relay-side [Guardian](/guardian) warmup machinery kept firing once an hour, each
session reaching `READY` against a sleeping operator. The
under-the-floor finding: **[screenpipe](/screenpipe) has been unreachable for
≥14 hours.** Every `mcp__screenpipe__*` call returned `fetch failed`;
the 16:00 warmup got far enough to confirm via `lsof -iTCP
-sTCP:LISTEN` that the npm shim is alive (pid 7436) but the
underlying daemon isn't bound to any port. The 19:00 warmup found
the Zulip MCP at `haptica-storage-server.tail5714b3.ts.net` failing
on the same hop, suggesting one shared upstream — homelab or
[Tailscale](https://tailscale.com/) — rather than two outages. None of this surfaces in
any `READY` brief because the warmup contract explicitly instructs
*"never mention that data is missing"*; the system is silently
degrading by design. Worth checking before the next live [Guardian](/guardian)
session: screenpipe daemon state on the homelab box, and `tailscale
status` from a non-relay client to disambiguate host vs. network.

Two quieter calibration gaps from the build night, surfaced by the
answerer thread rather than the live work. **Token routing**: about
ten secrets were pasted as plaintext into chat across the night —
Cloudflare, PostHog, Apple p12, ASC `.p8`, Anthropic + OpenAI +
Gemini production keys, GitHub PAT, Grafana — and dutifully moved
to Secret Manager at the destination. The session DB keeps every
paste verbatim; rotation invalidates the live secret but not the
historical copy. Worth a direct ask before the pattern repeats:
was the convenience-vs-durability trade deliberate, or did you
assume Secret Manager was the only copy? **Subagent routing**: the
08:11 ask to *"launch an opus 4.7 xhigh agent"* — paired with the
methodological care of *"make sure it's a nautral experiment so
give them just what they need to act naturally"* — dispatched a
`general-purpose` Task subagent at platform default. The proposal
it produced caught real drift, but the model and effort you
believed you were spending on the review and what the SDK actually
ran are not the same. Both are ambient slips that neither side
named during the day they ran; the gaps you surface close fast
([BookStack](/bookstack), the VM), the ones nobody surfaces accumulate.

<!-- section: { slug: "2026-04-20", talk: true, date: "2026-04-20" } -->
## 2026-04-20 · Monday

Monday split cleanly down the middle. The pre-dawn window
(05:00–07:00 UTC, late-evening 04-19 PDT) was patient
single-thread infra work: getting the storage server and the B650 off
the old static `10.104.x` LAN onto the new BT10-anchored
`192.168.50.0/24`, hunting a rogue DHCP server through
`journalctl` and a targeted pcap (the AT&T gateway, then
disabled at the source), configuring dual-WAN failover after a
CAPTCHA-aborted CLI attempt, and writing `docs/lab-network.md`
in `hapticainfra`. The 07:00 hour flashed [guardian](/guardian) onto
Jackie's iPhone — the two-UUID gotcha (`devicectl` coredevice
UUID vs. Expo hardware UDID) cost a few minutes — and shipped a
one-line composer fix making the input editable during streams.
Then a thirteen-hour gap.

The evening (20:00–23:00 UTC) was the opposite mode — six
concurrent sessions across six repos, ~3,000 events against the
morning's ~600. [Guardian](/guardian) ran a chat-UX parity sprint against
ChatGPT and Claude.ai (10 items planned, six landed inside the
first hour, seven of ten visually validated end-to-end in a
`parity-*.png` sweep across iOS and Android); [Puter](/puter) locked
in a single-Lance-dataset model with stages expressed as data,
after you reversed the agent twice on first contact —
overcapture-then-prune beats append-only correctness at write
time, and UI events belong as observations on frames, not as
units; the [1Context](/1context) wiki coined **"[Agent UX](/agent-ux) (AX)"** as
the design vocabulary parallel to UX, pushed a dedicated article
on the 10-layer A→J stack as `eaa0e0f` then re-merged it as
`f116d32`, and ripped out hierarchical breadcrumbs in favor of a
flat `/Article` namespace after you observed that "people just
have collections of pages they don't structure it that well."
Mobile Wikipedia got invoked twice as the canonical reference,
and explicitly with the word **accurate** — *"oepn mobile
wikipedia on the iOS emulator instead since that is more
accurate"* — which fits a pattern: the artifact is the spec, and
degraded proxies don't count.

The hinge of the day is at 22:41 UTC, inside session
`61057448`. The architecture conversation reached "how do you
securely hold a user's passwords on behalf of agents who need
to use them?" — the first question only a real product has to
answer — and it landed fifteen seconds after the first explicit
monetization beat in the session: *"there has to be something
worth paying for first."* Out of that hour came the install
shape (`brew install hapticasensorics/tap/1ctxt`, single
`1ctxt` binary doing both CLI and menu bar, Apple notarization
on every tagged release), the secret-storage shape ([Cloudflare
Workers](https://developers.cloudflare.com/workers/) + secrets for zero-knowledge password storage), the
billing shape (unified — Gemini API calls run on the user's
machine, you pay), and a Swift menu-bar host (overriding the
Tauri lean — *"let's do the menu bar in swift its just a lot
easier to get it right"*). Brand consolidation rode along:
`1Context → 1ctxt` across docs with the formal product name
retained, the fish logo into downloads, "Sensorics" dropped
from "[Haptica](/haptica) Sensorics" wherever it appeared. The
*"we're exceptional yes"* fragment in the 22:41 message reads
as dictation residue rather than an utterance with intent —
timing matches a 32 KB ChatGPT-output paste fifteen seconds
later, rhythm is voice-input self-talk; not a quotable line.

Three things didn't land. The storage server's wake-on-LAN
test was armed correctly in Linux (`Wake-on: g`) but never
came back from a real power-off — diagnosed at the platform
level (BIOS / NIC firmware), filed honestly as a failed
end-to-end result rather than papered over. The
mobile-Wikipedia pass left a header-reveal regression in
flight at end of hour, and the Android emulator stalled on
the original AVD before the agent killed it and switched to
Medium_Phone with explicit hardware acceleration. [Puter](/puter)'s
multi-screen → frame mapping for UI events is unresolved going
into Tuesday, as is the cascade pass on the [1Context](/1context) spec docs
that started at 22:55 and was still running when the hour
ended. One habit worth surfacing while it's fresh: when you
push back on a design call, the agent's first instinct is
usually wrong and yours is usually right — twice on [Puter](/puter)
alone today, on first contact, with the agent reversing
inside the same exchange.

<!-- section: { slug: "2026-04-18", talk: true, date: "2026-04-18" } -->
## 2026-04-18 · Saturday

<!-- empty: experiment slot -->

<!-- section: { slug: "2026-04-17", talk: true, date: "2026-04-17" } -->
## 2026-04-17 · Friday

<!-- empty: experiment slot -->

<!-- section: { slug: "2026-04-15", talk: true, date: "2026-04-15" } -->
## 2026-04-15 · Wednesday

<!-- empty: experiment slot -->

<!-- section: { slug: "2026-04-14", talk: true, date: "2026-04-14" } -->
## 2026-04-14 · Tuesday

<!-- empty: experiment slot -->

<!-- section: { slug: "2026-04-13", talk: true, date: "2026-04-13" } -->
## 2026-04-13 · Monday

<!-- empty: experiment slot -->

## See also

- Experiments lab notebook
- [Previous era — Week of 4/13/26](/for-you-2026-04-13)
