<?xml version="1.0" encoding="utf-8"?>
<feed xmlns="http://www.w3.org/2005/Atom">
    <id>https://shemol.tech/feed.en-US.xml</id>
    <title>Shemol's Blog</title>
    <updated>2026-04-22T02:38:03.959Z</updated>
    <generator>https://github.com/jpmonette/feed</generator>
    <author>
        <name>Shemol</name>
        <email>shemol106@gmail.com</email>
        <uri>https://shemol.tech</uri>
    </author>
    <link rel="alternate" href="https://shemol.tech/"/>
    <link rel="self" href="https://shemol.tech/feed.en-US.xml"/>
    <subtitle>Sneak through holes and climb over fences.</subtitle>
    <icon>https://shemol.tech/favicon.svg</icon>
    <rights>All rights reserved 2026, Shemol</rights>
    <entry>
        <title type="html"><![CDATA[Memory Management in Claude Code]]></title>
        <id>https://shemol.tech/claude-code-memory-management-en</id>
        <link href="https://shemol.tech/claude-code-memory-management-en"/>
        <updated>2026-02-27T00:00:00.000Z</updated>
        <summary type="html"><![CDATA[An overview of Claude Code’s native memory (Auto memory, CLAUDE.md), layered structure, and on-demand loading.]]></summary>
        <content type="html"><![CDATA[<h1>Memory Management in Claude Code</h1>
<p>Starting a new thread on agent memory—beginning with Claude Code’s built-in memory management.</p>
<p>Native memory in CC: <a href="https://code.claude.com/docs/en/memory#auto-memory">https://code.claude.com/docs/en/memory#auto-memory</a></p>
<p>After reading it, my gut feeling is that if I described building “this kind of system” in an interview, I would never pass.</p>
<p>CC’s native memory is file-system based.</p>
<p>There are two kinds of cross-session persistent memory:</p>
<li><strong>Auto memory</strong>: Claude automatically saves useful context—project patterns, key commands, preferences. It persists across sessions.</li>
<li><strong>CLAUDE.md files</strong>: Markdown you write and maintain, with instructions, rules, and preferences for Claude to follow.</li>
<p>Both are loaded into Claude’s context at the start of each session, but auto memory only loads the <strong>first 200 lines</strong> of its main file.</p>
<p>I don’t think truncation is a great approach…</p>
<strong>Layered memory in CC</strong>
<p>Organization-wide (all users in the org):</p>
<li><strong>Managed policy</strong>: IT/DevOps-managed instructions org-wide—coding standards, security, compliance.</li>
<p>Team / project-wide (everyone on the project):</p>
<li><strong>Project memory</strong>: Shared team instructions—architecture, coding standards, common workflows.</li>
<li><strong>Project rules</strong>: Modular, topic-specific project rules—language guides, testing norms, API standards.</li>
<p>Personal:</p>
<li><strong>Project memory</strong>: Personal preferences for a project—sandbox URLs, favorite test data.</li>
<li><strong>Auto memory</strong>: Project patterns, debugging insights, architecture notes.</li>
<p>What auto memory covers:</p>
<li>Project patterns: build commands, testing conventions, style preferences.</li>
<li>Debugging insights: fixes for tricky issues, common failure modes.</li>
<li>Architecture notes: key files, module relationships, important abstractions.</li>
<li>Personal preferences: communication style, workflow habits, tool choices.</li>
<p>I won’t list storage locations—you can read the docs.</p>
<li>As mentioned, <code>MEMORY.md</code> only loads the first 200 lines, and Claude Code is instructed to stay concise and move detailed topics into separate topic files.</li>
<li><strong>On-demand reads</strong>: Files like <code>debugging.md</code> or <code>patterns.md</code> are <strong>not</strong> loaded at startup. When Claude needs them, it reads them with its normal file tools.</li>
<li>Claude reads and writes memory files during a session, so you’ll see memory update as you work.</li>
<p>Claude Code reads memory <strong>recursively</strong>: from the current working directory it walks <strong>up</strong> toward <code>/</code> (but <strong>not</strong> including <code>/</code>), loading any <code>CLAUDE.md</code> or <code>CLAUDE.local.md</code> it finds. That helps in large repos—e.g. you run CC under <code>foo/bar/</code> while <code>foo/CLAUDE.md</code> and <code>foo/bar/CLAUDE.md</code> both exist.</p>
<p>It also <strong>discovers</strong> <code>CLAUDE.md</code> files under the current directory subtree. Those are <strong>not</strong> loaded at startup; they enter context only when Claude reads files in those subtrees.</p>
<p>You can load memory from other directories, edit memory, and use modular rules—I’ll skip the rest.</p>]]></content>
        <author>
            <name>Shemol</name>
            <email>shemol106@gmail.com</email>
            <uri>https://shemol.tech</uri>
        </author>
    </entry>
    <entry>
        <title type="html"><![CDATA[Chen Hao (325)]]></title>
        <id>https://shemol.tech/about-chen-hao-325-en</id>
        <link href="https://shemol.tech/about-chen-hao-325-en"/>
        <updated>2026-02-17T00:00:00.000Z</updated>
        <summary type="html"><![CDATA[Something I saw on Zhihu about Chen Hao (aka “Haozi” / left ear).]]></summary>
        <content type="html"><![CDATA[<h1>Chen Hao and the “3.25” story</h1>
<p>I came across this while organizing notes and wanted to keep it on the blog.</p>
<strong>Author:</strong> Anonymous user  
<strong>Link:</strong> https://www.zhihu.com/question/29614511/answer/45025842  
<strong>Source:</strong> Zhihu  
<strong>Rights:</strong> © the author. For commercial reprint, contact the author for permission. For non‑commercial use, please cite the source.
<p>This has to be anonymous. I’m conflict‑of‑interest close to people on Haozi’s team and know a few things.</p>
<p>Roughly: Alibaba Cloud ECS had a project called VPC; it had been going about a year and still wasn’t launched. From the start it was on the wrong track—I heard Haozi argued early with ECS that their technical approach was wrong, but the project was huge—maybe 30–40 people across many teams—and Haozi couldn’t steer it. In my friend’s words, “there are too many gods at Alibaba Cloud.”</p>
<p>From day one the overtime was brutal—Monday through Sunday, until 2–3 a.m., for three to four months. Hard to believe.</p>
<p>My friend was on that project too; we complained daily about silly technical mistakes—some so bad only a non‑technical person would make them.</p>
<p>Haozi couldn’t control the project, so he wouldn’t let his own people work those hours—he thought the crude errors came from overtime, and everyone knew he opposed excessive OT. I heard he half‑joked to the team: if you work past 8 p.m., that’s a 3.25 performance rating and a “C” on values. (I think he was mocking people chasing KPIs at any cost.)</p>
<p>The project still failed—apparently major rework even now. After three months it was all bugs and couldn’t go live; leadership got involved and heads rolled. The lead told the big boss part of the reason was Haozi’s team “not pulling their weight”—they wouldn’t work until dawn. The next day the boss tried to transfer my friend and others on Haozi’s team to the team that stayed until dawn every night. They talked all day; nobody wanted to go.</p>
<p>In reality? The two people from Haozi’s team finished their modules on time without that overtime, and their bug count was a small fraction of the total.</p>
<p>The outcome? The boss forced a decision: they didn’t have to move, but work would be assigned by the other side—Haozi was effectively sidelined, and his team was gone in practice.</p>
<p>After Haozi criticized that winter‑layoff PR piece on Weibo, company PR got involved; his new boss reassigned his whole team without Haozi or my friend knowing. Alibaba’s management can be pretty rough.</p>
<p>That’s probably the kind of “values” persecution Haozi hinted at on Weibo.</p>]]></content>
        <author>
            <name>Shemol</name>
            <email>shemol106@gmail.com</email>
            <uri>https://shemol.tech</uri>
        </author>
    </entry>
    <entry>
        <title type="html"><![CDATA[2026.1.31-en]]></title>
        <id>https://shemol.tech/2026-1-31-en</id>
        <link href="https://shemol.tech/2026-1-31-en"/>
        <updated>2026-01-31T00:00:00.000Z</updated>
        <summary type="html"><![CDATA[Thoughts and events from the last couple of weeks.]]></summary>
        <content type="html"><![CDATA[<h1>2026.1.31</h1>
<p>I meant to write after the lab dinner Wednesday before break, kept delaying, then figured I’d wait until after the paper submission—finally today, dorm packed, I have time.</p>
<p>Not sure what to write; felt busy but unclear on <em>what</em> was busy. Treat this as consolidating what I learned and sketching a plan.</p>
<strong>Baseline:</strong> AI is a capability multiplier, so we should spend more time learning and stockpiling ideas. Main thread stays learning—deeper, more detailed. Internships/jobs stress me out, but part of me thinks they’re not <em>that</em> important—they’re external scorecards. Joining a company means shipping <em>their</em> product and <em>their</em> ideas. Why not try building your own—anything, good or bad. Walking in with a portfolio might help recruiting too.
<h1>Full stack (for now: front end)</h1>
<p>Learning front end—TypeScript, React, small projects. Ideas not started yet; that’ll cost time, revisiting basics, and probably interview cram next term. End goal is full stack: web → app → backend/DB, step by step.</p>
<h1>Agents</h1>
<p>My agent knowledge still feels shallow—want to go deeper: Bojie’s courses/streams, LangChain-style frameworks, SDKs, APIs, hooks, skills, keep up with new tooling.</p>
<p>Including “frontend skills” and similar drops from seniors.</p>
<h2>Memory</h2>
<p>Including memory as a subtopic—it matters. I used to think “filesystem or RAG” and that was shallow. Internship + Bai Ting’s memory survey paper + many memory products showed there’s more to unpack.</p>
<h1>Paper</h1>
<p>Jan 29 ICML deadline—recently my senior and I cranked on the paper. I drew lots of figures/tables and got a lot of coaching on charts, tables, layout, prose—learned tons. All-nighter together—memorable. I need to internalize how he writes; next term more papers for graduation, but that can wait—move from “memory” to “disk” for now.</p>
<h1>Internship</h1>
<p>After the 8:00 deadline I printed internship paperwork; next day (the 30th, yesterday) I did a one-day trial at a memory startup—met cool people, learned a lot, saw products and their toolchain. For my own reasons I’m not going back next week—focus on my own stuff. Summer internship, day-to-day internship—I’m parking both and trying to build product first.</p>
<p>Internships feel like external validation loops—everyone says daily internships matter, summer ties to return offers—but the core is whether <em>you</em> can build and ship. OSS contributions too—pause for now; try <em>your</em> product. No time left—start creating. I want to apply with a portfolio. Fits the trend of AI shrinking the junior bench. Fundamentals still have to be solid.</p>
<h1>Be open</h1>
<p>I’ve been trying to be more open and talk to more people—working with my senior on the paper, even that one-day stint, reinforced it: interact more, assume less.</p>
<p>Trying different ways to connect. I want to shadow Kubernetes release team—prep starts now.</p>
<p>Also train expression—voice input (Typeless, Autotyper, etc.)—maybe it helps with stuttering.</p>
<h1>Exercise</h1>
<p>This morning I did squats—building doable habits matters. My girlfriend jokes “after 25 men are 65…”; my senior hammers exercise too—take it seriously.</p>
<h1>In the end</h1>
<p>That’s mostly it. Periodic summaries are nice—I type fast; lots of fluff.</p>
<p>Using Things for todos/project management—hope it sticks. Telegram for links to articles I’ve read.</p>
<p>Later I might organize notes on agent memory products.</p>]]></content>
        <author>
            <name>Shemol</name>
            <email>shemol106@gmail.com</email>
            <uri>https://shemol.tech</uri>
        </author>
    </entry>
    <entry>
        <title type="html"><![CDATA[2026.1.4 - Agent-en]]></title>
        <id>https://shemol.tech/2026-1-4-agent-en</id>
        <link href="https://shemol.tech/2026-1-4-agent-en"/>
        <updated>2026-01-05T00:00:00.000Z</updated>
        <summary type="html"><![CDATA[Brushing up on agents.]]></summary>
        <content type="html"><![CDATA[<h1>2026.1.4 — Agent</h1>
<p>Trying to recreate the Cursor homepage left me frustrated, so I pivoted back to agents.  </p>
<a href="https://01.me/2025/12/silicon-valley-ai-insights-2025/">https://01.me/2025/12/silicon-valley-ai-insights-2025/</a>
<p>The “big-company day-to-day dev” bit under AI coding effects is almost funny—so little time actually writing code (~15%). Not how I want to work.</p>
<p>Research code is fine for agents and scripts.</p>
<p>Infrastructure code—Linux kernel, consensus protocols—not really there yet.</p>
<strong>Vibe coding</strong> best practice: split work; generate as little code per step as possible.
<strong>TDD</strong> honestly feels more reliable than “Ralph-style” dev to me…
<strong>Large refactors</strong> need a solid spec—I’ve even seen papers that write a Linux filesystem from spec… curious how well that works.
<p>A <strong>strict eval system</strong> is also a way to accumulate code data—everyone knows data matters now; each company builds datasets.</p>
<p>The giant Silicon Valley dynamics were eye-opening.</p>
<strong>Startup takeaways</strong> were useful too—you need a real niche; don’t compete in generic lanes the big players will own. Find a narrow vertical.
<p>You can’t skip engineering—only by doing it do you get a true feel for the work. Vibe coding, training models, whatever: don’t trust hearsay; try it yourself.</p>
<h1>Technical practice</h1>
<h2>Context engineering stack</h2>
<li>System prompt  </li>
<li>Tools  </li>
<li>Data retrieval  </li>
<li>Long-horizon optimizations  </li>
<strong>Data retrieval</strong> paradigm shift: the new pattern is <strong>just-in-time</strong> loading:
<li>Strategy 1: lightweight identifiers  </li>
<li>Progressive disclosure  </li>
<li>Autonomous exploration  </li>
<p>All models degrade on long context. When you exceed the window:</p>
<li>Context compression  </li>
<li>Agents maintain <strong>explicit memory artifacts</strong>—working notes of decisions, learning, state—retrieved on demand instead of stuffed into context  </li>
<li><strong>Sub-agents</strong>: decompose into specialists with narrow, clear context; the main agent orchestrates and synthesizes  </li>
<strong>How skills work</strong>
<p>Claude can discover and load them dynamically.</p>
<p>``<code>markdown</p>
<p>pdf/SKILL.md (main file)</p>
<p>├── YAML Frontmatter (name, description)</p>
<p>├── Overview</p>
<p>└── References: "For advanced features, see /reference.md"</p>
<p>pdf/reference.md (deep reference)</p>
<p>└── Advanced PDF processing features...</p>
<p>pdf/forms.md (specialized)</p>
<p>└── PDF form filling instructions...</p>
</code>``
<li>Memory  </li>
<li>Sub-agents & collaboration  </li>
<li>Dynamic tool calls  </li>
<li>Code generation & execution  </li>
<li>Web search  </li>
<li>Agentic search  </li>
<li>Long-running tasks  </li>]]></content>
        <author>
            <name>Shemol</name>
            <email>shemol106@gmail.com</email>
            <uri>https://shemol.tech</uri>
        </author>
    </entry>
    <entry>
        <title type="html"><![CDATA[年终总结-2025-en]]></title>
        <id>https://shemol.tech/year-review-2025-en</id>
        <link href="https://shemol.tech/year-review-2025-en"/>
        <updated>2026-01-01T00:00:00.000Z</updated>
        <summary type="html"><![CDATA[Personal notes wrapping up 2025.]]></summary>
        <content type="html"><![CDATA[<h1>Year in review — 2025</h1>
<p>Another year down. I meant to write this on December 31, but I had two interviews morning and afternoon, then went to my partner’s place, had dinner with her family, met friends for New Year’s Eve—no time left. Sitting down today I’m getting it on paper.</p>
<p>On the Fediverse I found my early-2025 posts and rewound to January. Late 2024 I binged a lot of anime and manga—<em>Cowboy Bebop</em>, rewatching <em>Evangelion</em>, <em>Fire Punch</em>, rereading <em>Chainsaw Man</em>, rereading <em>Look Back</em>. Early 2025 I kept watching films and shows.</p>
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E5%B9%B4%E7%BB%88%E6%80%BB%E7%BB%93-2025_1770867598746.png" alt="" />
<p>2025 was my zodiac year; my family wanted a ceremony to “appease Tai Sui” and gave me an amulet. I wore it less than two weeks before it lived in my dorm. I don’t believe in it, and I didn’t like the fortune-teller. Looking back, I don’t think “fate” delivered surprise joys or tragedies—almost everything has traceable causes if you pay attention. Keep observing; keep connecting the dots.</p>
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E5%B9%B4%E7%BB%88%E6%80%BB%E7%BB%93-2025_1770867599669.png" alt="" />
<p>Tried to do things with my own hands—failed a bunch, lol.</p>
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E5%B9%B4%E7%BB%88%E6%80%BB%E7%BB%93-2025_1770867600558.png" alt="" />
<p>Some dreams.</p>
<p>First time trying cosplay in January with my roommate—I was Shinji, he was Kaworu. Exhausting but fun day; an Asuka asked for a photo.</p>
<p>After we swapped contacts I learned she goes by “Teacher Ansa.”</p>
<p>I also went to a con on the 17th—tickets bought ages ago, felt wasteful to skip, and I wanted to see Teacher Ansa again (lol).</p>
<p>We kept chatting on QQ; after I went back to campus we met up again.</p>
<p>She’d said a friend’s reading predicted she’d “definitely get a partner in February.” I leaned into the prophecy and confessed on the last day of February—we started dating.</p>
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E5%B9%B4%E7%BB%88%E6%80%BB%E7%BB%93-2025_1770867601527.png" alt="" />
<p>Like <em>The Tatami Galaxy</em> says: “Nothing is more trivial than ‘they lived happily ever after.’”</p>
<p>Life changed a lot compared with earlier years—from almost no rituals to planning for two, holidays, seeing things from more angles. Lots of homework: trust, growth, intimacy… From losing ~10 kg to gaining ~10 above my old weight—even in a relationship of two, you’re mostly facing yourself.</p>
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E5%B9%B4%E7%BB%88%E6%80%BB%E7%BB%93-2025_1770867602537.png" alt="" />
<p>Growth hit from several directions this year.</p>
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E5%B9%B4%E7%BB%88%E6%80%BB%E7%BB%93-2025_1770867603510.png" alt="" />
<p>English improved a bit—not enough. Japanese: studied a little at New Year’s, then stopped.</p>
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E5%B9%B4%E7%BB%88%E6%80%BB%E7%BB%93-2025_1770867604495.png" alt="" />
<p>Met site admin shrik3 in Beijing—ate and talked for two or three hours. Wonderful host—thanks for the gift!</p>
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E5%B9%B4%E7%BB%88%E6%80%BB%E7%BB%93-2025_1770867606081.png" alt="" />
<p>In April I still thought like this; second half I dove into agents and consensus instead.</p>
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E5%B9%B4%E7%BB%88%E6%80%BB%E7%BB%93-2025_1770867607192.png" alt="" />
<p>So what should I do?</p>
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E5%B9%B4%E7%BB%88%E6%80%BB%E7%BB%93-2025_1770867608179.png" alt="" />
<p>Not many shows this year; <em>GQuuuuuuX</em> was one we both liked. Summer I was buried in research—no cosplay. My partner changed jobs (no weekends off), so cons became rare.</p>
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E5%B9%B4%E7%BB%88%E6%80%BB%E7%BB%93-2025_1770867609122.png" alt="" />
<p>Equity volatility around April gave me a small win; from then to year-end the market mostly drifted up. I’m a value-investing “door disciple”—I’ll keep holding.</p>
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E5%B9%B4%E7%BB%88%E6%80%BB%E7%BB%93-2025_1770867610110.png" alt="" />
<p>Whenever I’m anxious I spam internship apps—same at year-end. Three small-company interviews in December. Last day of the year I did two back-to-back and finally saw my gaps clearly: still algorithms and “eight-part essays,” and projects need to match the role. Even if I’m nervous beforehand, facing interviewers I drop into flow—just answering, nothing else. Exhausting afterward.</p>
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E5%B9%B4%E7%BB%88%E6%80%BB%E7%BB%93-2025_1770867611159.png" alt="" />
<p>Roughly hit the plan above—paper work is mostly done; only experiments polish and writing left. Lighter load, so I can prep internships in parallel.</p>
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E5%B9%B4%E7%BB%88%E6%80%BB%E7%BB%93-2025_1770867612269.png" alt="" />
<p>Forgot to mention—we cosplayed together too. Once she was Rei and I was Rihito (<em>Look Back</em>); another time we went to the <em>EVA</em> pop-up at Chaoyang Joy City (I was Shinji, she was Asuka).</p>
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E5%B9%B4%E7%BB%88%E6%80%BB%E7%BB%93-2025_1770867613303.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E5%B9%B4%E7%BB%88%E6%80%BB%E7%BB%93-2025_1770867614401.png" alt="" />
<p>Do I have that kind of talent?</p>
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E5%B9%B4%E7%BB%88%E6%80%BB%E7%BB%93-2025_1770867615302.png" alt="" />
<p>Where is my dream?</p>
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E5%B9%B4%E7%BB%88%E6%80%BB%E7%BB%93-2025_1770867616306.png" alt="" />
<p>I still want to ask: would Aaron Swartz be happy about today’s LLMs? Knowledge is easier to reach—but as Rob Pike warned, open-source code used for training can entrench monopolies. That can’t be what Aaron wanted.</p>
<p>End of May / start of June my advisor had dinner with me and two seniors. One is doing a PhD at NUS—super extroverted, we talked forever: travel plans, academic gossip… Incredible energy—said at his peak he could pull an all-nighter then run 1000 m under 3:30. Sharp memory for gossip details; I mentioned movies near campus and he rattled off two cinema names. Those two things stuck—and so did how far I feel from that energy and recall. He also went through a rough patch; what he shared, I listened to, but some pain only he can touch.</p>
<p>Soon after I started research with him on agents and consensus—busy until recently before things clarified. I won’t recap the work in a year-end post; hoping next year brings solid research output too.</p>
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E5%B9%B4%E7%BB%88%E6%80%BB%E7%BB%93-2025_1770867617196.png" alt="" />
<p>A silly moment with my partner, lol.</p>
<p>On “vibe coding” / AI-assisted dev: in 2024 Cursor autocomplete helped me finish Summer OSPP. After that I didn’t lean hard on AI coding—until May I still believed you should hand-write code or you miss the craft.</p>
<p>Doing a side project for my advisor, I tried Trae and watched it spit out tens of thousands of lines in one go. Right or wrong (there were bugs), the sheer scale seemed like a signal.</p>
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E5%B9%B4%E7%BB%88%E6%80%BB%E7%BB%93-2025_1770867618360.png" alt="" />
<p>By year-end, in general software nobody claims “pure hand coding” anymore—though for Linux kernel / consensus work AI still hits a wall.</p>
<p>What separates programmers from “vibe coders” is still depth. Great tools multiply a real engineer; you have to <em>be</em> an engineer first.</p>
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E5%B9%B4%E7%BB%88%E6%80%BB%E7%BB%93-2025_1770867619325.png" alt="" />
<p>Hama Sushi probably wins restaurant of the year—we went constantly. Sushi Lang once cost 300+ RMB; never again.</p>
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E5%B9%B4%E7%BB%88%E6%80%BB%E7%BB%93-2025_1770867620261.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E5%B9%B4%E7%BB%88%E6%80%BB%E7%BB%93-2025_1770867621202.png" alt="" />
<p>My childhood friend interned in Beijing in July—we had a month together before he went back to school in Australia. I probably can’t feel the loneliness he describes abroad; I couldn’t do much either, just answer when I could.</p>
<p>“Where talent lies”:</p>
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E5%B9%B4%E7%BB%88%E6%80%BB%E7%BB%93-2025_1770867622183.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E5%B9%B4%E7%BB%88%E6%80%BB%E7%BB%93-2025_1770867623336.png" alt="" />
<p>Still anxious—but I’m pointed the right direction, so keep going.</p>
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E5%B9%B4%E7%BB%88%E6%80%BB%E7%BB%93-2025_1770867624285.png" alt="" />
<p>Read Hawstein’s essay; something started sprouting.</p>
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E5%B9%B4%E7%BB%88%E6%80%BB%E7%BB%93-2025_1770867625158.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E5%B9%B4%E7%BB%88%E6%80%BB%E7%BB%93-2025_1770867626033.png" alt="" />
<p>August I only went home a little over a week—few days in Weihai, sea air, local food.</p>
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E5%B9%B4%E7%BB%88%E6%80%BB%E7%BB%93-2025_1770867626973.png" alt="" />
<p>“You’d end up like this no matter which path you picked.”</p>
<p>In September my niece started kindergarten. During Qingming I went home for a cousin’s wedding and learned my sister was pregnant again—later she said likely a boy; he arrived safely end of December. Congrats to her and her husband.</p>
<p>Sept–Oct I grinded LeetCode, then research ate the time—yesterday’s interview problem stumped me, so back to drilling.</p>
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E5%B9%B4%E7%BB%88%E6%80%BB%E7%BB%93-2025_1770867627990.png" alt="" />
<p>The other side of vibe coding.</p>
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E5%B9%B4%E7%BB%88%E6%80%BB%E7%BB%93-2025_1770867629256.png" alt="" />
<p>My phone was dying so I upgraded Mom to a 17 Pro Max; I’m on her old OPPO. OPPO ships with Google services out of the box—very nice.</p>
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E5%B9%B4%E7%BB%88%E6%80%BB%E7%BB%93-2025_1770867630264.png" alt="" />
<p>Finished Hot 100 end of October…</p>
<p>December: thesis proposal defense—passed, slightly nerve-wracking. Next milestone Dec 2026 mid-term; if the paper is published by then the committee should be kind.</p>
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E5%B9%B4%E7%BB%88%E6%80%BB%E7%BB%93-2025_1770867631378.png" alt="" />
<p>Misc thoughts.</p>
<p>Late December my senior visited Beijing—we ate, talked for hours, understood him better, glimpsed how he moves through the world.</p>
<p>Big change this year: much of life is <em>with</em> my partner—Christmas, New Year’s with friends, facials together… Let’s keep walking next year.</p>
<p>For 2026 I want a publication and a solid internship—clear that hurdle first, then bigger dreams.</p>
<p>Keep watching and listening closely; don’t miss small details; keep thinking.</p>]]></content>
        <author>
            <name>Shemol</name>
            <email>shemol106@gmail.com</email>
            <uri>https://shemol.tech</uri>
        </author>
    </entry>
    <entry>
        <title type="html"><![CDATA[2025.12.28]]></title>
        <id>https://shemol.tech/2025-12-28-en</id>
        <link href="https://shemol.tech/2025-12-28-en"/>
        <updated>2025-12-28T00:00:00.000Z</updated>
        <summary type="html"><![CDATA[Weekly report for the last week of '25.]]></summary>
        <content type="html"><![CDATA[<h1>2025.12.28</h1>
<p>On Monday, I was searching for relevant papers on the work I’m currently doing. The more I searched, the more papers I found, and it occurred to me whether I could also publish a review paper along the way. I asked my senior about it, and he said it could be submitted to IJCAI. After organizing the papers, I sent them to him on Tuesday.</p>
<p>On Monday evening, I had dinner with three seniors at Tingyuan Jiangnan Cai in Zhongguancun. Listening to them talk about many academic matters.My senior is not only academically accomplished but also outgoing with high emotional intelligence, making him very adept at social interactions. We ended up chatting until after ten o'clock and then took the subway home together.</p>
<p>On Wednesday, I picked up front-end development again and submitted pull requests (PRs) for Dify and Cherry Studio. My plan is to start small by writing tests and fixing bugs before gradually delving deeper into these projects.</p>
<p>Thursday was Christmas Day. In the morning, my girlfriend and I played a non-horror escape room in Sanlitun (I don’t dare try mildly scary ones anymore—they're such a trap!). Then we went shopping around Guomao Mall where we took some photos together while enjoying desserts and coffee before heading over towards Wangfujing Central Plaza area which had festive decorations set up; there we ate Japanese cuisine later that night meeting up with high school friends visiting an anime goods store ("谷店"). After closing time at mall everyone went singing karaoke for two hours until dispersing around 1 AM – though honestly by then during KTV session energy levels were pretty low... typical low-energy person here!</p>
<p>Saturday noon involved video conferencing with one of those seniors who advised focusing solely coding rather than worrying about writing tasks since likely both survey paper or Infocom poster drafts would be handled by him instead anyway . That evening met university classmates catching each other's updates happily chatting away .</p>
<p>Friday through Saturday mostly consisted submitting PRs toward Dify/Cherry Studio projects as mentioned earlier .</p>
<p>Sunday today spent sprucing personal blog design quite satisfied haha !</p>
<p>Reflecting back this week feels incredibly long even though dining out happened just last Monday seems like ages ago now somehow …</p>
<a href="https://shemol.tech/">https://shemol.tech/</a>]]></content>
        <author>
            <name>Shemol</name>
            <email>shemol106@gmail.com</email>
            <uri>https://shemol.tech</uri>
        </author>
    </entry>
    <entry>
        <title type="html"><![CDATA[2025.12.21-en]]></title>
        <id>https://shemol.tech/2025-12-21-en</id>
        <link href="https://shemol.tech/2025-12-21-en"/>
        <updated>2025-12-21T00:00:00.000Z</updated>
        <summary type="html"><![CDATA[Weekly notes for this week.]]></summary>
        <content type="html"><![CDATA[<h1>2025.12.21</h1>
<p>I felt I needed to improve how I express myself, so I’m picking up the habit of weekly notes again…  </p>
<p>Also I often read articles once and move on without digesting them—maybe writing helps ideas settle.</p>
<p>Last weekend I went to AI Maker Summit. I saw Li Bojie post on the fediverse that he’d be speaking, checked the site—tickets were 300+ RMB, hesitated, then bit the bullet. At the venue there was no Bojie talk; next day on fedi it looked like he’d gone to the US. Friday he published a blog on his Silicon Valley AI observations—I read it two or three times; really interesting.  </p>
<a href="https://01.me/2025/12/silicon-valley-ai-insights-2025/">https://01.me/2025/12/silicon-valley-ai-insights-2025/</a>
<p>What helped me personally was the “AI coding best practices” bit: break tasks down and keep each AI-generated chunk under ~500 lines. Use different models to review code.</p>
<p>Across the summit, Dao Jie’s talk, one on post-training, and an investor’s stood out. I liked the investor’s line: “We’ve looked at <em>so many</em> projects…”—grounded.</p>
<p>Dao Jie mentioned the next session in the other hall—an agent memory system—and said if LLMs fully solved memory, that product category wouldn’t exist.</p>
<p>That stuck with me, together with Bojie’s post: as an indie or small team you need a clear niche so your product isn’t swallowed as models improve <em>or</em> cloned by giants. Even as engineers we should read AI papers and tech reports to track model progress.</p>
<p>A day or two after, I got pulled into the summit WeChat; next day a founder posted their Product Hunt launch.</p>
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/2025.12.21_1770867667048.png" alt="" />
<p>I already believe reading itself can’t be replaced by AI, so I checked Product Hunt—“read a book with famous people”—tried the app and it was genuinely fun!</p>
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/2025.12.21_1770867668061.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/2025.12.21_1770867669345.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/2025.12.21_1770867670502.png" alt="" />
<p>Seeing Jobs and Munger’s lines reminded me of <em>Poor Charlie’s Almanack</em> and the Jobs bio—a way to revisit what I’d read and connect dots. I grabbed the 50% off on Discord and subscribed to the <strong>Annual</strong> Plan without hesitation—buy more, save more.</p>
<p>I’ve been reading Tony Dinh’s <em>My Indie Book</em> on Readever; toward the end it dragged a bit and I wanted to finish fast. Maybe tonight or tomorrow I’ll wrap it.</p>
<p>Just now another interesting example:</p>
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/2025.12.21_1770867671927.png" alt="" />
<p>Jason Young spent huge effort and money adapting OpenRouter’s chat format for Claude Code—then OpenRouter shipped a compatible API. Tony Dinh describes a similar choice in his book.</p>
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/2025.12.21_1770867673193.png" alt="" />
<p>Mon–Wed this week I prepped my thesis proposal defense. Thought I was ready; a committee member said it felt like a PM pitching a product. A labmate said someone might have scored me in the 70s—not sure if I need a second round at the college; results next week…</p>
<p>I’ll save the year-end post for the 31st—something surprising could still happen; the year isn’t over until the last day.</p>]]></content>
        <author>
            <name>Shemol</name>
            <email>shemol106@gmail.com</email>
            <uri>https://shemol.tech</uri>
        </author>
    </entry>
    <entry>
        <title type="html"><![CDATA[bytedance-frontend-eg-camp-en]]></title>
        <id>https://shemol.tech/bytedance-frontend-eg-camp-en</id>
        <link href="https://shemol.tech/bytedance-frontend-eg-camp-en"/>
        <updated>2025-11-10T00:00:00.000Z</updated>
        <summary type="html"><![CDATA[ByteDance frontend engineering camp written exam.]]></summary>
        <content type="html"><![CDATA[<h1>ByteDance Frontend Engineering Camp — Written Exam</h1>
<p>Multiple choice plus coding—jotting down the main topics.</p>
<h1>Multiple choice</h1>
<p>Mostly data structures & algorithms, computer networks, and HTML/CSS/JS basics.</p>
<h2>Data structures & algorithms</h2>
<p>Felt like a freshman/sophomore DS&A exam…</p>
<p>Several sorting questions—bubble, quicksort, etc.</p>
<p>One very easy complexity question.</p>
<p>Two or three binary tree traversal questions.</p>
<p>One stack question.</p>
<h2>Computer networks</h2>
<blockquote>Which transport-layer protocol is unreliable?</blockquote>
<p>UDP.</p>
<blockquote>A DNS message question</blockquote>
<p>I only remember option A: QR=0 means query, QR=1 means response.</p>
<h2>HTML, CSS, JS</h2>
<blockquote>How to normalize <code>margin</code>/<code>padding</code> across browsers?</blockquote>
<p>CSS reset:</p>
<p>``<code>css</p>
<p>* {</p>
<p>  margin: 0;</p>
<p>  padding: 0;</p>
<p>}</p>
</code>`<code>
<p>There’s also Normalize.css—I should study that.</p>
<blockquote>A CSS </code>float<code> question asking which usage is wrong</blockquote>
<p>Options included  </p>
<p>A. </code>float: **<code>  </p>
<p>B. </code>float: none<code>  </p>
<p>C. </code>float: left<code>  </p>
<p>D. </code>float: right<code>  </p>
<p>Pick A.</p>
<blockquote>How to fix parent height collapse after floats</blockquote>
<p>I forgot the exact options. Common fixes:</p>
<p>1. </code>::after<code> clearfix.  </p>
<p>2. Modern: </code>display: flow-root<code>.  </p>
<p>3. BFC via </code>overflow<code>—but </code>overflow: hidden<code> hides intentional overflow (dropdowns, shadows, tooltips).</p>
<blockquote>Which option is a BFC application?</blockquote>
<p>BFC (block formatting context)—applications:</p>
<p>1. <strong>Contain floats</strong> (classic): parent has floated children → collapse; give the <strong>parent</strong> </code>overflow: hidden<code> or </code>display: flow-root<code>.  </p>
<p>2. <strong>Stop vertical margin collapse</strong> between adjacent blocks: wrap in a new parent that establishes a BFC.  </p>
<p>3. <strong>Two/three-column layouts</strong>: e.g. left </code>float: left<code> (fixed width), right column BFC (</code>overflow: hidden<code> / </code>flow-root<code>) fills the rest.</p>
<p>Interview variants: “which property <strong>can</strong> establish a BFC?”</p>
<li></code>overflow: hidden<code> / </code>auto<code> / </code>scroll<code>  </li>
<li></code>display: flow-root<code>  </li>
<li></code>float: left<code> / </code>right<code>  </li>
<li></code>position: absolute<code> / </code>fixed<code>  </li>
<li></code>display: inline-block<code>  </li>
<li></code>display: table-cell<code>  </li>
<li>flex/grid items  </li>
<p>When you see </code>overflow: hidden<code> or </code>flow-root<code> fixing collapse, margins, or columns—that’s BFC.</p>
<blockquote></code>requestAnimationFrame<code> in JS</blockquote>
<p>Skimmed the red book—still fuzzy; revisit later.</p>
<blockquote></code>setTimeout<code> vs </code>Promise.then()<code> ordering</blockquote>
<p>JS schedules three kinds of work:</p>
<li><strong>Synchronous</strong>: runs on the call stack immediately.  </li>
<li><strong>Microtasks</strong>: after current sync code, before the next macrotask—</code>Promise.then()<code> / </code>.catch()<code> callbacks.  </li>
<li><strong>Macrotasks</strong>: after sync + all microtasks, one at a time—</code>setTimeout<code> / </code>setInterval<code> callbacks.</li>
</code>`<code>javascript
<p>console.log('1. sync: start');</p>
<p>setTimeout(() => {</p>
<p>  console.log('2. macro: setTimeout 1');</p>
<p>}, 0);</p>
<p>new Promise((resolve, reject) => {</p>
<p>  console.log('3. sync: Promise executor');</p>
<p>  setTimeout(() => {</p>
<p>    console.log('4. macro: setTimeout 2 (inside Promise)');</p>
<p>    resolve();</p>
<p>  }, 0);</p>
<p>}).then(() => {</p>
<p>  console.log('5. micro: Promise.then 1');</p>
<p>});</p>
<p>Promise.resolve().then(() => {</p>
<p>  console.log('6. micro: Promise.then 2');</p>
<p>});</p>
<p>console.log('7. sync: end');</p>
</code>`<code>
<p>1. Log </code>1<code>.  </p>
<p>2. Schedule macrotask </code>setTimeout 1<code>.  </p>
<p>3. Run Promise executor synchronously → log </code>3<code>.  </p>
<p>4. Schedule macrotask </code>setTimeout 2<code>.  </p>
<p>5. </code>Promise.resolve().then<code> → enqueue microtask.  </p>
<p>6. Log </code>7<code>.  </p>
<p>7. Drain microtasks → </code>6<code>.  </p>
<p>8. Run first macrotask → … and so on through </code>4<code>, microtasks, etc.</p>
<h1>Coding</h1>
<p>ACM-style I/O is still unfamiliar—need more NowCoder practice.</p>
<blockquote>Given equations with parameters A, B, C, count real solutions to  </blockquote>
<blockquote>X² + A²Y² + C = 0  </blockquote>
<blockquote>Y² + Z² + B = 0  </blockquote>
<blockquote>Z² + A = 0  </blockquote>
<p>Feels like pure math—solve and case-split.</p>
<blockquote>Among k-digit integers, how many have digit-sum m?  </blockquote>
<blockquote>e.g. k=2, m=3 → 12, 21, 30.</blockquote>
</code>`<code>python
<p>import functools</p>
<p>def solve_digit_sum(k:int,m:int)->int:</p>
<p>	@functools.lru_cache(None)</p>
<p>	def count_sequences(digits:int,target_sum:int)->int:</p>
<p>		if target_sum < 0:</p>
<p>			return 0</p>
<p>			</p>
<p>		if target_sum > 9*digits:</p>
<p>			return 0</p>
<p>			</p>
<p>		if digits == 0:</p>
<p>			return 1 if target_sum==0 else 0</p>
<p>			</p>
<p>		total_ways = 0</p>
<p>		for d in range(10):</p>
<p>			total_ways += count_sequences(digits - 1,target_sum - d)</p>
<p>		</p>
<p>		return total_ways</p>
<p>		</p>
<p>		</p>
<p>	if k<=0:</p>
<p>		return 0</p>
<p>		</p>
<p>	final_count = 0</p>
<p>	</p>
<p>	for d1 in range(1,10):</p>
<p>		final_count += count_sequences(k-1,m-d1)</p>
<p>		</p>
<p>	return final_count</p>
</code>``
<p>Another problem (paraphrased): define “cost” of a simple path as the <strong>maximum</strong> edge weight on that path. In a connected undirected weighted simple graph, count ordered pairs (u, v) whose <strong>minimum</strong> such cost equals k. Felt too hard for now—I’ll park it.</p>]]></content>
        <author>
            <name>Shemol</name>
            <email>shemol106@gmail.com</email>
            <uri>https://shemol.tech</uri>
        </author>
    </entry>
    <entry>
        <title type="html"><![CDATA[Context Engineering for AI Agents with LangChain and Manus]]></title>
        <id>https://shemol.tech/Context-Engineering-for-AI-Agents-with-LangChain-and-Manus</id>
        <link href="https://shemol.tech/Context-Engineering-for-AI-Agents-with-LangChain-and-Manus"/>
        <updated>2025-10-20T00:00:00.000Z</updated>
        <summary type="html"><![CDATA[learning notes of context engineering]]></summary>
        <content type="html"><![CDATA[<h1>Context Engineering for AI Agents with LangChain and Manus</h1>
<p>months ago Manus post a blog talking about Context Engineering.</p>
<a href="https://manus.im/blog/Context-Engineering-for-AI-Agents-Lessons-from-Building-Manus">https://manus.im/blog/Context-Engineering-for-AI-Agents-Lessons-from-Building-Manus</a>
<p>You don’t need all context to live in the messages history of your agent, so we need context offloading.</p>
<h1>Langchain experience</h1>
<h2>offload context to a file system</h2>
<p>So one of the most popular ideas here is just using a <strong>file system</strong>.</p>
<p>Take the output of a tool message as an example, dump it to the file system, send back to your agent just some minimal piece of information necessary so it can reference the full context if it needs to, but that full payload, for example, web search result that's very token-heavy, isn't spammed into your context window for perpetuity.</p>
<p>offloading context takes some piece of information, like a tool message that's token-heavy, and not sending it all back to your messages list, but dumping it to a file system where it can be retrieved only as needed.</p>
<h2>Reduce context</h2>
<p>Summarize or compress information to reduce context.Summarizing tool call outputs is one intuitive way to do this.So this idea of pruning old tool calls with tool outputs or tool messages is something that Claud is now kind of built into their their SDK.</p>
<p>Cognition(an agent application) also talks about idea of summarizing approving at agent-to-agent handoffs.</p>
<h2>Retrieve Context</h2>
<p>Claude Code Force only uses the file system and simple search tools, notably glob and grep. So there's different ways to retrieve context on demand for your agent.</p>
<p>Indexing and something like semantic search, file system and simple file search tools, both can be highly effective.</p>
<h2>Context isolation</h2>
<p>Context isolation is major, in particular splitting context across multi-agents.</p>
<p>Each sub-agent has its own context window and sub-agents allow for separation of concerns.</p>
<h2>Caching Context</h2>
<p>langchain open deep research </p>
<a href="https://github.com/langchain-ai/open_deep_research">https://github.com/langchain-ai/open_deep_research</a>
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/Context+Engineering+for+AI+Age_1770869899254.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/Context+Engineering+for+AI+Age_1770869900279.png" alt="" />
<p>It has three phases: scoping of the research, the research phase itself using a multi-agent, basically architecture, and then a final one-shot writing phase. We use offloading, so we basically create a brief to scope our research plan.</p>
<p>We offload that so we don't just save that in the context window because that context window is going to get peppered with other things.</p>
<p>We offload it so it's saved independently, it can be accessed in our case from the line graph state, but it could also be from file system, it's the same idea.</p>
<p>So you create a research plan, you offload it, it's always accessible. You go do a bunch of work, you can pull that back in on demand so you can put it kind of at the end of your message list so it's accessible and readily available to your agent to perform, for example, the writing phase.</p>
<p>We use offloading, as you can see, to help steer the research and writing phases. We use reduction to summarize observation from token-heavy surf tool calls, that's done inside research itself.</p>
<p>And we use context isolation across sub-agents within research itself. And this is kind of a summary of a bunch of different, uh, of these various ideas across a bunch of different projects.</p>
<h1>Manus experience</h1>
<p>instead of building specialized models too early, uh, startups really should lean on general models and context engineering for as long as possible.</p>
<h2>Context Reduction: Compaction vs. Summarization</h2>
<p>For compaction, in Manus, every tool call and tool result we actually has two different formats: a full format and a compact one.</p>
<p>The compact version strips out any information that can be like reconstructed from the file system or external state. For for example, here, let's say you have a a tool that writes to a file and it probably has two fields, a path and a content field.</p>
<p>And but once the tool returns, you can ensure that the file already exists in the environment. So in the compact format, we can safely drop the super long content field and just keep the path.</p>
<p>And if your agent start is smart enough, well like whenever it needs to read that file again, it can simply retrieve it via the path. So no information is truly lost. It's just like externalized.</p>
<p>We think this kind of like reversibility is crucial because agents do like chain predictions based on previous actions and observations and you never know like which past action will suddenly become super important like 10 steps later.</p>
<p>You cannot predict it. So this is a a reversible reduction by using compaction.</p>
<p>Of course, like compaction only take you so far. Eventually like your context will will still grow and will hit the ceiling, and that's when we combine compaction with the more like traditional summarization, but we do it very carefully.</p>
<p>For example, here, before summarizing, we might offload key parts of the context into files. And sometimes like we even more do more aggressively, we can dump the entire pre-summary context as a text file or simply a log file into the file system so that we can like always recover it later.</p>
<p>And like Lance, like just mentioned some people just use like glob and grep. You know, glob also works for log files. So if the model is smart enough, it even knows how to retrieve those like summarized , those pre-summarized context.</p>
<p>the difference here is that <strong>compaction is reversible, but summarization isn't</strong>. Both reduce context lengths, but they behave very differently.</p>
<p>to make both methods coexist, we have to track some like context length thresholds. At the top like you'll have your models hard context limit, say like 1 million tokens, pretty common today.</p>
<p>But in reality most models start degrading much earlier, typically maybe around 200k, and you'll begin to see what we call a context rot, like repetitions, slower inferences, degraded quality.</p>
<p>So by doing a lot of evaluation, it's very important for you to identify that pre-rot threshold, it's typically 128K to to 200K, and use it as the trigger for context reduction.</p>
<p>And whenever like your context size approaches it, you have to trigger context reduction, but starting from compaction, not summarization.</p>
<p>And compaction doesn't mean like compressing the entire history. You know, we might compact like the oldest 50% of tool calls while keeping the newer ones in full detail so the model still has fresh view shot examples to of like how to use tools properly.</p>
<p>Otherwise like in the in the worst case, the model will will imitate the behavior and output those compact format with with missing fields and that's totally wrong.</p>
<p>And after compaction, we have to check how much free context that we actually gain from this like like compaction operation. Sometimes like in this graph, after multiple rounds of compaction, the gain is tiny because like even it's compact, it still uses context.</p>
<p>And that's when we go for summarization, but also keep in mind that when summarizing, we always use the full version of the data, not the compact one.</p>
<p>And we still like keep the last few tool calls and tool results in full detail, not summary, because it can allow the model to know where it left off and will continue like like more smoothly.</p>
<p>Otherwise, you'll see like after summarization sometimes the model will change its style, change its tone, and we find out like keeping a few few like tool call tool result examples really help.</p>
<h2>Context Isolation:Communicating vs. Sharing Memory</h2>
<p>Cognition's blog shows that they warn against using multi-agent setups because like when you have multiple agents syncing information between them becomes a nightmare.</p>
<strong>Multi-process or multi-thread coordination</strong> has been a classic challenge in the early days of computer programming, and I think we could borrow some wisdoms here.
<p>in the Go programming language community there's a famous quote from this gopher, "Do not communicate by sharing memory, instead share memory by communicating.”</p>
<a href="https://chatgpt.com/share/68f4f8c3-baac-8004-9cf7-421375260909">https://chatgpt.com/share/68f4f8c3-baac-8004-9cf7-421375260909</a>
<p>Of course, this isn't directly about agent and it's sometimes even wrong for for for agents, but I think the important thing is it highlights two distinct patterns here which is by communicating or by sharing memory.</p>
<p>Like if we translate the term memory here into context, we can see that parallel pretty clear. "By communicating" is like the easier one to understand because it is the classic sub-agent setup here.</p>
<p>For example, the main agent writes a prompt and it, the prompt is sent to a sub-agent, and the sub-agent's entire context only consists of that instruction.</p>
<p>We think if a task has a like short, clear instruction and only the final output matters, say like searching a codebase for a specific snippet, then just use the communication pattern and keep it simple.</p>
<p>Because the main agent doesn't care how the sub-agent find the code, it only needs the result.</p>
<p>And this is what Claud Code does, typically using its like task tool to delegate like a separated clear task to some sub-agents.</p>
<p>But for more complex scenarios, in contrast, "by sharing memory" means that the sub-agent can see the entire previous context. It means like all the tool use, tool use history, tool usage history, but it, the sub-agent has its own system prompt and its own action space.</p>
<p>For example, imagine a deep research scenario, the final report depends on a lot of intermediate searches and notes. And in that case, you should consider using the share memory pattern or in our language "by sharing context," because even you can save all that notes and and searches into file and making the the sub-agent to read everything again, but you're just wasting latency and context.</p>
<p>And if you count the amount of token, maybe you're using even more token to to do this. so we think for those scenario that requires a full history, just use a share memory pattern.</p>
<p>But be aware that sharing context is kind of expensive because each sub-agent has a larger input to prefill, which is like you'll spend more on like input tokens, and since the system prompt and the access space differs, you cannot re-reuse the KV cache, so you have to pay the full price.</p>
<h2>Context Offloading:Layered Action Space</h2>
<p>when people say offload, they usually mean like moving parts of the working context into external files.</p>
<p>But as system grows, especially if you decide to integrate MCP one day, you realize that the tools themselves can also take up a lot of context, and having too many tools in context leads to confusion.</p>
<p>We call it context confusion, and the model might call like the wrong ones or even like non-existing ones.</p>
<p>So we have to find a way to also offload the tools. A common approach right now is like doing dynamic RAG on tool descriptions,  for example like loading tools on demand based on the current task or the current status.</p>
<p>But that also causes two issues. First of all, like since tool definitions sit at the front of the context, your KV resets every time.</p>
<p>And most importantly, the model's past calls to remove tools are still in the context, so it might foot the model into like calling invalid tools or invalid or using invalid parameters.</p>
<p>So to address this, Manus is experimenting with a new layered action space. essentially, we can let Manus to choose from three different levels of abstractions: number one, function calling, number two, sandbox utilities, and number three, packages and API.</p>
<p>We go deeper into into these three layers of layer action space. Let's start from level one, function calling, and this is a classic, everyone knows it. It is schema safe thanks to constraint decoding, but we all know the downsides.</p>
<p>For example, we mentioned like breaking the cache and maybe too many tool calls will cause some confusion, too many tools may cause confusion.</p>
<p>So Manus uses a fixed number of atomic functions, for example like reading and writing files, executing shell commands, searching files in internet, and maybe some like browser browser operations.</p>
<p>these atomic functions have super clear boundaries, and they can work together to compose like much more complex workflows.</p>
<p>Then we offload everything else to the next layer, which is the sandbox utilities. As you know, each Manus session runs inside a full virtual machine sandbox. It's running on our own customized Linux system, and that means Manus can use the shell commands to run pre-install utility that we develop for Manus.</p>
<p>For example, we have some format converters, we have like speech recognition utilities, and even a very special, we call it MCP CLI which is how we call MCP.</p>
<p>We do not inject MCP tools to the function colony space. Instead, we do everything inside that sandbox through in the command line interface.</p>
<p>And utilities are great because you can add new capabilities without touching the model's model's calling space, and you know it's just some like commands pre-installed in your computer.</p>
<p>And if you're familiar with Linux, you always know how to find those new commands and you can even run like like d-help to to to to to to figure out how to use a new tool.</p>
<p>And another good thing is for larger outputs, they can just write to files or return the result in pages.</p>
<p>And you can use all those Linux tools like grep, cat, less, more, like to to to to to process that results on the fly. So the trade-off here like it's it's super good for large outputs, but it's also not that good for low latency back and forth interactions with the front end.</p>
<p>Because you always have to visualize the interactions of your agent and show it to the user.</p>
<p>And then we have another layer, the final layer, we call it packages and APIs. You know, here Manus can write Python scripts to call pre-authorized API or custom packages.</p>
<p>For example, Manus might use a 3D designing library for modeling or call a financial API to fetch market data. And here actually we've purchased all these API on behalf of a user and pay the money for them.</p>
<p>It's included in the subscription. So we basically we have a lot of like API keys pre-installed in Manus and Manus can access these APIs using the keys.</p>
<p>I think these are perfect for task that requires lots of computation in memory, but do not need to push all that data into the model context.</p>
<p>For example, imagine if you're analyzing a stock's entire year of price data, you don't feed the model all the numbers. Instead, you should let the script to compute it and only put the summary back into the context.</p>
<p>And you know, since code and APIs are super composable, you can actually chain a lot of things in one step.</p>
<p>For example, in a typical API, you can get city names, get city ID, get weather, all in one Python script.</p>
<p>There's also a paper like from one of my friend called Code Act. A lot of people were like discussing about it. I think it it's like the same idea because like code is composable and it can like like like do a lot of things in one step.</p>
<p>But also it's like it's not schema safe. It's very very hard to do like a strange decoding on codec.</p>
<p>So we think you should find the right uh scenario for these features. For us, we use all as we mentioned everything that's like that can handle inside a like like compiler or interpreter runtime, we do that using code.</p>
<p>Otherwise, we use like sandbox utilities or function calls.</p>
<p>And the good thing is if you have these three layers from model's point, all three levels still go through the standard function calls, so the interface stays simple, cache friendly, and orthogonal across functions.</p>
<p>Because you know, we mentioned sandbox utilities, you're still accessing these tools using the shell tool, accessing these tools using the shell function.</p>
<p>And also like if you're using APIs in third-party applications, you're just using the file function to write or read file and then execute it, execute it using the shell function.</p>
<p>So you think it does not add like like add overhead to the model. It's still all the things that models are trained and they're already familiar with.</p>
<h2>Connecting the Five Dimensions and Avoiding Over-engineering</h2>
<p>Let's zoom out and connect the five dimensions: offload, reduce, retrieve, isolate, and cache. You can find find out that they are not independent.</p>
<strong>We can see that offload and retrieve enables more efficient reduction and stable retrieve makes isolation safe, but isolation, oh yeah, isolation also slows down contacts and reduces the frequency of reduction.</strong>
<p>However, more isolation and reduction also affects cache efficiency and the quality of output. So at the end of the day, I think context engineering is the science in art that requires a perfect balance between multiple potentially conflicting objectives.</p>
<p>I want to leave with maybe one final thought, and it's kind of the opposite of everything I just said, which is <strong>please avoid context over-engineering</strong>.</p>
<p>looking back at the at the like past six or seven months since Manis launch, actually the biggest leap we've ever seen didn't came from like adding more fancy context management layers or clever retrieval hacks.</p>
<p>They all came from simplifying or from removing unnecessary tricks and trusting the model a little more.</p>
<p>Every time we simplify the architecture, the system got faster, more stable, and smarter, because we think context engineering should uh, the goal of context engineering is to make the model's job simpler, but not harder.</p>
<p>So if you like take one thing from today, I think it should be <strong>build less and understand more</strong>.</p>
<h1>Q&A</h1>
<h2>Q&A - Shell Tools and Sandboxing</h2>
<p>Q: how does the LLM call the various shell tools? How does it know which tools exist and how to invoke them?</p>
<p>Maybe you can explain a little bit about kind of the multi the multi-tier kind of sandboxing setup that you use with Manus.</p>
<p>A: First of all, we have a hint in the system prompt telling Manas that, hey, there's a lot of pre-installed command line utilities located in some specific folder.</p>
<p>And also like for the most like frequently used ones, we already like injected in the system prompt, but it's super compact. We do not like tell the the agent how to use the tools.</p>
<p>We only list them and we can tell the agent that you can use the the the <code>--help</code> uh flag safely because all the utilities are developed by our team and they have the same format.</p>
<h2>Q&A - Indexing vs. File System for Context Retrieval</h2>
<p>Q: I know you talked a lot about using file system. What's your take on using indexing, um, and do you utilize like do you spin up vector stores on the fly if the context you're working with gets sufficiently large?</p>
<p>A:there's no no right and wrong in this space, like you've mentioned, uh, but at Manis we do not use index databases because right now, you know, every sandbox in Mana session is a new one and user want to like interact with things fast, so actually we don't have the time to like build the index on the fly.</p>
<p>So we're more like Claude Code, we rely on like like grep and and and glob. But I think like if you like consider to build some something like more long-term memory or like if you want to integrate some like like enterprise knowledge base, you still have to rely on that like um like external vector index because like it's only about the the amount of information that you can access.</p>
<p>But for like Manus like it operates in a sandbox and for coding agent, you operate in the codebase, so it it depends on the scale.</p>
<p>Q:So let's say I'm a user, I have my Manus account, I interact with Manas across many sessions. Do you have the notion of memory?</p>
<p>So Claude has Claude MD files, they persist across all the different sessions of Claude Code. How about you guys? How do you handle kind of long-term memory?</p>
<p>A:actually in Manus we have a concept called knowledge, which is kind of like like explicit memory.</p>
<p>For example, like every time you can tell Manas, hey, remember like uh every time I ask for something, deliver is in maybe in Excel, and it's not automatically inserted into some memory.</p>
<p>It will pop up a a dialogue and say, here's what I learned from our previous conversation and would you like accept it or reject it? So this is the explicit one, it requires user confirmation.</p>
<p>but also like we are discovering new ways to do it more automatically. For example, like um uh, a pretty interesting uh thing in agents is that like compared to chatbots, user often like correct correct the agent more oftenly.</p>
<p>For example, like a common uh mistake that Manas make is when doing like data visualization, you know, if you're using Chinese, Japanese, or Korean, a lot of time there will be some font issues and there will be errors in those render render visualizations.</p>
<p>So the user will often say like, hey, you should like use use like not and CJK font. And for these kind of things, the user will will a different user will will have the same correction and we need to maybe they'll find out a way to like to leverage these kind of a collective feedback and use it.</p>
<p>That's kind of like we call it self-improving agent with online learning, but in a parameter free way.</p>
<h2>Q&A - Adapting to Evolving Models</h2>
<p>Q: You mentioned towards the end of your talk that, um, you you gained a lot from removing things, and a lot of that is probably because of the fact that also the models are getting better.</p>
<p>So model capabilities in increasing and so you can kind of remove scaffolding over time. How do you think about this?</p>
<p>Because this is one of the biggest challenges that I've faced is like over time the model gets better, and I can remove things like certain parts of my scaffolding, so you're building on top of this, the the foundation that's like the water's rising.</p>
<p>do you revisit your architecture every some number of months with new releases and just delete as the models get better, and how do you how do you approach that problem?</p>
<p>A: this is a super good good question here because you know, actually we have already um refactored Manis for five times, and we've launched Manis in March and now it's October already, five times.</p>
<p>So we think like you cannot stop because like models are not only improving, but they are changing. Models' behaviors are changing over time.</p>
<p>one way is you can you can work closely with those like model providers, but we also have another like internal theory for how we evaluate or how we design our agent architecture.</p>
<p>I cover a little bit on Twitter before. It's basically like we all, we do not care about a the the a static the performance of a static uh benchmark.</p>
<p>Instead, we like we fix the AR agent architecture and we switch between models.</p>
<p>If if like your architecture can gain a lot from switching from a weaker model to a stronger model, then somehow your your architecture is more future-proof because like the the the the weaker model tomorrow is might be as good as a stronger model today.</p>
<p>so we think like switching like between uh weaker and strong models can give you some like early signals of what will happen next year and give you some time to prepare your architecture.</p>
<p>for Manus, um, we often like do these kind of review like every every one or two months, and we often like um, do some like um, yeah, do some like like research internally using like open source models and maybe like early access to prep proprietary models to like prepare the the the next release like even before the launch of the next model.</p>
<h2>Q&A - Data Storage Formats</h2>
<p>What about um best practices or considerations for um format for storing data?</p>
<p>So like markdown files, plain text, log, uh, anything you prefer in particular?</p>
<p>I think obviously it's, yeah, how do you think about that kind of file formats for?</p>
<p>A: I think like like it's the not about like plain text or markdown, but we always like prioritize line based um formats because like it allows like the models to use like grep or like read from read from a range of range of lines.</p>
<p>And also like markdown can sometime cause some troubles. You know, um, models are trained train trained to use markdown really well, and sometimes it will maybe for for for for some model, I don't I don't want to say that name, but but they often like output too many bullet points if you use markdown too too often.</p>
<p>Yeah, so actually we we want to use more plain text.</p>
<h2>Q&A - Prompting for Summarization</h2>
<p>How about on the topic of um compaction versus summarization?</p>
<p>Let's hit on summarization. This is an interesting one that I've been asked a lot before, uh, how do you prompt to produce good summaries?</p>
<p>So for example, summarization like you said, it's irreversible, so if you don't prompt it properly, you can actually lose information.</p>
<p>The best answer I came up with is just tuning your prompt <strong>for high recall,</strong> but how do you approach this?</p>
<p>how do you think about prompting for summarization?</p>
<p>A:we tried a lot of a lot like optimizing the prompt for summarization, but it turns out a simple approach works really well is that you do not use a free form like prompt to let the AI generate everything.</p>
<p>Instead, you could define a kind of a schema. It's just a form, there's a lot of fields and let the AI to fill them.</p>
<p>For example, like here are the files that I that I've modified and here's the goal of the user, here's what I left off.</p>
<p>And if you use this kind of like a more structured schema, at least like like the output is kind of stable and you can iterate on this, so just do not use like free form summarizations.</p>
<h2>Q&A - Compaction of Search Results</h2>
<p>How about with context, how about with compaction then?</p>
<p>And actually, I want to make sure I understood that. So with compaction, let's say it's a like a search tool, you have the raw search tool output and would it be that would be your raw message and then the compaction would just be like uh a file name or something, is that right?</p>
<p>A:Yeah, it is. It's not only about like the tool call, it's also like applied to the to the result of the tool.</p>
<p>we interestingly we find out that almost every every action in Manas is just kind of like reversible if you can offload it to a to the file system or an external state.</p>
<p>for most of these tasks, you already have a unique identifier for it. For example, for file operations, of course, you have the file path.</p>
<p>For like browser operations, you have the URL, and even for search search um actions, you have the query.</p>
<p>So it's it's naturally it's already there.</p>
<p>Lance: And just want to hit that again because it I've had this problem a lot. So for example, I'm an agent that uses search, I perform a, it returns a token-heavy tool call.</p>
<p>I don't want to return that whole tool message to um the agent.</p>
<p>I've done things like some kind of summarization or compaction and send the summary back, but how do you approach that because you might want all that information to be accessible for the agent for his next decision, but you don't want that huge context block to live inside your message history?</p>
<p>So how do you approach that? You could send the whole message back uh, but then remove it later, that's what Claude does now.</p>
<p>You could do a summarization first and send the summary over. Um, you could do you could send everything and then do compaction so that later on you don't have the whole context in your message history.</p>
<p>You only have like a link to the file. How do you think about that specifically if you see what I'm saying?</p>
<p>A: I know actually it depends on the scenario. For for example, like for like complex search, I mean for complex search, I mean it's not just one query.</p>
<p>For example, like you have multiple queries and you want to like like gather some important things and drop everything else.</p>
<p>in this case, I think we should use sub-agents or internally we call it agent as tool. So for the from the model's perspective, it's still a kind of function, maybe called advanced search.</p>
<p>It's a function called event search, but what it triggers is actually another sub-agent, but that sub-agent is more like a workflow or agentic workflow that has a fixed output schema, and that is the result that returns to the agent.</p>
<p>But for like other kinds of more simpler search, for example, just like searching Google, like we just use the full detail format and like append it into the context and rely on the compactions thing.</p>
<p>But also like we always like instruct the model to like write down like the intermediate insights or key findings into files in case that like the compaction happens earlier than than the model expected.</p>
<p>And if you like do this really well, actually you don't lose a lot of information um by compaction because sometimes like those like old tool calls are irrelevant after time.</p>
<h2>Q&A - Agent-to-Agent Communication & MapReduce</h2>
<p>Q:I like the idea of agent as tool, we do that quite a bit and that does make that that is that is highly effective, but that brings up another interesting point about, and and you referenced this a little bit, agent agent communication.</p>
<p>How do you address that?</p>
<p>So Walden Yen from from Cognition had a very nice blog post talking about this is like a major problem that they have with Devin.</p>
<p>so like kind of communication between agents, how do you think about that problem and yeah, ensuring sufficient information is transferred but not overloading like you said the prefill of the sub-agent with too much context?</p>
<p>A: we've launched a feature called Wide Research a month ago, like it's basically like we call, yeah, internally we call it agentic map reduce because we we got inspired from the design of MapReduce.</p>
<p>And it's kind of special for Manus because, there's a full virtual machine behind the session, so one way we pass like information or pass context from the main agent to sub-agent is by sharing the same sandbox.</p>
<p>So the file system is there and you can only pass like the like different path here.</p>
<p>And I think like like sending information to sub-agent is not that hard. The the more more complex thing is about how to like like have the the correct output from different agents.</p>
<p>And what we did here is like we have a trick for every every time if the main agent want to spawn up a new sub-agent or or maybe 10 sub-agents, you have to design, you have to let the main agent to to define the output schema.</p>
<p>And in the in the sub-agent perspective, you have a special tool called <code>submit_result</code>, and we use constraint decoding to ensure that what the the sub-agent submits back to to the main agent is the schema that is defined by the main agent.</p>
<p>Yeah, so you can imagine that this kind of MapReduce operation, it will generate a kind of like spreadsheet and the spreadsheet is constrained by the schema.</p>
<p>Lance:That's an interesting theme that seems to come up a lot with how you design Manus, you use schemas and structured outputs both for summarization and for this agent agent communication.</p>
<p>So it's kind of like use schemas as contracts between agent sub-agent or between like a tool and your agent to ensure that sufficient information is passed in a structured way in a complete way uh, like when you're doing summarization, you use a schema as well.</p>
<h2>Q&A - Model Choice and Open Models</h2>
<p>I'm poking around some other interesting questions here. Uh, any thoughts on models like uh, I think you guys are use Anthropic, but do you work with open models?</p>
<p>do you do fine-tuning? You talked a lot about kind of working with KV cache, so for that, maybe using open models.</p>
<p>How do you think about like model choice?</p>
<p>A: actually right now we don't use any like open source model right now because I think it's not about quality, it's interestingly it's about cost.</p>
<p>we often think that open source model can lower the cost, but if you're at the scale of Manis and and if you're building a real agent, which the input is way longer than the output, then KV cache is super important.</p>
<p>And distributed KV cache is very hard to implement if you use like open source solutions.</p>
<p>if you use like those like um frontier pro uh LLM providers, they have more solid infrastructure for like distributed cache uh globally.</p>
<p>So sometimes like if you do the math, uh at least for Manis, we find out that using like like these flagship models can sometimes can they can be even more cheaper than like using open source models.</p>
<p>And right now, we're not only using Anthropic force.</p>
<p>Like Anthropic's model is the best choice for like agentic task, but we're also like seeing like the progress in Gemini and in Open New model.</p>
<p>I think right now like these like frontier labs are not converging in directions. For example, like if you're doing coding, of course, you should use uh Claude.</p>
<p>And if you uh want to do like more multimodal multimodality things, you should use Gemini.</p>
<p>And open model is super good at like like complex math and reasoning. So I think for application companies like us, one of our advantage is that we do not have to build on top of only one model.</p>
<p>You can do some like task level routing or maybe even subtask or step level routing if you can like like calculate like if you can can pull in that kind of KV hash validation.</p>
<p>So I think it's advantage for us and we do a lot of evaluations internally to know which models to use for which subtask.</p>
<p>Lance:with KV cache, so what specific features from the or, yeah, what from the providers are you using for cache management?</p>
<p>I know like Anthropic has input caching as an example.</p>
<h2>Q&A - Tool Selection and Layered Action Space (Revisited)</h2>
<p>Q:tool selection is a good one. Um, right, so you were talking about this, you don't use like uh, indexing of tool descriptions and fetching tools on the fly based on semantic similarity.</p>
<p>How do you handle that? Like what's what's the threshold for too many tools?</p>
<p>tool choice is a classic. How do you think about that?</p>
<p>A: first of all, it depends on the model. Different model has different capacity for like tools, but I think a rule of thumb is try not to like um include more than like 30 tools.</p>
<p>It's just a random number in my mind, but actually I think like if you're building a we call it a general AI agent like Manis, you want to make sure those like native functions are super atomic.</p>
<p>So actually there are not that much like atomic function that we need to put inside the action space.</p>
<p>So like for Manus, we right now we only have like like 10 or 20 like atomic function, and everything else is in the sandbox.</p>
<p>we don't have to like um to pull things like dynamically.</p>
<p>Lance:Let's explain that a little bit more, so so you have, let's say, 10 tools that can be called directly um by the agent, but then I guess it's like you said the agent can also choose to for example write a script and then execute a script.</p>
<p>So that expands its action space hugely without giving it like you don't have an independent tool for each possible script, of course, that's insane.</p>
<p>So so our very general tool to like write a script and then run it does a lot.</p>
<p>A:why we are super confident to call Manis a general agent?</p>
<p>Because it runs on a computer, and computers are Turing complete. The computer is the best invention of human.</p>
<p>Like theoretically, like an agent can do anything that an maybe a junior intern can do using a computer.</p>
<p>So with the shell tool and the and the text editor, we think it's already complete, so you can offload a lot of things right to sandbox.</p>
<p>Lance:You mentioned code with code agents. My understanding is the model will actually always produce a script and that'll then be run inside a code sandbox for so every tool call is effectively like a script is generated and run.</p>
<p>It sounds like you do some hybrid where sometimes Manas can just call tools directly, but other times it can actually choose to do something in the sandbox, is that right?</p>
<p>A:I think this is this is super important because like actually we try to use entirely to use uh Codec for Manas, but the problem is if you're using code, you cannot leverage like constraint decoding and things can go wrong.</p>
<p>Codec has some like special use cases as I mentioned earlier in slides, for example, like processing a a large amount of data.</p>
<p>You don't have to like port everything in the tool result.</p>
<p>It's that you put it inside like maybe the runtime memory of Python and you only get the result back to to the model.</p>
<p>So we think you should do it like in a hybrid way.</p>
<h2>Q&A - Planning and To-Do Lists</h2>
<p>Q: Tell me about planning and and I know Manus has this to-do tool or it generates a to-do list and start of tasks.</p>
<p>A:at the beginning Manus uses that <code>to-do.md</code> paradigm.</p>
<p>it's kind of, I I don't want to use the word stupid, but actually it wastes a lot of turn.</p>
<p>You know, um, like back in maybe March or April, like if you like check the log of some Manas task, maybe like one-third of the action is about like updating the the to-do list.</p>
<p>It wastes a lot of like like tokens.</p>
<p>so right now we're using a more like structuralized planning. For example, like uh, if you use Manus, there's a planner at the bottom of like the system.</p>
<p>Internally, it's also kind of a tool called it's, we implemented using the agent as tool paradigm so that like there's a separate agent that that is managing the plan.</p>
<p>So actually right now the latest version of Manus, we are no longer using that <code>to-do.md</code> thing. </p>
<p>like <code>todo.md</code> still works and it can generate like good results, but if you want to say save tokens, you can find another way.</p>
<h2>Q&A - Multi-Agent Design and Roles</h2>
<p>So you might have like a planning agent with its own context window, makes a plan, produces like some kind of plan object, maybe it's a file or maybe it just calls sub-agents directly.</p>
<p>How do you think about that like and how many different sub-agents do you typically recommend using?</p>
<p>A: I think this is also like depends on your design, but here at Manis actually Manis is not kind of like the typical multi-agent system.</p>
<p>For example, like we've seen a lot of like different agent that divides by role.</p>
<p>For example, like you have a designer agent or design or like programming agent, manager agent, we don't do that because we think like uh why we have this is because this is how like human company works and this is due to the limitation of like human context.</p>
<p>So in Manus, Manas is a multi-agent system, but we do not divide by role.</p>
<p>We only have very few agents. For example, we have a huge like general executor agent and a planner agent and a knowledge management agent and maybe like some some, yeah, data API registration agent.</p>
<p>so we are very very cautious about adding more sub-agents because of the reason that we've mentioned before, communication is very hard.</p>
<p>And we implement more kinds of like sub-agents as agent as tools as we mentioned before.</p>
<p>Lance:I see this mistake a lot, or I don't know if it's a mistake, but you see anthropomorph, anthropomorphizing agents a lot like it's my designer agent, and I think it's kind of a forced analogy to think about like a human org chart in your sub-agents.</p>
<p>it's like a planner and knowledge manager. A knowledge manager might do what like um, like what will be the task of knowledge manager like?</p>
<p>A:we mentioned like we have a knowledge system in Manus.</p>
<p>What the knowledge agent does is that it reviews like the conversation between the user and the agent and and figure out like what should be like saved in in the long-term memory.</p>
<h2>Q&A - Safety and Guardrailing in Sandboxed Environments</h2>
<p>How about guardrailing? Someone asked a question about kind of safety and guardrailing.</p>
<p>A: if you have a sandbox that's connected to the internet, everything is dangerous.so we have put a lot of effort like in guard railing, like at least we do not let the information to get out of the sandbox.</p>
<p>For example, like if you like got prompt injected, uh, we have some like uh checks on like outgoing traffic.</p>
<p>For example, like we'll ensure that no like token things will go out of the sandbox.</p>
<p>And if the the user wants to like print something out of the sandbox, we have those kind of like like like um what we call it uh removing, yeah, removing things and to to ensure that no information go out of the sandbox.</p>
<p>But you know, um, for another kind of thing is that we have a browser inside of Manus, and the browser is very complicated.</p>
<p>For example, like if you log into some like um your websites, you can choose to let Manis to persist your login state, and this turns out to be like like very tricky because like sometime the content of the web page can also be like malicious, maybe they they're doing like like prompt injection.</p>
<p>And this, I think, is somehow like out of scope for application company. So we're moving uh, we're working very closely with those computer use model provider.</p>
<p>For example, like Anthropic and Google. Yeah, they're adding a lot of guardrails here.</p>
<p>So right now in Manas, every time you do some like sensitive operations whether or inside the um the browser or in the sandbox, Manas will will require a manual confirmation and you must accept it or otherwise you have to take over it to finish it yourself.</p>
<p>So I think like it's pretty hard for us to like design a a like kind of a very like well-designed solution, but it's a progressive approach.</p>
<p>So right now we're letting the user to take over more frequently, but like if the guard rail itself in the model gets better, we can do less.</p>
<h2>Q&A - Evaluation Strategies</h2>
<p>How about the topic of evals? This has been discussed a lot quite a bit online if you probably seen, you know, Claude Code, they talked a lot about just doing less formal evals at least for code because code evals are more or less saturated, lots of internal dog fooding.</p>
<p>How do you think about evals? Are they useful? What evals are actually useful?</p>
<p>What's your approach?</p>
<p>A:Yes, yeah, you know, at the beginning uh, at the launch of Manis, we're using like public academic benchmarks like Gaia, but then like after after launching to the public, we find out that it's super misaligned.</p>
<p>models are that that gets like high scores on Gaia, the user don't like it.</p>
<p>So right now we use like three, we have three different kinds of evaluations.</p>
<p>First of all, most importantly is that for every like completed session in Manas, we'll request the user to like give a feedback to give one to five stars.</p>
<p>this is the gold standard. Like we always care about like the average user rating. This is number one.</p>
<p>And number two, we're still using some like like internal automated tests with like verifiable results.</p>
<p>For example, like we have like created our own data set with like clear answers. But also like uh we, yeah, we we still use a lot of like public academic benchmarks, but we also uh created some um some data sets that's more focused on execution because like most benchmark out there are more about like read-only tasks.</p>
<p>So we designed some like like um like executing tasks or transactional task because we have the sandbox, we can like frequently reset the test environment.</p>
<p>So these are the automated parts. And most importantly, number number three, we have a lot of interns, you know, you have to use a lot of real human interns to do like like uh evaluations on things like website generation or data visualization because like it's very hard to design a good reward model that knows whether the output is visually appealing.</p>
<strong>it's about the taste.</strong>
<h2>Q&A - RL with Verifiable Rewards vs. Tool Calling Agents</h2>
<p>I do want to ask you about this emerging trend of of reinforcement learning with verifiable rewards versus just building tool calling agents.</p>
<p>So like Claude Code, extremely good, and they have the benefit because they built the harness and they can perform RL on their harness and it can get really really good with the tools they provide in the harness.</p>
<p>Do you guys do RL um, or how do you think about that?</p>
<p>Because of course, in that case, you would have you using open models.</p>
<p>I've been playing with this quite a bit lately. How do you think about that, just like using tool calling out of the box with model providers versus doing RL yourself inside your environment with your with your with your harness?</p>
<p>A:I've been doing like free training, post training, RL for a lot of years, but I have to say that right now if you like if you have like in um like sufficient resource, you can try.</p>
<p>But actually like we, as I mentioned earlier, MCP is a big changer here because like if you want to support MCP, you're not using a fixed action space.</p>
<p>And if it's not a fixed action space, it's very very hard to design a good like reward, and you cannot generate a lot of like the the rollouts and feedbacks will be unbalanced.</p>
<p>So if you want to build a model using like that supports MCP, you are literally building a foundation model by yourself.</p>
<p>So I think like every everyone in the in the community like model companies they're doing the same thing.</p>
<p>They're doing the same thing for you. So right now I don't think we should spend that much time on doing RL right now, but like as I mentioned earlier, we are just discovering like like exploring new ways to do like maybe call it like personalization or some sort of online learning, but using like parameter freeway.</p>
<p>For example, like collective feedbacks.</p>
<p>Lance: one little one along those lines is is it the case that for example Anthropics done reinforcement learning at verified rewards on some set of tools using Claude Code.</p>
<p>Have you found that you can kind of mock your your your harness to use similar tool names to kind of unlock the same capability if that makes sense?</p>
<p>Like um, for example, like I believe they've just, you know, they've obviously performed, you know, they it utilized glob, uses GP, uses some other set of tools for manipulating the file system.</p>
<p>Can you effectively reproduce that same functionality by having the exact same tools with the same tool name, same descriptions in your harness or kind of how do you think about that like unlocking um, unlocking the, yeah, right, you see what I'm saying?</p>
<p>A:I know the clear answer here, but for us we actually try not to use the same name because like it it will like if you design your own function, you maybe have like different requirements for that function, and the parameters, the input arguments might be different.</p>
<p>So you don't want to like confuse the model like if the model is trained on a lot of like post training data that has some like internal tools, you don't want to to to let the models to be confused.</p>]]></content>
        <author>
            <name>Shemol</name>
            <email>shemol106@gmail.com</email>
            <uri>https://shemol.tech</uri>
        </author>
    </entry>
    <entry>
        <title type="html"><![CDATA[Two dark clouds over Agent_ real-time interaction with the environment and learning from experience]]></title>
        <id>https://shemol.tech/two-dark-clouds-over-agent</id>
        <link href="https://shemol.tech/two-dark-clouds-over-agent"/>
        <updated>2025-10-19T00:00:00.000Z</updated>
        <summary type="html"><![CDATA[Notes on two open problems for agents: real-time environment interaction and learning from experience.]]></summary>
        <content type="html"><![CDATA[<h1>Two dark clouds over Agent: real-time interaction with the environment and learning from experience</h1>
<a href="https://01.me/files/agent-learn-from-experience/dist/1">https://01.me/files/agent-learn-from-experience/dist/1</a>
<p>Co-Founder & Chief Scientist, Pine AI</p>
<p>The Challenge of real-time interaction</p>
<li>high latency in voice interaction (tens of seconds)</li>
<li>GUI operation is 3-5 times slower than human actions</li>
<li>the serial bottleneck of the traditional ReAct loop</li>
<p>technical breakthrough</p>
<li>SEAL architecture（Streaming, Event-driven Agent Loop）</li>
<p>  - perception layer: Streaming processing of speech signals</p>
<p>  - thinking layer: Interactive ReAct with asynchronous observation, thinking, and action</p>
<p>  - execution layer: feedback Loop VLA/TTS</p>
<p>The challenge of learning from experience</p>
<p>core challenge</p>
<li>Every task starts from scratch</li>
<li> Unable to accumulate domain knowledge</li>
<li>lack of proficiency improvement</li>
<p>three major paradigns of agent learning from experience </p>
<p>1. Post-training：RL parameter update</p>
<p>2. In-context Learning：Attention soft update</p>
<p>3. Externalized Learning：</p>
<p>  - RAG: persistent Experience Storage</p>
<p>  - Tool Generation: Agent Self-Evolution</p>
<p>Scientist Shunyu Yao pointed out the first issue: the lack of interaction with real people during an agent’s task execution, and the second issue: the absence of a mechanism for learning from experience.</p>
<p>(So I went to read that blog)</p>
<h2><strong>The Second Half - Shunyu Yao</strong></h2>
<a href="https://ysymyth.github.io/The-Second-Half/">https://ysymyth.github.io/The-Second-Half/</a>
<p>In the first half, we continuously developed new training methods and models, achieving consistent results in benchmark tests. We kept creating more challenging benchmarks and consistently scored high on these tests, cycling through this process repeatedly. Ultimately, we found an effective method capable of achieving generalization: reinforcement learning.</p>
<p>This recipe has been largely standardized and requires little new thinking; as long as the above cycle is continuously followed, performance can keep improving. Therefore, a fundamental rethinking of the evaluation method is necessary.</p>
<p>The issue is that despite using AI to defeat world champions in chess and Go, surpass most humans on the SAT and bar exams, and achieve gold-medal levels in competitions, the world hasn't changed much—at least from an economic or GDP perspective.</p>
<p>The author refers to it as the utility problem.</p>
<p>Previous evaluation settings differ from the real-world setup in many ways. Two examples:</p>
<li>Evaluations should run automatically. Typically, an agent receives a task and acts autonomously, subsequently earning a task reward. However, in reality, the agent must continuously interact with humans throughout the entire task process—you can’t just send an extremely long message to customer support, wait ten minutes, and expect to receive a detailed reply that solves all your problems.</li>
<li>The evaluation "should" follow the independent and identically distributed (i.i.d.) principle. If the test set contains 500 tasks, each task must be executed independently, and the overall evaluation result is derived by aggregating task metrics. However, in reality, task processing tends to be sequential rather than parallel. As Google engineers become more familiar with the codebase, their ability to handle Google3 issues continuously improves; meanwhile, software engineering agents—even when addressing numerous problems within the same codebase—fail to achieve such incremental progress. We clearly need long-term memory mechanisms (<a href="https://yitaoliu17.com/assets/pdf/ICLR_2025_CER.pdf">existing methods</a> already enable this), but academia lacks both suitable benchmarks to validate its necessity and the academic courage to question the foundational assumption of machine learning: the i.i.d. hypothesis.</li>
<p>In the first half of artificial intelligence development, these assumptions established benchmarks without issue, as enhancing intelligence typically increased utility when AI capabilities were relatively low. However, universal methodologies now ensure effectiveness under these assumptions. Thus, the key to navigating the new landscape of the second half lies in:</p>
<li>Develop novel evaluation settings or tasks for practical applications.</li>
<li>Solve problems according to the established plan, or refine the solution by introducing innovative elements. Repeat this cycle.</li>
<p>While the first half of the game is filled with incremental approaches and models, the second half will, to some extent, filter them out. Unless new premises that break conventions can be established, universal solutions will completely overshadow those gradual methods—only then will there be an opportunity to pursue truly disruptive research.</p>
<p>and I came across an expression that struck me as incredibly clever. I absolutely adore the following passage:</p>
<blockquote>Thinking, or reasoning, is a <strong>strange</strong> kind of action - </blockquote>
<blockquote>it does not directly affect the external world, yet the space of </blockquote>
<blockquote>reasoning is open-ended and combintocially infinite — you can think </blockquote>
<blockquote>about a word, a sentence, a whole passage, or 10000 random English </blockquote>
<blockquote>words, but the world around you doesn’t immediate change. In the </blockquote>
<blockquote>classical RL theory, it is a terrible deal and makes decision-making </blockquote>
<blockquote>impossible. Imagine you need to choose one out of two boxes, and there’s</blockquote>
<blockquote> only one box with $1M and the other one empty. You’re expected to earn </blockquote>
<blockquote>$500k. Now imagine I add infinite empty boxes. You’re expected to earn </blockquote>
<blockquote>nothing. But by adding reasoning into the action space of any RL </blockquote>
<blockquote>environment, we make use of the language pre-training priors to </blockquote>
<blockquote>generalize, and we afford to have flexible test-time compute for </blockquote>
<blockquote>different decisions. It is a really <strong>magical</strong> thing and I</blockquote>
<blockquote> apologize for not fully making sense of it here, I might need to write </blockquote>
<blockquote>another blog post just for it. You’re welcome to read <a href="https://arxiv.org/abs/2210.03629">ReAct</a></blockquote>
<blockquote> for the original story of reasoning for agents and read my vibes at the</blockquote>
<blockquote> time. For now, my intuitive explanation is: even though you add </blockquote>
<blockquote>infinite empty boxes, you have seen them throughout your life in all </blockquote>
<blockquote>kinds of games, and choosing these boxes prepare you to better choose </blockquote>
<blockquote>the box with money for any given game. My abstract explanation would be:</blockquote>
<blockquote> <strong>language generalizes through reasoning in agents</strong>.</blockquote>
<h1>Section 1: Agent interaction with environment in real-time</h1>
<h2>Real-time interaction challenges of voice agents</h2>
<h3>Fundamental contradiction: Serial processing vs. real-time requirements</h3>
<li>Must wait: first listen, then think; only after thinking can one speak.</li>
<li>Blocking wait: Every link becomes a bottleneck</li>
<p>  - user finish speaking(VAD)→speech recognition(ASR)→ complete sentence</p>
<p>  - complete sentence → llm with thinking → complete output after thinking</p>
<p>  - complete thinking → split into sentences→Speech synthesis(TTS) → voice response</p>
<li>cumulative delay: The total delay far exceeds human tolerance</li>
<h3>The dilemma of fast versus slow response</h3>
<p>fast response make mistakes easily and slow response burns the users’ patience.</p>
<p>unable to Anticipate and deliberate while listening</p>
<h3>technology bottleneck</h3>
<p>perception phase</p>
<li>voice:Waiting for the entire sentence to end before converting to text results in high latency; feeding fragmented speech into the speech recognition model leads to low recognition accuracy.</li>
<li>vision:High prefill latency for 2K token screenshots</li>
<p>thinking phase</p>
<li>Complete input is required to think.</li>
<li>Unable to predict user intent.</li>
<li>Test-time scaling exacerbates the delay.</li>
<p>execution phase</p>
<li>only can act when think ends</li>
<li>Every step of the GUI operation requires taking a new screenshot for consideration.</li>
<h1>architecture innovation:SEAL(Streaming,Event-driven Agent Loop)</h1>
<p>Core idea: Abstract all interactions into asynchronous event streams to achieve low-latency, interruptible real-time interaction.</p>
<p>1. perception layer</p>
<p>Converting continuous real-world signals (speech, GUI video) into discrete event streams</p>
<p>1. thinking layer</p>
<p>Async event processing, think while listening, speak while thinking, generate interleaved sequences of thought and action.</p>
<p>1. execution layer</p>
<p>Converting discrete action commands back into continuous real-world signals (TTS voice, mouse movements)</p>
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/Two+dark+clouds+over+Agent_+re_1770869824890.png" alt="" />
<h2>Layer 1 perception layer</h2>
<p>input: sequential signal:voice stream,GUI video stream</p>
<p>output:speech_start,interrupt,laugh,speech_fragment,ui_change etc.</p>
<p>Streaming speech perception model replacing VAD+ASR</p>
<p>Streaming Speech-Aware Models Based on Open-Source Autoregressive LLMs</p>
<li>Unlike traditional ASR models such as Whisper, which use an autoregressive architecture, this approach reduces speech recognition latency.</li>
<p>  - Streaming processing of input speech tokens</p>
<p>  - Streaming text and acoustic events</p>
<li>Based on open-source LLM post-training</li>
<p>  - Retaining dialogue context and supporting in-context learning significantly improve the recognition accuracy of user personal information and domain-specific terminology.</p>
<p>  - With world knowledge and common sense, the recognition rate for brand names, amounts, etc., has significantly improved.</p>
<p>The output information is rich, encompassing not only text but also acoustic events.</p>
<p>Real-time transcription text segment</p>
<p>Special Tokens（Acoustic event）：</p>
<li><speak_start></li>
<li><speak_end></li>
<li><interrupt></li>
<li><emotion:happy></li>
<li><laugh><sigh></li>
<li><music></li>
<h2>Layer 2:thinking Layer</h2>
<p>Based on an event-driven loop, it enables interruptible and asynchronous listening while thinking, and speaking while thinking.</p>
<p>Input</p>
<p>discrete event stream(from event queue)</p>
<p>output</p>
<p>Interlaced thoughts and action commands</p>
<h2>core innovation:interactive ReAct</h2>
<p>traditional ReAct</p>
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/Two+dark+clouds+over+Agent_+re_1770869826813.png" alt="" />
<p>Interactive ReAct:</p>
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/Two+dark+clouds+over+Agent_+re_1770869827574.png" alt="" />
<h2>Interactive ReAct:Think while Listening</h2>
<p>traditional ReAct:Once interrupted, all previous thoughts are invalidated and must be started over from the beginning.</p>
<p>Interactive ReAct:Preserve the interrupted thought process and, after adding new user input, allow the model to continue thinking based on previous context.</p>
<h2>Interactive ReAct:Speak while Thinking</h2>
<p>Use "preludes" to strive for deep thinking about events and reduce first-character delay.</p>
<h2>Layer 3:Execution Layer</h2>
<p>Convert discrete action commands into continuous real-world signals.</p>
<p>Input</p>
<p>speak(…),click(…)</p>
<p>Output</p>
<p>sequential signal(Voice waveform, mouse trajectory)</p>
<h2>last mile for GUI operation</h2>
<p>The agent struggles to output coordinates. Solution: Draw inspiration from the VLA model in the field of Robotics and perform post-training on the model using RL, enabling it to directly output actions.</p>
<li>Option 1: The main model directly outputs mouse click coordinates.</li>
<li>Option 2:Train a standalone VLA model to mimic human mouse movement patterns:Adopting a closed-loop feedback model of "move, fine-tune, click.”</li>
<p>More human-like in speech synthesis: Generate labeled text, then produce speech with TTS.</p>
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/Two+dark+clouds+over+Agent_+re_1770869828406.png" alt="" />
<h1>Agent learning from experience</h1>
<p>Paradigm 1: Post-Training</p>
<p>Method: Parameter Update (Post-training)</p>
<li>Update weights through gradient descent</li>
<li>Requires a large amount of annotated data</li>
<li>The model is fixed after training.</li>
<li>The learning process is slow and expensive.</li>
<p>Paradigm Two: In-Context Learning</p>
<p>Method: In-context Learning</p>
<li>Implicit learning through the attention mechanism.</li>
<li>Using long context as temporary memory</li>
<li>Learning effects are limited to the current conversation and are not permanent.</li>
<p>Paradigm Three: Externalized Learning</p>
<p>Method: Externalizing Knowledge and Processes</p>
<li>RAG: Efficient, Reliable, Hallucination-Free Knowledge</li>
<li>Tool-generation: Codify processes to achieve self-evolution.</li>
<li>Transcending the limitations of parametric knowledge</li>
<p>Best Practice: Contextual Embeddings + Contextual BM25+Reranking + Top-20 chunks</p>
<p>Fine-tuning vs. RAG: An Empirical Comparison of Knowledge Injection Methods</p>
<p>Based on the paper: Fine-Tuning or Retrieval? Comparing Knowledge Injection in LLMs</p>
<a href="https://aclanthology.org/2024.emnlp-main.15.pdf">https://aclanthology.org/2024.emnlp-main.15.pdf</a>
<p>Core insight of the paper: RAG is not only more effective but also avoids the issues of knowledge forgetting and hallucinations that may arise from fine-tuning.</p>
<p>Tool Generation - Enabling Agent Self-Evolution</p>
<a href="https://arxiv.org/abs/2505.20286">https://arxiv.org/abs/2505.20286</a>
<p>Minimum Predefined Principle</p>
<li>Minimalist Architecture: Equipped with only a single core component (Web proxy)</li>
<li>Avoid over-engineering: Do not presuppose complex tools and workflows.</li>
<li>Generality first: Reduce domain-specific hardcoding</li>
<p>Maximum Self-Evolution Mechanism</p>
<p>core ability</p>
<p>1. Self-create tools: Generate new tools based on task requirements.</p>
<p>2. Capability Enhancement: Iteratively improve the performance of existing tools</p>
<p>3. Experience Reuse: Solidifying successful patterns into reusable components.</p>
<p>MCP-Zero Active Tool Discovery</p>
<p>Traditional methods dilemma:</p>
<li>Full injection: The complete toolset occupies a large number of tokens → Context explosion.</li>
<li>Static retrieval: Based on initial query selection, unable to predict task evolution. Debugging files requires file system + code analysis + command execution.</li>
<p>MCP-Zero: From Passive to Active</p>
<p>Core Concept: Enabling Agents to Proactively Identify Capability Gaps and Request Tools On-Demand</p>
<p>1. Active Tool Request: Agent generates structured requirements</p>
<p>2. Hierarchical Semantic Routing: First Filter Servers, Then Match Tools</p>
<p>3. Iterative Capability Expansion: Dynamically Discovering and Building Toolchains During Execution</p>
<p>Externalizing learning to transcend the limitations of attention is an inevitable trend.</p>
<p>The biggest lesson that can be read from 70 years of AI research is that general methods that leverage computation are ultimately the most effective, and by a large margin. </p>]]></content>
        <author>
            <name>Shemol</name>
            <email>shemol106@gmail.com</email>
            <uri>https://shemol.tech</uri>
        </author>
    </entry>
    <entry>
        <title type="html"><![CDATA[2025巴菲特股东大会笔记-en]]></title>
        <id>https://shemol.tech/2025-buffet-en</id>
        <link href="https://shemol.tech/2025-buffet-en"/>
        <updated>2025-05-05T00:00:00.000Z</updated>
        <summary type="html"><![CDATA[Notes from the 2025 Berkshire Hathaway annual meeting.]]></summary>
        <content type="html"><![CDATA[<h1>Notes from the 2025 Berkshire Hathaway Annual Meeting</h1>
<strong>I’m from New Jersey. I’m grateful you picked me to ask a question—it’s a great opportunity. You often remind us of investing principles and patience. Could you give us another hint on that?</strong>
<blockquote>Sometimes the window closes fast—you have to decide on the spot. In 1966 I got a call—details aside—a woman offered to sell her husband’s company for $6 million: about $2 million in assets and 900+ lines of business, with perhaps $2 million pretax profit a year. The price sounded attractive.</blockquote>
<blockquote>Charlie and I discussed it immediately. Charlie didn’t know her, but knew her partner Ben Rosser. We guessed she might be a wealthy widow, or her husband needed a quick exit. Until December 31 we were still going through the books trying to understand her motive.</blockquote>
<blockquote>Next morning Will Phillips called to remind us: East Coast folks can be biased about Midwesterners. If she was from Iowa, her style might differ from the East’s. With a deal yielding ~33% a year, patience is genuinely hard.</blockquote>
<blockquote>That taught me: when a real opportunity lands, you don’t keep waiting. When something profitable and sensible appears, move decisively. Patience matters, but so does spotting the chance and acting. Markets never wait forever.</blockquote>
<blockquote>Patience is an important trait, but when opportunity flashes by—maybe a five-second phone call—you have to judge instantly whether to take it. In business, self-doubt at the wrong moment is deadly; hesitation loses deals. That’s why I find business fascinating—it’s my greatest joy.</blockquote>
<blockquote>I’m in my nineties and wealthier than most, yet I still wake up eager to get to the office. It’s not just a job—it’s helping people and creating value. That enthusiasm can be passed on; I hope my children feel it too.</blockquote>
<blockquote>Like Charlie and I over sixty years: we work with people who share our values. That model never failed us and is how we pick partners. That’s why the board and team here mesh. When you’re sure an opportunity is sound, don’t hesitate—act.</blockquote>
<strong>I’m from California—thanks for putting on this meeting. You’ve said aside from Steve Jobs, no one could have built Apple, yet Tim Cook has done wonderfully. You built Berkshire—Greg Abel seems an unexpected talent to some, though “normal” is a compliment. Why do you think Greg will be a superb successor for decades?</strong>
<blockquote>That’s a crucial question. Building a strong investment team isn’t easy. In a market as large as the U.S., the environment for capital allocation takes years to mature—especially on deployment. It needs time and a circle of people who trust each other. I’ve approached every decision cautiously, weighing risk.</blockquote>
<blockquote>Yesterday at the exhibition I saw passionate employees who aren’t in it for pay—they love the work. That’s admirable. Picking work you love matters. I had five bosses in my career; each taught me something. I chose to build my own path because doing what you love is the best state.</blockquote>
<blockquote>Not everyone is lucky enough to find a lifelong passion at seven or eight. Glenn Miller’s band was obscure until 1941—then the style clicked. If you find real passion young, don’t obsess over starting salary—but pick the right company and boss; some jobs aren’t worth taking.</blockquote>
<blockquote>We live in a great country in a great time—that’s why I’m handing the baton to Greg. Building something like Berkshire takes decades. Financially: “You only need to get rich once—don’t take dumb risk.” Some people borrow and speculate, hoping for a greater fool. That ends badly.</blockquote>
<blockquote>If I could rerun life, I’d still choose work I love. So far it’s been a wonderful ride.</blockquote>
<blockquote>To the earlier question: even if no perfect opportunity shows up yet, don’t panic. Life brings the right timing and the right people. Like finding a partner—you might fall in love at first sight, but missing one person doesn’t mean no one else fits. Worthwhile people and moments often arrive exactly when they should.</blockquote>
<strong>I’m from Maryland—thanks for your time today. I’m young and want to invest. What lessons did you learn early? Any advice for developing my own investment philosophy?</strong>
<blockquote>Excellent question—I wish someone had told me this young. It’s about <strong>who you surround yourself with</strong>—don’t expect every call to be perfect. If life has a direction, seek partners you respect. Like friends I’ve worked with in recent years—smaller than Berkshire, but aligned values matter. Many of us learn that too late.</blockquote>
<blockquote>Instead of copying billionaires, find thinkers you genuinely admire. I learned from strong people in practice. If you already have meaningful work without urgent money pressure, spend time with wise people like Charlie did—people creating value beyond their job. Sharing success with such partners is luck. If you haven’t found them, keep searching.</blockquote>
<blockquote>When I knocked on GEICO’s door I had no idea who was inside; ten minutes later I met someone who changed my life. Never forget those who helped—repay them in deeds. Sometimes things go wrong. If you’re in a good environment, cherish it. Being born in the U.S. already beats most of the world—8 billion people, ~330 million Americans. Still: never betray your principles to please others.</blockquote>
<blockquote>Investing is fun for me. Many leave after making money; you should seek work you can love for life. People like Tom Murphy—rare—who at 98 still spot talent. To improve, find mentors. Berkshire benefited: Sandy Gottesman from 1963, Walter Scott 30+ years, Greg Abel 25 years… walking with such people is never wrong.</blockquote>
<blockquote>Interestingly it may even extend life. My partners and I have been oddly long-lived—maybe the Coke (laughs)—but more likely because we do what we love. Happy people tend to live longer; that’s my experience.</blockquote>
<strong>Mr. Buffett, Ajit Jain, Greg Abel—I'm Pete Chen from Shanghai; this is my first Berkshire meeting. My question is about the ups and downs of life. Have you had a lowest point, and how did you push through?</strong>
<blockquote>Everyone has highs and lows—that’s normal. Thanks for the question; my lows may seem trivial. Take Charlie: he had hard stretches too. That’s life—no perpetual smooth sailing.</blockquote>
<blockquote>I’m not claiming the best advice, but lows recur for everyone. Yours may feel heavier; hitting bottom isn’t the end of the world. I promise you won’t be destroyed. Some get mocked in hard times; truly great people trust a turn will come even when luck is bad. So “luck” isn’t only luck.</blockquote>
<blockquote>If you’re in a trough—say health—it’s hard to describe. Remember we live in a good era. A century or five centuries ago, fate could have been brutal. Generations of progress got us here. Twenty years ago more was outside personal control; today we can respond more wisely.</blockquote>
<blockquote>Focus on what’s good in life. Bad things happen—that’s inevitable—but a good life is still possible in difficulty. That’s my view.</blockquote>
<blockquote>Personally, in 94 years I’ve never had a truly awful stretch; many friends say the same. I drink Coke when I want, do what I want—so far, so good.</blockquote>
<blockquote>Example: NFL players peak around 30–40 but accept that arc. Pick an industry knowing its rhythm. Same for baseball—each position has its challenge.</blockquote>
<blockquote>Charlie and I often note the body doesn’t need extreme exercise—we stay healthy without burning out. The athlete analogy is about <strong>emphasizing the positive</strong>. If you want longevity and you’re lucky enough—like you, traveling far, still energetic, learning from smart people—you’re ahead of almost everyone in past centuries. That’s what I wanted to share.</blockquote>
<strong>Dear Mr. Buffett—I’m Alisa from Poland, now in Chicago. Your story from a freezing January 74 years ago still moves me—in 1951 you took an eight-hour train from New York to Washington on a Saturday to learn insurance, and found Kotter’s office closed. That drive guided me. At 15 in 2011 I wrote asking to meet; you replied you had maybe 3,000 days left. More than 5,000 days later, from 1951 to now, your enthusiasm still inspires me. I ask again: a quarter of your time—even one hour in your office? I know you’re busy; as a survivor of hardship in Poland I choose friends carefully but sincerely. Don’t refuse—40,000 people here support this. We honor you openly. May I humbly ask once more: would you share any hour of your life? Thank you for your time.</strong>
<blockquote>That’s wonderful—wait—you don’t need my full biography; I know my story. Thanks for such an interesting question in front of 40,000 people. A lesson from youth:</blockquote>
<blockquote>Early on I drove state to state cold-calling companies. I was young; there were no IR departments—often the CEO met you. I feared rejection, then learned: when asking for a meeting, say <strong>“just ten minutes”</strong>—unless <em>they</em> want longer. Keep control of the time box.</blockquote>
<blockquote>That reminds me of the classic coal question: “Stranded on an island ten years—which competitor’s stock?” Managers love talking rivals—kids comparing toys. I learned to steer them to <strong>their</strong> edge.</blockquote>
<blockquote>Today orgs are complex puzzles; IR teams push “buy our stock” and grow huge. You must understand companies <strong>your</strong> way. Berkshire has its philosophy—we publish plenty—but we can’t interview 40,000 requests.</blockquote>
<blockquote>I admire your persistence; honestly, this is what we can offer. Your effort is admirable; rules must be fair for everyone.</blockquote>
<p>Recommended documentary: <em>Becoming Katharine Graham</em></p>
<strong>At the 2017 meeting we discussed investing in large tech. Today Microsoft, Apple, Amazon, etc. don’t need external funding—they sit on huge cash and pour resources into AI. Compared with the past, has your view shifted on their balance sheets and capital allocation—especially cash piles and the AI pivot?</strong>
<blockquote>Yes—these profits come from heavy capital deployment. Every business needs capital. Coca-Cola’s bottling needs big upfront equipment; ongoing capex is modest for strong returns. Distribution needs even less—an excellent, durable model.</blockquote>
<blockquote>Insurance is special: P&C needs capital for guarantees but invests float. Done well, it’s very rewarding. Apple barely needs outside funding and keeps buying back stock—volatile price, solid model.</blockquote>
<blockquote>Many in investing got rich managing <strong>other people’s</strong> money for fees—even mediocre performance pays; stars attract more capital. That’s the system; no need to moralize.</blockquote>
<blockquote>Charlie and I chose to earn with investors’ capital while sharing risk—one of the best models, though it can be abused; we’ve seen that in the U.S. and Canada.</blockquote>
<strong>I’m 13 from Florida; my brother is 15; we’re here with our dad—thanks for hosting. First time at your meeting. Which high school courses most help with becoming a great investor—could you elaborate?</strong>
<blockquote>Teachers often shape you deepest. I was lucky in school and learned from bosses and elders too. My father in investing—Saturdays I watched him work. I read investing books other kids skipped.</blockquote>
<blockquote>At Omaha Public Library I found a 19th-century investing book; later more rarities in New York. I love reading, but not like Charlie—asked whom I’d lunch with, always Charlie: a walking library pulling insights from books. Stay curious; find teachers you click with.</blockquote>
<blockquote>I attended three schools, then Washington University—each had two or three teachers who mattered. They taught and gave extra attention. Ben Graham was almost a father figure. <em>The Great Bridge</em> gave me a key insight.</blockquote>
<blockquote>Dad said everyone is unique—you may feel lost now but you’ll find a path. In school some teachers just fit—how they talk, how they teach. At Columbia Graham cared like a father.</blockquote>
<blockquote>At least ten mentors changed my life—they spent extra time on young people. Great education is often <strong>those</strong> relationships more than the institution. I’ve gone beyond the question, but that’s the heart of it.</blockquote>]]></content>
        <author>
            <name>Shemol</name>
            <email>shemol106@gmail.com</email>
            <uri>https://shemol.tech</uri>
        </author>
    </entry>
    <entry>
        <title type="html"><![CDATA[张维迎：如果天黑就出发，越走天越亮，谁都会有信心（转载）-en]]></title>
        <id>https://shemol.tech/zwy-en</id>
        <link href="https://shemol.tech/zwy-en"/>
        <updated>2025-04-13T00:00:00.000Z</updated>
        <summary type="html"><![CDATA[If you set out while it’s still dark, the sky keeps brightening—everyone gains confidence.]]></summary>
        <content type="html"><![CDATA[<h1>Zhang Weiying: If you set out while it’s still dark, the sky keeps brightening—everyone gains confidence (repost)</h1>
<p>While scrolling saved posts on WeChat I found this piece; the original account had already taken it down, so I located the text and mirror it here for my own study only.</p>
<p>First published on the <strong>WSJ Chinese</strong> official account.</p>
<p>For a long stretch Zhang Weiying felt lonely. The intellectual tide shifted violently; for decades few echoed the ideas he held—but he didn’t waver, only regretted it. In recent years he quietly noticed younger people drifting toward his views: “That makes me really happy.”</p>
<p>It’s been thirteen years since he stepped down as dean of Peking University’s Guanghua School of Management, and the controversies around him have thinned. That gives him more time to sort his thinking—and to revise and push his earlier arguments further.</p>
<strong>“Entrepreneurship”</strong> and <strong>“the market economy”</strong> are the keys to reading Zhang. Today he thinks markets rarely mint mega-fortunes; they mostly give ordinary people a shot at a decent life—which is already precious.
<p>He increasingly believes the real point of a market economy is to channel the most creative, most ambitious people so they “can only do good for humanity, not harm.” He stands opposite the foundational homo economicus of economics: “I’m disappointed in human nature,” he says; the market, to him, is a mechanism that constrains it. “If we can’t govern ourselves, let the market economy do it.”</p>
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E5%BC%A0%E7%BB%B4%E8%BF%8E%EF%BC%9A%E5%A6%82%E6%9E%9C%E5%A4%A9%E9%BB%91%E5%B0%B1%E5%87%BA%E5%8F%91%EF%BC%8C%E8%B6%8A%E8%B5%B0%E5%A4%A9%E8%B6%8A%E4%BA%AE%EF%BC%8C%E8%B0%81%E9%83%BD%E4%BC%9A%E6%9C%89%E4%BF%A1%E5%BF%83%EF%BC%88%E8%BD%AC%E8%BD%BD%EF%BC%89_1770869795643.jpg" alt="" />
<p>Under market logic the right people should land in the right seats—and entrepreneurs play the central part. “We’ve achieved a lot; everyone sees it came from reform and opening, and entrepreneurs mattered,” Zhang says. From new digital sectors, the internet, e-commerce, to manufacturing, private firms earned it: “We export so many low-cost goods—that’s entrepreneurs’ work.”</p>
<p>To Zhang the market is like air: unnoticed until it’s gone, then you realize you can’t live without it.</p>
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E5%BC%A0%E7%BB%B4%E8%BF%8E%EF%BC%9A%E5%A6%82%E6%9E%9C%E5%A4%A9%E9%BB%91%E5%B0%B1%E5%87%BA%E5%8F%91%EF%BC%8C%E8%B6%8A%E8%B5%B0%E5%A4%A9%E8%B6%8A%E4%BA%AE%EF%BC%8C%E8%B0%81%E9%83%BD%E4%BC%9A%E6%9C%89%E4%BF%A1%E5%BF%83%EF%BC%88%E8%BD%AC%E8%BD%BD%EF%BC%89_1770869796404.jpg" alt="" />
<p>He was born in a mountain valley—Xinzhuang Village, Wubu County, Shaanxi. He never felt embarrassed; if anything it’s wealth. A farmer’s son, his references are <em>Ordinary World</em> or the household-responsibility reform. “<em>Ordinary World</em> is our region; the story sits close to my hometown.”</p>
<p>He cites a scene: in Shuangshui Village, Secretary Tian Futang’s job was ringing the bell for work. One morning the fields stayed silent—only the bell echoed. People had already gone out on their own. After decollectivization, peasants kept more of what they earned; motivation jumped.</p>
<p>Zhang uses it to show how institutions and incentives move the economy. “Without the household contract, no amount of bell-ringing would stir real initiative.” Many believe demand can be printed, confidence manufactured, growth stimulated. To Zhang development is organic: “If you set out while it’s still dark, the sky keeps brightening—everyone gains confidence.”</p>
<p>Among economists Zhang is unusual: he speaks to the public and keeps researching—both matter; he won’t drop either. Recent books <em>Rethinking Entrepreneurship</em> and <em>Looking Back</em> collect fresh thoughts on the former; the latter, in fluent prose, remembers parents, teachers, childhood friends—emotion on the page.</p>
<p>Like his writing, conversation doesn’t feel like “famous economist”—no airs, smiling, soft-spoken. On video he showed up on time in a down vest. Years in Beijing haven’t erased his northwest accent.</p>
<p>He last surfaced unexpectedly for a piece on <em>Xintianyou</em>—mourning his mentor He Liancheng. Under COVID travel limits he couldn’t attend the funeral and wrote “Teacher He, hear my <em>Xintianyou</em> once more,” with lyrics attached:</p>
<p>“The first time you stroked my head gently; the last time you smiled and stayed silent. You rejoiced for me and worried for me; you once praised how I sang <em>Xintianyou</em>.”</p>
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E5%BC%A0%E7%BB%B4%E8%BF%8E%EF%BC%9A%E5%A6%82%E6%9E%9C%E5%A4%A9%E9%BB%91%E5%B0%B1%E5%87%BA%E5%8F%91%EF%BC%8C%E8%B6%8A%E8%B5%B0%E5%A4%A9%E8%B6%8A%E4%BA%AE%EF%BC%8C%E8%B0%81%E9%83%BD%E4%BC%9A%E6%9C%89%E4%BF%A1%E5%BF%83%EF%BC%88%E8%BD%AC%E8%BD%BD%EF%BC%89_1770869797232.jpg" alt="" />
<p>Before the 1977 college entrance exam resumed, Zhang returned to the village after high school—as Communist Youth League secretary and deputy militia instructor—without knowing arts vs. science tracks. He entered the new political-economy program at Northwest University; He Liancheng led it, enrolling fifty that first year—that extra quota became Zhang’s bridge to college and a new life. The whole village saw him off; the family treated neighbors to millet cake stew.</p>
<p>“Without Teacher He pushing expansion, I might never have gone to college.” Zhang remembers the debt even though the quota wasn’t aimed at him personally. Under He, Northwest created a new economics major—fifty lives rerouted.</p>
<p>Even after joining Peking University he visited He when he could. “He laid my foundations.” He graduated in 1951 and spent decades at Northwest; only after 1977 did he mentor cohorts he treated like children.</p>
<p>As Guanghua dean, Zhang’s reforms met resistance; He wrote PKU’s president—without knowing him—because they shared Hunan roots, trying once more for his student.</p>
<p>When He passed, Zhang drafted lyrics and asked fellow northwestern singer Ding Wenjun to set and sing them. Ding sent a demo, offered a studio version, stitched images into a video posted on a junior’s account—and it blew up. People thought Zhang sang; he denied it: the credits list the singer, “but nobody reads to the end.”</p>
<em>Looking Back</em> isn’t “writing projects,” he says—it’s people he owes or can’t forget, “things that had to come out; my head couldn’t hold them.”
<p>He’s stubborn, somewhat a loner among scholars—no cliques, no patrons, only what he believes. Even with He’s influence he never founded a “school”; he dislikes that. He cares less what others think now—only whether he can face himself.</p>
<p>Past sixty, around fifty he felt he “heard heaven’s mandate,” understood what decades had been for and what should come next—a clear accounting to himself. He’s been trying to shift public opinion, much of it before he framed it consciously. “At least don’t pretend.” With nothing left to prove, posture matters less.</p>
<p>In that sense Zhang is freer.</p>
<p>Below is our conversation with him:</p>
<strong>WSJ:</strong> “Rational man” underpins classical economics. How do you view the rational-agent assumption?
<strong>Zhang Weiying:</strong> Rationality has many flavors—not one. Horizon matters too: a thief is “rational”; so is an entrepreneur—utterly different. I don’t judge individuals; I ask why they choose what they choose.
<p>Graduates from places like PKU and Tsinghua heading into business—that’s a country’s luck. If they all scramble into the bureaucracy, that’s misfortune.</p>
<p>When the economy is alive and more people want to create and start companies, opportunity abounds. The next few years worry me for graduates. As dean I watched placements closely—if someone had multiple offers and could choose, I was glad.</p>
<p>Now I hear two people might share zero offers—that scares me. Jobs multiply when entrepreneurial people can start freely. Markets widen choice; moods lift. Too many constraints sour mood; sour mood kills creativity. Creativity peaks when people feel free.</p>
<strong>WSJ:</strong> Has China entered a “dividing the pie” phase? Is the pie big enough?
<strong>Zhang Weiying:</strong> Economically, no country truly ends in a fixed-pie stage. I don’t believe in a pure “slice the pie” era. If society keeps growing the pie, baking <em>is</em> slicing—unfair slicing stops growth; fair-enough slicing lets it rise.
<p>We all gained from reform; the task is removing unfairness—win contracts by ability and effort, not guanxi. Guanxi contracts skew the split. Fix that—not envy someone’s slice today and kill tomorrow’s pie. Wealth shifts; towers that look huge may be worthless in three years. See Detroit—properties nobody wants. Wealth isn’t mass or area; it’s what markets can do with an asset. Idle assets aren’t wealth.</p>
<p>If someone “gifts” you a 747 but forbids flight or restaurants, is that asset plus or minus? Obviously minus—decay and maintenance devour you. Wealth moves; only value creation counts.</p>
<strong>WSJ:</strong> Post-COVID expectations for China’s economy—inevitable down leg?
<strong>Zhang Weiying:</strong> After decades of high growth, speed must fall. At today’s scale, holding ~3% steady would be impressive. Old models rode proven tech without R&D—easy fast growth. Near the innovation frontier, slowing is natural—not shameful; excellence slows too.
<p>The real question is whether the new speed holds. Three percent is possible but hard; slip to negative is possible—Argentina was “developed”; Brazil, Venezuela too.</p>
<strong>WSJ:</strong> “Stimulus” is treated as a cure-all for stagnation. What’s wrong with that mindset?
<strong>Zhang Weiying:</strong> How do you “stimulate”? Growth needs inner drive—rate cuts, subsidies don’t fix the core: impulse. Growth rides on entrepreneurs, not printing money.
<p>Some theories look elegant but cage thinking—worse damage. Many believe economies are “manageable”—twist knobs like a keyboard. Economies are spontaneous human drive.</p>
<strong>WSJ:</strong> How should we understand the market economy’s role in a healthy society? Beyond resource allocation—does that framing need updating?
<strong>Zhang Weiying:</strong> I’m not optimistic about human nature, so I want institutions that curb our worse sides and force correction. My view of markets may differ from others—and from my younger self. We used to say “resource allocation”; I think that’s wrong. Markets mean the most creative, ambitious people can only do good—if Musk harms customers or investors, he’s done. Mars trips that kill passengers end demand. We can’t trust ourselves; markets are the harness.
<strong>WSJ:</strong> You’re off the usual academic track—you invest heavily in public voice. Why?
<strong>Zhang Weiying:</strong> Mature fields splinter into technical silos—not all of it interests the public. Tastes and training differ; that’s fine.
<p>Serious work deserves respect—even hyper-quant colleagues. Not every scholar must deliver instant social impact.</p>
<p>But be honest: say what you believe, not what flatters. Responsibility is “this is what I think,” full stop.</p>
<p>Intellectuals tempted to force agreement when ignored—debate ideas, don’t deploy power. Coercion betrays liberal economics. Persuasion only; if others won’t listen, that’s that.</p>
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E5%BC%A0%E7%BB%B4%E8%BF%8E%EF%BC%9A%E5%A6%82%E6%9E%9C%E5%A4%A9%E9%BB%91%E5%B0%B1%E5%87%BA%E5%8F%91%EF%BC%8C%E8%B6%8A%E8%B5%B0%E5%A4%A9%E8%B6%8A%E4%BA%AE%EF%BC%8C%E8%B0%81%E9%83%BD%E4%BC%9A%E6%9C%89%E4%BF%A1%E5%BF%83%EF%BC%88%E8%BD%AC%E8%BD%BD%EF%BC%89_1770869798040.jpg" alt="" />
<strong>WSJ:</strong> Critique of capital seems to ebb; under global slowdown some even call 996 a “blessing.” How do you read that shift?
<strong>Zhang Weiying:</strong> I keep saying bosses often suffer more than staff—longer hours.
<p>In one interview a founder sold his factory but stayed to run it. Asked how life changed: “Before, month-end meant scrambling to make payroll; now month-end I’m happy—I collect pay.”</p>
<p>Employers and employees bear asymmetric risk, yet many still think bosses exploit them. Lately firms fail and workers lose jobs; suddenly people hope bosses don’t quit—when the boss walks, meals stop.</p>
<strong>WSJ:</strong> “Confidence matters more than gold.” Economies feel down worldwide—what do expectations mean now? How restore confidence?
<strong>Zhang Weiying:</strong> People fear distant horizons more than nearby obstacles. Walk 100 <em>li</em> starting 5 p.m.—each step darkens the path; fear grows. Start 5 a.m. in darkness and light returns—no fear. Afternoon walkers lose faith; morning walkers gain it. People watch the long road, not only today’s potholes.
<p>At the micro level, confidence needs <strong>agency</strong>—“my fate is mine.” I chose; I bear consequences—that breeds confidence.</p>
<p>Take <em>gaokao</em>: you may miss a top school but you don’t blame a hidden hand—that’s self-ownership. If placement were arbitrary or by lottery, effort would feel pointless.</p>
<p>Confidence is the feeling you can steer outcomes. Societies must let people believe effort shifts odds. If not, why try?</p>
<p>Entrepreneurs are risk-takers; none guarantees profit—only belief they can. If success correlates with effort, they grind.</p>
<p>Agency means accepting failure you earned—and trying again. If outcomes feel rigged, people quit after one loss.</p>
<strong>WSJ:</strong> From PKU and Guanghua—an elite cradle—you still radiate “bottom-up” concern many scholars lack. How do you hold both?
<strong>Zhang Weiying:</strong> People differ; I didn’t set out to perform compassion. I’m plain—I am what I am. Growing up, everyone you meet leaves traces; parents matter most.
<p>I wasn’t ashamed of the village; I’m grateful after everything I’ve seen. What I write is sincere—your history is part of you; cherish it.</p>
<p>Stay authentic—others see through pretense. I didn’t engineer “humanistic care”; it’s just who I am.</p>
<strong>WSJ:</strong> Young people face rent, jobs, stalled mobility; even PKU can feel unreachable—“hard for poor families to raise top students” looks real. Your read?
<strong>Zhang Weiying:</strong> First, decades of opening created huge mobility.
<p>Second, stratification worries me—if it hardens, we should care. Personally I’m not that pessimistic.</p>
<p>In a 2021 elective with ~300 students (including Tsinghua and Renmin), I polled: over 90% urban, under 10% rural—sounds dire. But when I asked parents’ origins, over 80% grew up rural. I wrote “Where do PKU students come from? Two hops from the farm”—first hop parents to cities, second hop kids to PKU. Mobility stayed large; rural kids still face weaker schools, but urbanized parents push the next generation—hence elite admissions.</p>
<p>The survey is suggestive; we still can’t ignore inequality.</p>
<em>Gaokao</em> is flawed but remains among the fairest institutions. I knew I couldn’t test into PKU—yet I taught there later. Fine by me.
<p>Famous entrepreneurs often rose from nothing—Ma Huateng, Jack Ma came from ordinary backgrounds; many rich lists started as farmers without college.</p>
<p>That’s why I defend markets: real markets enable vertical mobility through creativity and entrepreneurship. Schumpeter’s image—a luxury hotel always full, but names on the door keep changing—is a health test for society.</p>
<p>I’m not hopeless; that’s why I cherish market-oriented reform—ordinary people can rise; without markets they can’t.</p>
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E5%BC%A0%E7%BB%B4%E8%BF%8E%EF%BC%9A%E5%A6%82%E6%9E%9C%E5%A4%A9%E9%BB%91%E5%B0%B1%E5%87%BA%E5%8F%91%EF%BC%8C%E8%B6%8A%E8%B5%B0%E5%A4%A9%E8%B6%8A%E4%BA%AE%EF%BC%8C%E8%B0%81%E9%83%BD%E4%BC%9A%E6%9C%89%E4%BF%A1%E5%BF%83%EF%BC%88%E8%BD%AC%E8%BD%BD%EF%BC%89_1770869798799.jpg" alt="" />
<strong>WSJ:</strong> Is “entrepreneurship” imported? Does China have its own entrepreneurial DNA?
<strong>Zhang Weiying:</strong> Some people are never content—they want to do what others won’t or can’t; they take risks and face failure. That type always existed. Humanity left Africa because some were wired to wander.
<p>Usually we mean <strong>business</strong> entrepreneurship—commerce is hard and selective. Chinese tradition, though, “filed down” those people through the imperial exam—every incentive pointed to officialdom, so talent crowded government.</p>
<p>For society that’s a loss: government allocates wealth; business creates it. True entrepreneurs belong in firms—that’s a key ancient China vs. modern West contrast.</p>
<p>England had dissenters barred from the establishment who still chased commerce—more creative industry.</p>
<p>China’s huge shift after reform: excellent people went into business—but the culture remains fragile.</p>
<p>Since the 1980s I’ve wanted to change how the public sees commerce, entrepreneurs, and “modern ideas”—my “ten big shifts.”</p>
<p>As Guanghua dean few students chased civil service exams; now they scramble. Better brilliant people create wealth than allocate it.</p>
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E5%BC%A0%E7%BB%B4%E8%BF%8E%EF%BC%9A%E5%A6%82%E6%9E%9C%E5%A4%A9%E9%BB%91%E5%B0%B1%E5%87%BA%E5%8F%91%EF%BC%8C%E8%B6%8A%E8%B5%B0%E5%A4%A9%E8%B6%8A%E4%BA%AE%EF%BC%8C%E8%B0%81%E9%83%BD%E4%BC%9A%E6%9C%89%E4%BF%A1%E5%BF%83%EF%BC%88%E8%BD%AC%E8%BD%BD%EF%BC%89_1770869799547.jpg" alt="" />]]></content>
        <author>
            <name>Shemol</name>
            <email>shemol106@gmail.com</email>
            <uri>https://shemol.tech</uri>
        </author>
    </entry>
    <entry>
        <title type="html"><![CDATA[Eino-learning-notes-1-ChatModel-en]]></title>
        <id>https://shemol.tech/Eino-learning-notes-1-ChatModel-en</id>
        <link href="https://shemol.tech/Eino-learning-notes-1-ChatModel-en"/>
        <updated>2025-04-11T00:00:00.000Z</updated>
        <summary type="html"><![CDATA[Eino study notes—updated over time, or maybe not.]]></summary>
        <content type="html"><![CDATA[<h1>Eino Study Notes 1 — ChatModel</h1>
<p>ChatModel is Eino’s abstraction over conversational large language models. It provides a unified API for talking to different model backends (OpenAI, Ollama, and so on).</p>
<p>This component matters especially for:</p>
<li>Natural-language dialogue</li>
<li>Text generation and completion</li>
<li>Generating parameters for tool calls</li>
<li>Multimodal interaction (text, images, audio, etc.)</li>
<h1>Component definition</h1>
<h2>Interface definition</h2>
<blockquote>Source: <code>eino/components/model/interface.go</code></blockquote>
<p>``<code>go</p>
<p>type ChatModel interface {</p>
<p>    Generate(ctx context.Context, input []<em>schema.Message, opts ...Option) (</em>schema.Message, error)</p>
<p>    Stream(ctx context.Context, input []<em>schema.Message, opts ...Option) (</em>schema.StreamReader[*schema.Message], error)</p>
<p>    BindTools(tools []*schema.ToolInfo) error</p>
<p>}</p>
</code>`<code>
</code>Generate<code>
<li>Purpose: produce a complete model response in one shot.</li>
<li>Parameters:</li>
<p>  - </code>ctx<code>: context for request-scoped data and for passing the callback manager.</p>
<p>  - </code>input<code>: list of input messages.</p>
<p>  - </code>opts<code>: optional knobs for model behavior.</p>
<li>Returns:</li>
<p>  - </code>*schema.Message<code>: the model’s reply.</p>
<p>  - </code>error<code>: anything that went wrong during generation.</p>
</code>Stream<code>
<li>Purpose: stream the model response chunk by chunk.</li>
<li>Parameters: same as </code>Generate<code>.</li>
<li>Returns:</li>
<p>  - </code><em>schema.StreamReader[</em>schema.Message]<code>: reader for the streamed reply.</p>
<p>  - </code>error<code>: errors during streaming.</p>
</code>BindTools<code>
<li>Purpose: attach tools the model may call.</li>
<li>Parameters:</li>
<p>  - </code>tools<code>: tool metadata list.</p>
<li>Returns:</li>
<p>  - </code>error<code>: binding failures.</p>
<strong>Core role</strong> — This interface is the main abstraction for chat models and supports two call styles:
<li></code>Generate<code>: synchronous, full response (typical chat).</li>
<li></code>Stream<code>: streamed output (long text, live UX).</li>
<strong>Architectural traits</strong>
</code>`<code>go
<p>type ChatModel interface {</p>
<p>    // Synchronous generation (typical chat loop)</p>
<p>    Generate(ctx context.Context, input []<em>schema.Message, opts ...Option) (</em>schema.Message, error)</p>
<p>    // Streaming (good for incremental output)</p>
<p>    Stream(ctx context.Context, input []*schema.Message, opts ...Option) (</p>
<p>        <em>schema.StreamReader[</em>schema.Message], error)</p>
<p>    // Tool binding (extensibility / function calling)</p>
<p>    BindTools(tools []*schema.ToolInfo) error</p>
<p>}</p>
</code>`<code>
<strong>Design highlights</strong>
<li><strong>Multiple backends</strong> — One interface, many engines (OpenAI, MAAS, …).</li>
<li><strong>Context-aware</strong> — </code>context.Context<code> for deadlines, tracing, etc.</li>
<li><strong>Extensible options</strong> — </code>...Option<code> lets each implementation add config.</li>
<li><strong>Runtime tool binding</strong> — </code>BindTools<code> augments capabilities at runtime (e.g. function calling).</li>
<strong>Engineering practice</strong>
<p>Using </code>//go:generate<code> to produce </code>ChatModelMock<code> signals:</p>
<li>Interface-first design.</li>
<li>Strong unit-test support.</li>
<li>Dependency injection for different environments.</li>
<strong>Caveats</strong>
<li><strong>Concurrency</strong> — Comments warn that </code>BindTools<code> vs </code>Generate<code> may not be atomic; you may need synchronization.</li>
<li><strong>Message contract</strong> — Behavior depends on </code>schema.Message<code>; dig into the schema when needed.</li>
<li><strong>Stream lifecycle</strong> — Pair </code>StreamReader<code> with </code>Close<code> so resources are released.</li>
<h2></code>Message<code> struct</h2>
<blockquote>Source: </code>eino/schema/message.go<code></blockquote>
</code>`<code>go
<p>type Message struct {   </p>
<p>    // Role 表示消息的角色（system/user/assistant/tool）</p>
<p>    Role RoleType</p>
<p>    // Content 是消息的文本内容</p>
<p>    Content string</p>
<p>    // MultiContent 是多模态内容，支持文本、图片、音频等</p>
<p>    MultiContent []ChatMessagePart</p>
<p>    // Name 是消息的发送者名称</p>
<p>    Name string</p>
<p>    // ToolCalls 是 assistant 消息中的工具调用信息</p>
<p>    ToolCalls []ToolCall</p>
<p>    // ToolCallID 是 tool 消息的工具调用 ID</p>
<p>    ToolCallID string</p>
<p>    // ResponseMeta 包含响应的元信息</p>
<p>    ResponseMeta *ResponseMeta</p>
<p>    // Extra 用于存储额外信息</p>
<p>    Extra map[string]any</p>
<p>}</p>
</code>`<code>
</code>Message<code> is the basic unit for model I/O. It supports:
<li>Several roles: </code>system<code>, </code>user<code>, </code>assistant<code>, </code>tool<code>.</li>
<li>Multimodal parts: text, images, audio, video, files.</li>
<li>Tool calls: model-invoked tools and functions.</li>
<li>Metadata: finish reason, token usage, etc.</li>
<h2>Shared </code>Option<code>s</h2>
<p>The model component exposes common options:</p>
<blockquote>Source: </code>eino/components/model/option.go<code></blockquote>
</code>`<code>go
<p>type Options struct {</p>
<p>    // Temperature 控制输出的随机性</p>
<p>    Temperature *float32</p>
<p>    // MaxTokens 控制生成的最大 token 数量</p>
<p>    MaxTokens *int</p>
<p>    // Model 指定使用的模型名称</p>
<p>    Model *string</p>
<p>    // TopP 控制输出的多样性</p>
<p>    TopP *float32</p>
<p>    // Stop 指定停止生成的条件</p>
<p>    Stop []string</p>
<p>}</p>
</code>`<code>
<p>Set options like this:</p>
</code>`<code>go
<p>// 设置温度</p>
<p>WithTemperature(temperature float32) Option</p>
<p>// 设置最大 token 数</p>
<p>WithMaxTokens(maxTokens int) Option</p>
<p>// 设置模型名称</p>
<p>WithModel(name string) Option</p>
<p>// 设置 top_p 值</p>
<p>WithTopP(topP float32) Option</p>
<p>// 设置停止词</p>
<p>WithStop(stop []string) Option</p>
</code>`<code>
<h1>Usage</h1>
<h2>Standalone</h2>
</code>`<code>go
<p>import (</p>
<p>    "context"</p>
<p>    "fmt"</p>
<p>    "io"</p>
<p>    "github.com/cloudwego/eino-ext/components/model/openai"</p>
<p>    "github.com/cloudwego/eino/components/model"</p>
<p>    "github.com/cloudwego/eino/schema"</p>
<p>)</p>
<p>// 初始化模型 (以openai为例)</p>
<p>cm, err := openai.NewChatModel(ctx, &openai.ChatModelConfig{</p>
<p>    // 配置参数</p>
<p>})</p>
<p>// 准备输入消息</p>
<p>messages := []*schema.Message{</p>
<p>    {</p>
<p>       Role:    schema.System,</p>
<p>       Content: "你是一个有帮助的助手。",</p>
<p>    },</p>
<p>    {</p>
<p>       Role:    schema.User,</p>
<p>       Content: "你好！",</p>
<p>    },</p>
<p>}</p>
<p>// 生成响应</p>
<p>response, err := cm.Generate(ctx, messages, model.WithTemperature(0.8))</p>
<p>// 响应处理</p>
<p>fmt.Print(response.Content)</p>
<p>// 流式生成</p>
<p>streamResult, err := cm.Stream(ctx, messages)</p>
<p>defer streamResult.Close()</p>
<p>for {</p>
<p>    chunk, err := streamResult.Recv()</p>
<p>    if err == io.EOF {</p>
<p>       break</p>
<p>    }</p>
<p>    if err != nil {</p>
<p>       // 错误处理</p>
<p>    }</p>
<p>    // 响应片段处理</p>
<p>    fmt.Print(chunk.Content)</p>
<p>}</p>
</code>`<code>
<h2>Inside composition (chain / graph)</h2>
</code>`<code>go
<p>import (</p>
<p>    "github.com/cloudwego/eino/schema"</p>
<p>    "github.com/cloudwego/eino/compose"</p>
<p>)</p>
<p>/<em>*</em> 初始化ChatModel</p>
<p>* cm, err := xxx</p>
<p>*/</p>
<p>// 在 Chain 中使用</p>
<p>c := compose.NewChain[[]<em>schema.Message, </em>schema.Message]()</p>
<p>c.AppendChatModel(cm)</p>
<p>// 在 Graph 中使用</p>
<p>g := compose.NewGraph[[]<em>schema.Message, </em>schema.Message]()</p>
<p>g.AddChatModelNode("model_node", cm)</p>
</code>`<code>
<h1>Options and callbacks</h1>
<h2>Option example</h2>
</code>`<code>go
<p>import "github.com/cloudwego/eino/components/model"</p>
<p>// 使用 Option</p>
<p>response, err := cm.Generate(ctx, messages,</p>
<p>    model.WithTemperature(0.7),</p>
<p>    model.WithMaxTokens(2000),</p>
<p>    model.WithModel("gpt-4"),</p>
<p>)</p>
</code>`<code>
<h2>Callback example</h2>
</code>`<code>go
<p>import (</p>
<p>    "context"</p>
<p>    "fmt"</p>
<p>    "github.com/cloudwego/eino/callbacks"</p>
<p>    "github.com/cloudwego/eino/components/model"</p>
<p>    "github.com/cloudwego/eino/compose"</p>
<p>    "github.com/cloudwego/eino/schema"</p>
<p>    callbacksHelper "github.com/cloudwego/eino/utils/callbacks"</p>
<p>)</p>
<p>// 创建 callback handler</p>
<p>handler := &callbacksHelper.ModelCallbackHandler{</p>
<p>    OnStart: func(ctx context.Context, info <em>callbacks.RunInfo, input </em>model.CallbackInput) context.Context {</p>
<p>       fmt.Printf("开始生成，输入消息数量: %d\n", len(input.Messages))</p>
<p>       return ctx</p>
<p>    },</p>
<p>    OnEnd: func(ctx context.Context, info <em>callbacks.RunInfo, output </em>model.CallbackOutput) context.Context {</p>
<p>       fmt.Printf("生成完成，Token 使用情况: %+v\n", output.TokenUsage)</p>
<p>       return ctx</p>
<p>    },</p>
<p>    OnEndWithStreamOutput: func(ctx context.Context, info <em>callbacks.RunInfo, output </em>schema.StreamReader[*model.CallbackOutput]) context.Context {</p>
<p>       fmt.Println("开始接收流式输出")</p>
<p>       defer output.Close()</p>
<p>       return ctx</p>
<p>    },</p>
<p>}</p>
<p>// 使用 callback handler</p>
<p>helper := callbacksHelper.NewHandlerHelper().</p>
<p>    ChatModel(handler).</p>
<p>    Handler()</p>
<p>/<em>*</em> compose a chain</p>
<p>* chain := NewChain</p>
<p>* chain.appendxxx().</p>
<p>*       appendxxx().</p>
<p>*       ...</p>
<p>*/</p>
<p>// 在运行时使用</p>
<p>runnable, err := chain.Compile()</p>
<p>if err != nil {</p>
<p>    return err</p>
<p>}</p>
<p>result, err := runnable.Invoke(ctx, messages, compose.WithCallbacks(helper))</p>
</code>`<code>
<h1>Existing implementations</h1>
<p>1. OpenAI ChatModel — GPT family via OpenAI <a href="https://www.cloudwego.io/zh/docs/eino/ecosystem_integration/chat_model/chat_model_openai">ChatModel — OpenAI</a></p>
<p>2. Ollama ChatModel — local models via Ollama <a href="https://www.cloudwego.io/zh/docs/eino/ecosystem_integration/chat_model/chat_model_ollama">ChatModel — Ollama</a></p>
<p>3. ARK ChatModel — models on the ARK platform <a href="https://www.cloudwego.io/zh/docs/eino/ecosystem_integration/chat_model/chat_model_ark">ChatModel — ARK</a></p>
<h1>Implementing your own</h1>
<p>When you build a custom </code>ChatModel<code>:</p>
<p>1. Implement the shared options.</p>
<p>2. Wire up the callback hooks.</p>
<p>3. On streaming paths, close the writer when you are done.</p>
<h2>Option mechanism</h2>
<p>If you need options beyond the shared set, use the component helpers to define implementation-specific options, for example:</p>
</code>`<code>go
<p>import (</p>
<p>    "time"</p>
<p>    "github.com/cloudwego/eino/components/model"</p>
<p>)</p>
<p>// 定义 Option 结构体</p>
<p>type MyChatModelOptions struct {</p>
<p>    Options    *model.Options</p>
<p>    RetryCount int</p>
<p>    Timeout    time.Duration</p>
<p>}</p>
<p>// 定义 Option 函数</p>
<p>func WithRetryCount(count int) model.Option {</p>
<p>    return model.WrapImplSpecificOptFn(func(o *MyChatModelOptions) {</p>
<p>       o.RetryCount = count</p>
<p>    })</p>
<p>}</p>
<p>func WithTimeout(timeout time.Duration) model.Option {</p>
<p>    return model.WrapImplSpecificOptFn(func(o *MyChatModelOptions) {</p>
<p>       o.Timeout = timeout</p>
<p>    })</p>
<p>}</p>
</code>`<code>
<h2>Callback handling</h2>
<p>A </code>ChatModel<code> implementation should fire callbacks at the right times. The component defines:</p>
</code>`<code>go
<p>import (</p>
<p>    "github.com/cloudwego/eino/schema"</p>
<p>)</p>
<p>// 定义回调输入输出</p>
<p>type CallbackInput struct {</p>
<p>    Messages    []*schema.Message</p>
<p>    Model       string</p>
<p>    Temperature *float32</p>
<p>    MaxTokens   *int</p>
<p>    Extra       map[string]any</p>
<p>}</p>
<p>type CallbackOutput struct {</p>
<p>    Message    *schema.Message</p>
<p>    TokenUsage *schema.TokenUsage</p>
<p>    Extra      map[string]any</p>
<p>}</p>
</code>`<code>
<h1>End-to-end implementation sketch</h1>
</code>`<code>go
<p>import (</p>
<p>    "context"</p>
<p>    "errors"</p>
<p>    "net/http"</p>
<p>    "time"</p>
<p>    "github.com/cloudwego/eino/callbacks"</p>
<p>    "github.com/cloudwego/eino/components/model"</p>
<p>    "github.com/cloudwego/eino/schema"</p>
<p>)</p>
<p>type MyChatModel struct {</p>
<p>    client     *http.Client</p>
<p>    apiKey     string</p>
<p>    baseURL    string</p>
<p>    model      string</p>
<p>    timeout    time.Duration</p>
<p>    retryCount int</p>
<p>}</p>
<p>type MyChatModelConfig struct {</p>
<p>    APIKey string</p>
<p>}</p>
<p>func NewMyChatModel(config <em>MyChatModelConfig) (</em>MyChatModel, error) {</p>
<p>    if config.APIKey == "" {</p>
<p>       return nil, errors.New("api key is required")</p>
<p>    }</p>
<p>    return &MyChatModel{</p>
<p>       client: &http.Client{},</p>
<p>       apiKey: config.APIKey,</p>
<p>    }, nil</p>
<p>}</p>
<p>func (m <em>MyChatModel) Generate(ctx context.Context, messages []</em>schema.Message, opts ...model.Option) (*schema.Message, error) {</p>
<p>    // 1. 处理选项</p>
<p>    options := &MyChatModelOptions{</p>
<p>       Options: &model.Options{</p>
<p>          Model: &m.model,</p>
<p>       },</p>
<p>       RetryCount: m.retryCount,</p>
<p>       Timeout:    m.timeout,</p>
<p>    }</p>
<p>    options.Options = model.GetCommonOptions(options.Options, opts...)</p>
<p>    options = model.GetImplSpecificOptions(options, opts...)</p>
<p>    // 2. 开始生成前的回调</p>
<p>    ctx = callbacks.OnStart(ctx, &model.CallbackInput{</p>
<p>       Messages: messages,</p>
<p>       Config: &model.Config{</p>
<p>          Model: *options.Options.Model,</p>
<p>       },</p>
<p>    })</p>
<p>    // 3. 执行生成逻辑</p>
<p>    response, err := m.doGenerate(ctx, messages, options)</p>
<p>    // 4. 处理错误和完成回调</p>
<p>    if err != nil {</p>
<p>       ctx = callbacks.OnError(ctx, err)</p>
<p>       return nil, err</p>
<p>    }</p>
<p>    ctx = callbacks.OnEnd(ctx, &model.CallbackOutput{</p>
<p>       Message: response,</p>
<p>    })</p>
<p>    return response, nil</p>
<p>}</p>
<p>func (m <em>MyChatModel) Stream(ctx context.Context, messages []</em>schema.Message, opts ...model.Option) (<em>schema.StreamReader[</em>schema.Message], error) {</p>
<p>    // 1. 处理选项</p>
<p>    options := &MyChatModelOptions{</p>
<p>       Options: &model.Options{</p>
<p>          Model: &m.model,</p>
<p>       },</p>
<p>       RetryCount: m.retryCount,</p>
<p>       Timeout:    m.timeout,</p>
<p>    }</p>
<p>    options.Options = model.GetCommonOptions(options.Options, opts...)</p>
<p>    options = model.GetImplSpecificOptions(options, opts...)</p>
<p>    // 2. 开始流式生成前的回调</p>
<p>    ctx = callbacks.OnStart(ctx, &model.CallbackInput{</p>
<p>       Messages: messages,</p>
<p>       Config: &model.Config{</p>
<p>          Model: *options.Options.Model,</p>
<p>       },</p>
<p>    })</p>
<p>    // 3. 创建流式响应</p>
<p>    // Pipe产生一个StreamReader和一个StreamWrite，向StreamWrite中写入可以从StreamReader中读到，二者并发安全。</p>
<p>    // 实现中异步向StreamWrite中写入生成内容，返回StreamReader作为返回值</p>
<p>    // <em>*</em>StreamReader是一个数据流，仅可读一次，组件自行实现Callback时，既需要通过OnEndWithCallbackOutput向callback传递数据流，也需要向返回一个数据流，需要对数据流进行一次拷贝</p>
<p>    // 考虑到此种情形总是需要拷贝数据流，OnEndWithCallbackOutput函数会在内部拷贝并返回一个未被读取的流</p>
<p>    // 以下代码演示了一种流处理方式，处理方式不唯一</p>
<p>    sr, sw := schema.Pipe<a href="1">*model.CallbackOutput</a></p>
<p>    // 4. 启动异步生成</p>
<p>    go func() {</p>
<p>       defer sw.Close()</p>
<p>       // 流式写入</p>
<p>       m.doStream(ctx, messages, options, sw)</p>
<p>    }()</p>
<p>    // 5. 完成回调</p>
<p>    _, nsr := callbacks.OnEndWithStreamOutput(ctx, sr)</p>
<p>    return schema.StreamReaderWithConvert(nsr, func(t <em>model.CallbackOutput) (</em>schema.Message, error) {</p>
<p>       return t.Message, nil</p>
<p>    }), nil</p>
<p>}</p>
<p>func (m <em>MyChatModel) BindTools(tools []</em>schema.ToolInfo) error {</p>
<p>    // 实现工具绑定逻辑</p>
<p>    return nil</p>
<p>}</p>
<p>func (m <em>MyChatModel) doGenerate(ctx context.Context, messages []</em>schema.Message, opts <em>MyChatModelOptions) (</em>schema.Message, error) {</p>
<p>    // 实现生成逻辑</p>
<p>    return nil, nil</p>
<p>}</p>
<p>func (m <em>MyChatModel) doStream(ctx context.Context, messages []</em>schema.Message, opts <em>MyChatModelOptions, sr </em>schema.StreamWriter[*model.CallbackOutput]) {</p>
<p>    // 流式生成文本写入sr中</p>
<p>    return</p>
<p>}</p>
</code>``
<h1>References</h1>]]></content>
        <author>
            <name>Shemol</name>
            <email>shemol106@gmail.com</email>
            <uri>https://shemol.tech</uri>
        </author>
    </entry>
    <entry>
        <title type="html"><![CDATA[eino-learning-notes-2-en]]></title>
        <id>https://shemol.tech/eino-learning-notes-2-en</id>
        <link href="https://shemol.tech/eino-learning-notes-2-en"/>
        <updated>2025-04-11T00:00:00.000Z</updated>
        <summary type="html"><![CDATA[More Eino notes—continuing to learn and thinking about joining a hackathon.]]></summary>
        <content type="html"><![CDATA[<h1>Eino Learning Notes 2</h1>
<h1>Components</h1>
<p>Three common patterns for LLM application development:</p>
<p>1. Direct chat: handle user input and generate replies.</p>
<p>2. Knowledge processing: semantic processing, storage, and retrieval of text documents.</p>
<p>3. Tool calling: decide based on context and invoke the right tools.</p>
<p>Eino abstracts common capabilities into reusable <strong>components</strong>.</p>
<p>Mapping between component abstractions and these patterns:</p>
<strong>Chat-oriented components</strong>
<p>1. Modular handling of parameters for LLM interaction: <code>ChatTemplate</code></p>
<p>2. Direct LLM interaction: <code>ChatModel</code></p>
<strong>Text semantics components</strong>
<p>1. Loading and processing text documents: <code>Document.Loader</code>, <code>Document.Transformer</code></p>
<p>2. Semantic processing of documents: <code>Embedding</code></p>
<p>3. Storing indices after embedding: <code>Indexer</code></p>
<p>4. Indexing and recalling semantically related documents: <code>Retriever</code></p>
<strong>Decision and execution components</strong>
<p>Component for the model to decide and call tools: <code>ToolsNode</code></p>
<strong>Custom components</strong>
<p>User-defined logic: <code>Lambda</code></p>
<p>Eino’s component design follows these principles:</p>
<p>1. <strong>Modularity and standardization</strong>: capabilities with the same role are abstracted into uniform modules; components have clear roles and boundaries and compose flexibly.</p>
<p>2. <strong>Extensibility</strong>: interfaces constrain modules as little as possible so component authors can implement custom components easily.</p>
<p>3. <strong>Reusability</strong>: the most common capabilities and implementations are packaged for out-of-the-box use.</p>
<h1>Chain & graph orchestration</h1>
<strong>Orchestration</strong>: compose and chain atomic <strong>component</strong> capabilities.
<li>Business logic must not leak into orchestration.</li>
<li>The core of LLM apps is composing and chaining components that provide atomic abilities; components are first-class citizens in orchestration.</li>
<li>From an abstraction standpoint, orchestration builds a <strong>network</strong> through which data flows; each node expects certain shapes/content for that data. A network that flows smoothly hinges on whether <strong>upstream and downstream data formats align</strong>.</li>
<li>Complexity of scenarios shows up in the complexity of the orchestration artifact; only <strong>horizontal governance</strong> keeps complex setups under control.</li>
<li>LLMs and LLM apps keep evolving fast; only <strong>extensible</strong> applications stay viable.</li>
<p>Eino provides a graph-based (edge + node) orchestration solution that:</p>
<li>Uses <strong>components as atomic nodes</strong></li>
<li>Uses <strong>type alignment between upstream and downstream</strong> as the foundation</li>
<li>Centers on components and standardizes how business functionality is encapsulated</li>
<li>Keeps business complexity inside components so the orchestration layer has a clearer global view</li>
<li>Offers <strong>aspect-like</strong> capabilities; callbacks support unified governance per node (what “aspect capability” means)</li>
<li>Provides <strong>call options</strong>—extensibility is a basic need for fast iteration</li>
<li>Emphasizes a <strong>type-aligned</strong> development style to reduce cognitive load and leverage Go’s type safety</li>
<li>Provides <strong>automatic stream conversion</strong> so “streams” drop off the list of complexity drivers (<strong>Eino stream programming</strong>)</li>
<hr />
<strong>Graph downside</strong>: the point-and-edge graph model requires <code>graph.AddXXXNode()</code> and <code>graph.AddEdge()</code> to wire a data path—powerful but a bit heavy.
<p>Eino wraps this with the easier <strong><code>Chain</code></strong>. <code>Chain</code> is a wrapper over <code>Graph</code> and exposes almost all <code>Graph</code> capabilities except <strong>cycles</strong>.</p>]]></content>
        <author>
            <name>Shemol</name>
            <email>shemol106@gmail.com</email>
            <uri>https://shemol.tech</uri>
        </author>
    </entry>
    <entry>
        <title type="html"><![CDATA[关于学习方式-倪爽（转载）-en]]></title>
        <id>https://shemol.tech/learning-ways-from-nishuang-en</id>
        <link href="https://shemol.tech/learning-ways-from-nishuang-en"/>
        <updated>2025-04-11T00:00:00.000Z</updated>
        <summary type="html"><![CDATA[Designer Ni Shuang on how he learns.]]></summary>
        <content type="html"><![CDATA[<h1>On How to Learn — Ni Shuang (repost)</h1>
<p>Original thread: <a href="https://x.com/nishuang/status/1787939646129008771">https://x.com/nishuang/status/1787939646129008771</a></p>
<p>I’ll share a learning method you could call “throw the kid in the water—if they don’t drown at first, they’ll not only teach themselves to swim but later ask whether swimming is even something you ‘learn’… <strong>practical learning</strong>.”  </p>
<a href="https://x.com/hashtag/%E6%B4%BB%E5%88%B0%E6%AD%BB%E5%AD%A6%E5%88%B0%E6%AD%BB?src=hashtag_click">#learn-for-life</a>
<p>What I did learning design is the same as everyone else in broad strokes—copy, drill, study, principles, methods, techniques… The <strong>differences</strong> are threefold:</p>
<li>I <strong>hate</strong> school-style drills (like rote vocab—fake exercises). I practice on <strong>real</strong> projects—for myself, my company, or clients.</li>
<p>Real cases keep me focused on design and avoid a lot of designers’ self-delusion and performative confidence.</p>
<li>I <strong>take the design job first</strong>, then learn the thinking and methods.</li>
<p>Sounds high pressure, but the difficulty is controllable—and it slowly builds <strong>real</strong> confidence. Nodding along in meetings, muttering “yes!” to yourself a few times—that’s mimicry, not confidence.</p>
<li><strong>Design first</strong>, then learn, then research, then imitate—stack experience in the field, then go back for targeted deep study; eventually you can imitate masters at strategy, experience, and methodology—not just surface moves.</li>
<p>Traditional sequencing is good at turning out hands—pixel pushers, coders. For creative work like design, <strong>learning efficiency beats classroom efficiency</strong>.</p>
<p>Sounds weird?</p>
<p>Lots of people grow with similar methods.</p>
<p>The upside of “throw the kid in the water” is it runs on <strong>strong positive feedback</strong>. For someone curious but impatient, who uses brains to fake grit, this looks hard but is self-driving.</p>
<p>To this day I still learn design every day—and still throw myself in the water daily.</p>]]></content>
        <author>
            <name>Shemol</name>
            <email>shemol106@gmail.com</email>
            <uri>https://shemol.tech</uri>
        </author>
    </entry>
    <entry>
        <title type="html"><![CDATA[如何看待关税和股市大跌？股神巴菲特这样说（转载）-en]]></title>
        <id>https://shemol.tech/buffett-on-tariffs-stock-market-en</id>
        <link href="https://shemol.tech/buffett-on-tariffs-stock-market-en"/>
        <updated>2025-04-09T00:00:00.000Z</updated>
        <summary type="html"><![CDATA[How to think about tariffs and a plunging stock market?]]></summary>
        <content type="html"><![CDATA[<h1>How Should We View Tariffs and a Stock Crash? What Warren Buffett Has Said (repost)</h1>
<p>The U.S. imposed 104% tariffs on China; a senior reposted this piece. I’m mirroring it here for my own study only.</p>
<p>Original (Chinese): <a href="https://ec.ltn.com.tw/article/breakingnews/5005890">如何看待關稅和股市大跌？股神巴菲特這樣說</a></p>
<hr />
<p>〔Finance / wire〕U.S. President Trump’s reciprocal tariffs rattled global markets. As one of history’s best-known investors, Warren Buffett’s views draw constant attention. Foreign media compiled his past remarks and found he has spoken repeatedly on <strong>tariffs</strong> and <strong>stock declines</strong>; understanding his lens, the reporting suggests, may help investors navigate volatile markets.</p>
<strong>CNBC</strong> notes Buffett’s latest public comments on tariffs came in early March in a <strong>CBS</strong> interview with Norah O’Donnell. He said tariffs tend to push prices higher and that “over time, tariffs evolve into a tax on goods.” He even joked: “The tooth fairy isn’t paying for that!”
<p>The piece argues Buffett likely saw what was coming—first <strong>inflation</strong>. Asked in 2018 about Trump’s initially milder tariffs, he said duties on aluminum and steel, among others, had already raised costs for some subsidiaries. Inflation signs appeared before the new round of tariffs, but he said the tariff situation would <strong>worsen</strong> inflation.</p>
<p>Another risk he worried about is a <strong>trade war</strong>—tit-for-tat hikes between the U.S. and partners that could drag on global growth. In the March interview he even said tariffs are, in a sense, an act of war.</p>
<p>In 2019, as U.S.–China trade tension rose, he was blunter on <strong>CNBC</strong>: “If we really start a trade war, that’s bad for everyone, because the world economy is interconnected.”</p>
<p>After Trump’s latest tariff round, the S&P 500 fell though it had not yet entered a <strong>bear market</strong> (typically a 20%+ drop from a recent high). Analysts said a real bear could follow if investors fear a trade war might trigger global recession.</p>
<p>This isn’t Buffett’s first global downturn. In <strong>2008</strong>, as the financial crisis drove a bear market, he published a <em>New York Times</em> op-ed saying: “The financial world is a mess, both in the United States and abroad. What’s more, these problems are leading to a vicious cycle and have begun to spill into the real economy” and that in the short term unemployment would rise, business would stall, and headlines would only get scarier.</p>
<p>He continued: “So … I’ve been buying American stocks.”</p>
<p>He admitted he couldn’t predict the market’s next move—in fact, after his October 2008 piece the S&P kept falling for about five more months before bottoming.</p>
<p>As he often stresses, <strong>businesses as a whole</strong> keep innovating and, over the long run, improve earnings, which supports rising equity prices. In 2008 he noted many investors were unwilling to put capital at risk.</p>
<p>He argued worrying about the long-term prosperity of solid businesses is pointless: “Those businesses will indeed see ups and downs in profits, as in the past. But in 5, 10, or 20 years, most large companies will still hit new profit highs.”</p>
<p>He prefers buying when stocks are relatively cheap so long-term returns are higher. In 2008 he wrote: “In short, bad news is an investor’s best friend. It lets you buy a slice of America’s future at a discount.”</p>]]></content>
        <author>
            <name>Shemol</name>
            <email>shemol106@gmail.com</email>
            <uri>https://shemol.tech</uri>
        </author>
    </entry>
    <entry>
        <title type="html"><![CDATA[RPC-learning-notes-en]]></title>
        <id>https://shemol.tech/RPC-learning-notes-en</id>
        <link href="https://shemol.tech/RPC-learning-notes-en"/>
        <updated>2025-04-08T00:00:00.000Z</updated>
        <summary type="html"><![CDATA[RPC learning notes.]]></summary>
        <content type="html"><![CDATA[<h1>RPC learning notes</h1>
<strong>RPC</strong> — Remote Procedure Call — solves communication in distributed systems. The key idea is invoking remote logic <strong>as if</strong> it were local. RPC is not only a “microservices/cloud-native” buzzword: whenever you cross the network, you may be using RPC.
<p>Examples:</p>
<li>Large distributed apps talk to message queues, distributed caches, databases, and config centers over RPC. <strong>etcd</strong> clients speak to the server with <strong>gRPC</strong>.</li>
<li><strong>Kubernetes</strong> is inherently distributed; <strong>kube-apiserver</strong> talks to cluster components over <strong>gRPC</strong>.</li>
<p>RPC touches:</p>
<li><strong>Serialization</strong>: objects ↔ bytes (and back), for cross-network / cross-language exchange.</li>
<li><strong>Compression</strong>: less data on the wire, lower bandwidth and latency.</li>
<li><strong>Protocol</strong>: rules for format and interaction—HTTP/2, TCP, UDP, etc.</li>
<li><strong>Dynamic proxies</strong>: hide remote-call plumbing so code looks like local methods—JDK proxies, bytecode enhancement.</li>
<li><strong>Service registry & discovery</strong>: track live instances, enable load balancing and failover—ZooKeeper, Consul, etcd store addresses and metadata.</li>
<li><strong>Encryption</strong>: confidentiality and integrity; mitigate MITM and tampering.</li>
<li><strong>Network I/O models</strong>: efficient, stable communication—connections, send/receive, and much more (peer lookup, connection setup, encode/decode, connection lifecycle). RPC wraps this stack so building distributed systems is simpler and safer.</li>
<p>At <strong>cluster</strong> scale you also care about:</p>
<li>Monitoring  </li>
<li>Circuit breaking & rate limiting  </li>
<li>Graceful startup/shutdown  </li>
<li>Multiple protocols  </li>
<li>Distributed tracing  </li>
<p>Where RPC frameworks really shine:</p>
<li>Connection management  </li>
<li>Health checks  </li>
<li>Load balancing  </li>
<li>Graceful lifecycle  </li>
<li>Retry on failure  </li>
<li>Traffic / tenant grouping  </li>
<li>Circuit breaking & rate limiting  </li>
<p>Without an RPC framework you’d still call another machine—but you’d hand-roll all of the above.</p>
<p>RPC <strong>hides network details</strong> so remote calls feel like in-process methods, without boilerplate unrelated to your domain.</p>
<p>Two big wins:</p>
<li>Blur the line between local and remote calls.  </li>
<li>Hide low-level networking so you focus on business logic.</li>
<h1>Serialization</h1>
<p>On the wire everything is bytes, yet call parameters are objects—you need a <strong>reversible</strong> mapping to binary.</p>
<p>Message <strong>headers</strong> usually carry protocol id, length, request type, serializer type, etc.; the <strong>body</strong> carries business fields and extensions.</p>
<h1>Deserialization</h1>
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/RPC%E5%AD%A6%E4%B9%A0%E7%AC%94%E8%AE%B0_1770869763125.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/RPC%E5%AD%A6%E4%B9%A0%E7%AC%94%E8%AE%B0_1770869764196.png" alt="" />
<p>RPC-style stacks also underpin messaging, distributed caches, and databases.</p>
<p>RPC and HTTP both live at the <strong>application</strong> layer.</p>
<p>Before hitting the network, the client serializes the call arguments to bytes, writes them to a local socket, and the NIC sends them out.</p>
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/RPC%E5%AD%A6%E4%B9%A0%E7%AC%94%E8%AE%B0_1770869765391.png" alt="" />
<p>For an evolvable, backward-compatible protocol, lean on <strong>extensible header and payload fields</strong>.</p>
<p>Pick serializers per scenario.</p>
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/RPC%E5%AD%A6%E4%B9%A0%E7%AC%94%E8%AE%B0_1770869766673.png" alt="" />
<p>Common choices:</p>
<li><strong>JDK native</strong> serialization  </li>
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/RPC%E5%AD%A6%E4%B9%A0%E7%AC%94%E8%AE%B0_1770869768004.png" alt="" />
<p>Every serializer is really a <strong>wire protocol</strong> design.</p>
<li><strong>JSON</strong>: key/value text, weak typing. Downsides: bulky on the wire; in Java you pay reflection cost. Fine only when payloads stay small.</li>
<li><strong>Hessian</strong>: dynamic, binary, compact, multi-language. Smaller/faster than JDK/JSON for many workloads. Caveats in stock Hessian: some Java types missing—Linked\* maps/sets (extend <code>CollectionDeserializer</code>), <code>Locale</code> (extend <code>ContextSerializerFactory</code>), <code>Byte</code>/<code>Short</code> widening to <code>Integer</code>, etc.</li>
<li><strong>Protobuf</strong>: Google’s cross-language structured format. You define <strong>IDL</strong>, compile stubs per language. Strengths: small payloads, clear semantics without XML parsers, fast encode/decode without reflection per field, decent evolution story. (Some Java-centric tools mirror Protobuf wire format without separate IDL files; corner cases exist.)</li>
<p>Also <strong>MessagePack</strong>, <strong>Kryo</strong>, etc. Selection factors:</p>
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/RPC%E5%AD%A6%E4%B9%A0%E7%AC%94%E8%AE%B0_1770869769098.png" alt="" />
<strong>Hessian</strong> vs <strong>Protobuf</strong> are common defaults: both score well on performance, CPU, size, generality, compatibility, and security. Hessian is often easier for Java object graphs; Protobuf wins on efficiency and portability.
<p>Watch out for:</p>
<li>Overly deep/nested objects.  </li>
<li>Huge messages.  </li>
<li>Parameter types your serializer cannot handle.  </li>
<li>Deep inheritance hierarchies.</li>
<h1>Which I/O model?</h1>
<p>Common models:</p>
<li>Blocking IO (BIO)  </li>
<li>Non-blocking IO (NIO)  </li>
<li>I/O multiplexing  </li>
<li>Async IO (AIO)  </li>
<p>Only <strong>AIO</strong> is truly async from the app’s perspective; the rest are <strong>synchronous</strong> I/O with different waiting styles.</p>
<strong>Blocking IO</strong> is the default for sockets on Linux: the thread blocks from syscall until data is ready <strong>and</strong> copied to userspace—two phases: wait, then copy. One blocking socket often means one thread in classic Java servers.
<strong>I/O multiplexing</strong> (select/poll/epoll) powers high concurrency: NIO, Redis, Nginx, classic <strong>Reactor</strong>. Many sockets register with a multiplexer; <code>select</code> blocks until <strong>some</strong> socket is ready, then you <code>read</code>. More moving parts than plain blocking IO, but <strong>one thread</strong> can progress many sockets—blocking IO would need a thread per socket.
<p>Why BIO + multiplexing dominate: kernels and languages support them widely; signal-driven IO / true async IO need newer kernels. High-performance frameworks (e.g. <strong>Netty</strong>) are Reactor-style on multiplexing. For low QPS, blocking IO is still common.</p>
<strong>RPC servers</strong> usually pick <strong>multiplexing</strong> (plus <strong>Reactor</strong> frameworks like <strong>Netty</strong> on Java). On Linux, <strong>epoll</strong> matters; Windows lacks epoll.
<blockquote><strong>Reactor in one paragraph</strong>  </blockquote>
<blockquote>Event-driven networking: multiplexing (select/epoll/kqueue) watches many channels; the reactor loop dispatches readiness events to handlers—avoiding a thread per connection.  </blockquote>
<blockquote>- <strong>Reactor</strong>: central event loop + dispatcher.  </blockquote>
<blockquote>- <strong>Acceptor</strong>: accepts connections, registers channels.  </blockquote>
<blockquote>- <strong>Handlers</strong>: read/decode, business logic, encode/write—sometimes offloaded to thread pools.</blockquote>
<h1>Zero copy</h1>
<p>Kernel I/O still has <strong>wait</strong> and <strong>copy</strong> phases.</p>
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/RPC%E5%AD%A6%E4%B9%A0%E7%AC%94%E8%AE%B0_1770869770162.png" alt="" />
<p>A typical write copies user buffer → kernel buffer → NIC (DMA); reads reverse the path—<strong>two copies</strong> and <strong>two context switches</strong> per direction if you count user/kernel boundaries.</p>
<strong>Zero-copy</strong> avoids redundant user↔kernel copies; DMA still moves data to/from the NIC.
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/RPC%E5%AD%A6%E4%B9%A0%E7%AC%94%E8%AE%B0_1770869771154.png" alt="" />
<p>Two main patterns:</p>
<li><strong><code>mmap</code> + <code>write</code></strong>: map file pages into user space; skip one copy, still multiple transitions; good if you must touch bytes before send.  </li>
<li><strong><code>sendfile</code></strong>: kernel-to-kernel path, fewer syscalls; with <strong>SG-DMA</strong> sometimes only two DMA hops. No user visibility of bytes in flight—best for blind relay of large files.</li>
<p>Pick <strong><code>mmap+write</code></strong> if you must preprocess data; prefer <strong><code>sendfile</code></strong> (especially with SG-DMA) for pure forwarding.</p>
<strong>Netty “zero-copy”</strong> is mostly JVM-level: <code>CompositeByteBuf</code>, <code>slice</code>, <code>wrap</code> to avoid buffer copies; <code>FileRegion</code> + <code>FileChannel.transferTo()</code> mirrors Linux <code>sendfile</code>.
<h1>Dynamic proxies</h1>
<p>Interface + generated <strong>proxy</strong>: dependency injection binds the interface to a proxy that intercepts calls and performs the remote path. (I haven’t stepped through concrete code here.)</p>
<p>Priorities: fast proxy generation, small bytecode, hot-path efficiency, ergonomic APIs, active community, light dependencies.</p>
<strong>gRPC</strong>
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/RPC%E5%AD%A6%E4%B9%A0%E7%AC%94%E8%AE%B0_1770869773779.png" alt="" />
<h2>Framing</h2>
<p>You need delimiters (“sentence breaks”) around each request’s binary payload so the peer can parse streams of calls—<strong>framing / protocol encapsulation</strong>.</p>
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/RPC%E5%AD%A6%E4%B9%A0%E7%AC%94%E8%AE%B0_1770869775929.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/RPC%E5%AD%A6%E4%B9%A0%E7%AC%94%E8%AE%B0_1770869777676.png" alt="" />
<h2>Service discovery: CP or AP?</h2>
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/RPC%E5%AD%A6%E4%B9%A0%E7%AC%94%E8%AE%B0_1770869779051.png" alt="" />
<p>1. <strong>Registration</strong>: providers register endpoints with the registry.  </p>
<p>2. <strong>Subscription</strong>: consumers fetch and cache provider addresses for later calls.</p>
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/RPC%E5%AD%A6%E4%B9%A0%E7%AC%94%E8%AE%B0_1770869780595.png" alt="" />
<strong>DNS</strong> as discovery: all instances behind one name looks fine until you need fast <strong>add/remove</strong>. DNS TTL and caching mean callers rarely see new nodes or drops immediately—usually <strong>no</strong> to both “timely drain” and “instant scale-out”.
<h3>ZooKeeper-style</h3>
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/RPC%E5%AD%A6%E4%B9%A0%E7%AC%94%E8%AE%B0_1770869781800.png" alt="" />
<p>1. Admin creates a root znode per service (e.g. <code>/service/com.demo.xxService</code>), with <code>provider</code> / <code>consumer</code> subtrees.  </p>
<p>2. Providers create <strong>ephemeral</strong> nodes under <code>provider</code> with metadata.  </p>
<p>3. Consumers create their own ephemeral nodes and <strong>watch</strong> the <code>provider</code> subtree.  </p>
<p>4. Any provider change pushes a notification to watchers.</p>
<p>ZK favors <strong>strong consistency</strong>—every update is replicated synchronously, which limits throughput.</p>
<h3>Eventually consistent registry (message bus)</h3>
<p>Callers can tolerate learning about new pods <strong>seconds</strong> later; brief zero-traffic windows are often acceptable. Trading <strong>CP</strong> for <strong>AP</strong> improves registry scale and resilience.</p>
<strong>Message-bus replication</strong>: each registry node holds a full in-memory cache; a registration event publishes on the bus; peers update and push new routes—<strong>eventual consistency</strong> across registry instances.
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/RPC%E5%AD%A6%E4%B9%A0%E7%AC%94%E8%AE%B0_1770869783000.png" alt="" />
<h1>Afterword</h1>
<p>Next steps: read <strong>gRPC</strong> and <strong>Kitex</strong> code, plus ByteDance cloud-native articles—but the deeper win is nailing fundamentals underneath any particular OSS project.</p>]]></content>
        <author>
            <name>Shemol</name>
            <email>shemol106@gmail.com</email>
            <uri>https://shemol.tech</uri>
        </author>
    </entry>
    <entry>
        <title type="html"><![CDATA[关于黄金作为投资物-en]]></title>
        <id>https://shemol.tech/gold-as-investment-en</id>
        <link href="https://shemol.tech/gold-as-investment-en"/>
        <updated>2025-04-05T00:00:00.000Z</updated>
        <summary type="html"><![CDATA[Gold is not a productive asset; notes from Buffett, Chen Jiahe, and others.]]></summary>
        <content type="html"><![CDATA[<h1>On gold as an investment</h1>
<p>I’m writing this to digest what I’ve read and put it in my own words—for my own learning only.</p>
<p>First, Buffett:</p>
<blockquote>I’ve never wanted to swap my stocks for gold. I’d rather bet on great businesses and trust that their intrinsic value will grow steadily. They’re run by strong managers and sell things people love today and will love tomorrow. Compared with digging metal out of the ground in South Africa, shipping and insuring it, and locking it in Fort Knox, people would rather spend their wages on See’s peanut brittle, Coca-Cola, or things like that.</blockquote>
<p>></p>
<blockquote>Although my father was keen on the gold standard, I’ve never been excited about gold. I’ve never really owned it, but I grew up in a gold-friendly household—I’ve given it a chance—and I still don’t see what its intrinsic value is. We sell gold items at Borsheim’s, but I would never sell stocks to buy gold. Trading <strong>productive assets</strong> for <strong>non-productive</strong> ones feels alien to me.</blockquote>
<p>></p>
<blockquote>In past shareholder letters Buffett also wrote:</blockquote>
<p>></p>
<blockquote>Besides cash-like assets, there’s another class that’s usually wrong to hold—assets that generate <strong>no</strong> cash flow and rely on someone paying more later: gold, art, antiques, etc. He calls them non-productive; the opposite is productive assets that throw off cash.</blockquote>
<p>></p>
<blockquote>He used gold to illustrate: <strong>Roughly 170,000 tons</strong> of gold exist worldwide; melted into a cube about <strong>21 meters</strong> on a side. Humanity dug it up, refined it, buried it again, and posts guards—it still produces nothing. People buy it hoping more buyers will pay more later.</blockquote>
<p>Gold produces no cash flow and only hopes the next person pays more—it’s non-productive.</p>
<p>Below is <strong>Chen Jiahe</strong>’s angle:</p>
<p>Gold doesn’t create value. Good stocks compound earnings; a cheap price lets you buy that stream cheaply. Gold doesn’t “grow” because you store it well.</p>
<p>Assets that compound year after year beat zero growth over long horizons—same message as Buffett.</p>
<p>Gold is hard to “trade up” with. With stocks you enjoy fundamentals <strong>and</strong> can rebalance into better risk/reward names, so your portfolio’s fundamental growth can outpace any single stock. Real estate and collectibles are similar. But <strong>all gold is the same</strong>—it’s so simple that mispricing is rare, so holders rarely gain extra “fundamental” upside from trading—the very engine value investors rely on.</p>
<p>Gold isn’t great “disaster insurance.” If society collapses to where gold matters, <strong>food (cans), medicine, and means of protection</strong> bought at today’s prices would beat gold.</p>
<p>I once told Sasa I wanted to buy gold during the pandemic—that idea was wrong. Glad to drop another bad instinct.</p>
<strong>Tang Erseng</strong> put it bluntly: buying gold is buying <strong>peace of mind</strong>—emotional value.
<p>Stable chemistry, scarcity, or social consensus alone doesn’t make something “precious.” (I still wonder: all three together—what about crypto? I’m too much of an outsider.)</p>
<p>Buying gold is buying emotional value—call it forced saving if you like.</p>
<strong>References</strong>
<li><a href="https://mp.weixin.qq.com/s/9cwjIGgqYPG-6DNk8Z-PAQ">投资闲谈：巴菲特谈黄金</a></li>
<li><a href="https://mp.weixin.qq.com/s/fffqAMS3jYIVyyNxpuAREw">猫猫看市：为啥我不爱黄金</a></li>
<li><a href="https://mp.weixin.qq.com/s/fffqAMS3jYIVyyNxpuAREw">我们买黄金到底是在买什么？</a></li>]]></content>
        <author>
            <name>Shemol</name>
            <email>shemol106@gmail.com</email>
            <uri>https://shemol.tech</uri>
        </author>
    </entry>
    <entry>
        <title type="html"><![CDATA[KubeEdge-Sedna源码解析（转载）-en]]></title>
        <id>https://shemol.tech/kubeedge-sedna-sourcecode-analysis-en</id>
        <link href="https://shemol.tech/kubeedge-sedna-sourcecode-analysis-en"/>
        <updated>2025-01-09T00:00:00.000Z</updated>
        <summary type="html"><![CDATA[Repost: Sedna source code walkthrough.]]></summary>
        <content type="html"><![CDATA[<h1>KubeEdge–Sedna source code analysis (repost)</h1>
<p>Original author: <a href="https://github.com/jaypume">jaypume</a>  </p>
<p>Original lecture video: <a href="https://www.bilibili.com/video/BV1hg4y1b78L">https://www.bilibili.com/video/BV1hg4y1b78L</a>  </p>
<p>Original article: <a href="https://github.com/jaypume/article/blob/main/sedna/%E8%BE%B9%E4%BA%91%E5%8D%8F%E5%90%8CAI%E6%A1%86%E6%9E%B6Sedna%E6%BA%90%E7%A0%81%E8%A7%A3%E6%9E%90/README.MD">https://github.com/jaypume/article/blob/main/sedna/边云协同AI框架Sedna源码解析/README.MD</a>  </p>
<p>Reposted for personal study and easier reference.</p>
<h1>KubeEdge–Sedna overview</h1>
<p>Sedna is an edge–cloud collaborative AI project incubated in the KubeEdge SIG AI. Building on KubeEdge’s edge–cloud capabilities, Sedna supports collaborative training and inference across edge and cloud—for example joint inference, incremental learning, federated learning, and lifelong learning. It works with widely used AI frameworks such as TensorFlow, PyTorch, and MindSpore, so existing AI workloads can migrate to Sedna with minimal friction to gain collaborative training and inference, with potential benefits in cost, model quality, and data privacy.</p>
<p>Project home:  </p>
<p>https://github.com/kubeedge/sedna  </p>
<p>Documentation:  </p>
<p>https://sedna.readthedocs.io</p>
<h2>Overall architecture</h2>
<p>Sedna’s edge–cloud collaboration relies on the following KubeEdge capabilities:</p>
<p>* Unified orchestration of applications across edge and cloud  </p>
<p>* Router: a reliable management-plane messaging channel between cloud and edge  </p>
<p>* EdgeMesh: data-plane cross-edge–cloud service discovery and traffic management  </p>
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/KubeEdge-Sedna%E6%BA%90%E7%A0%81%E8%A7%A3%E6%9E%90%EF%BC%88%E8%BD%AC%E8%BD%BD%EF%BC%89_1770869745557.png" alt="" />
<strong>Core components</strong>:
<li><strong>GlobalManager</strong></li>
<p>  - Unified management of edge–cloud collaborative AI jobs</p>
<p>  - Cross-edge–cloud coordination and management</p>
<p>  - Central configuration management</p>
<li><strong>LocalController</strong></li>
<p>  - Local control flow for collaborative AI jobs</p>
<p>  - Local common management: models, datasets, status sync, etc.</p>
<li><strong>Lib</strong></li>
<p>  - For AI and application developers: exposes collaborative AI capabilities to applications</p>
<li><strong>Worker</strong></li>
<p>  - Runs training or inference jobs—training/inference programs built on existing AI frameworks</p>
<p>  - Each feature maps to a worker group; workers can run on edge or cloud and cooperate</p>
<h2>Repository layout</h2>
<p>| Directory | Description |</p>
<p>| --- | --- |</p>
<p>| .github | Sedna GitHub CI/CD pipeline configuration. |</p>
<p>| LICENSES | Sedna licenses and related vendor licenses. |</p>
<p>| build | Dockerfiles for building GM/LC and other control-plane components; generated CRD YAML; sample CRD YAML. |</p>
<p>| cmd | Entrypoints for GM/LC control-plane binaries. |</p>
<p>| components | Monitoring and visualization components. |</p>
<p>| docs | Proposals and installation docs. |</p>
<p>| examples | Examples for joint inference, incremental learning, lifelong learning, and federated learning. |</p>
<p>| hack | Code generators and other scripts for developers. |</p>
<p>| lib | Sedna Library—Python dependency for building collaborative AI applications. |</p>
<p>| pkg | API definitions; generated client-go code for CRDs; core Sedna GM/LC control-plane code. |</p>
<p>| scripts | Installation scripts for users. |</p>
<p>| test | E2E tests and tooling. |</p>
<p>| vendor | Vendored third-party source. |</p>
<h1>Sedna control plane source (Go)</h1>
<h2>GM: Global Manager</h2>
<h3>GM as a Kubernetes operator</h3>
<strong>What is an operator?</strong>
<blockquote>An Operator is an application-specific controller that extends the Kubernetes API to create, configure and manage instances of complex stateful applications on behalf of a Kubernetes user. It builds upon the basic Kubernetes resource and controller concepts, but also includes domain or application-specific knowledge to automate common tasks better managed by computers. [1]</blockquote>
<p>For Sedna, the project governs how collaborative AI applications configure worker startup parameters, how they coordinate, and how data and artifacts flow. We can define it this way: <strong>Sedna GM is a domain-specific controller for “edge–cloud collaborative AI applications.”</strong></p>
<blockquote>The following components form the three main parts of an operator:</blockquote>
<blockquote>- <em>API</em>: The data that describes the operand’s configuration. The API includes:</blockquote>
<blockquote>  - <strong><em>Custom resource definition (CRD)</em></strong>, which defines a schema of settings available for configuring the operand.</blockquote>
<blockquote>  - <strong><em>Programmatic API</em></strong>, which defines the same data schema as the CRD and is implemented using the operator’s programming language, such as <a href="https://developers.redhat.com/blog/category/go/"><em>Go</em></a>.</blockquote>
<blockquote>  - <strong><em>Custom resource (CR)</em></strong>, which specifies values for the settings defined by the CRD; these values describe the configuration of an operand.</blockquote>
<blockquote>- <strong><em>Controller</em></strong>: The brains of the operator. The controller creates managed resources based on the description in the custom resource; controllers are implemented using the operator’s programming language, such as Go. [2]</blockquote>
<p>From Red Hat’s definition, the main pieces of a Kubernetes operator are CRD, API, CR, and Controller.</p>
<p>The following diagram illustrates the Sedna GM operator:</p>
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/KubeEdge-Sedna%E6%BA%90%E7%A0%81%E8%A7%A3%E6%9E%90%EF%BC%88%E8%BD%AC%E8%BD%BD%EF%BC%89_1770869746491.jpg" alt="" />
<p>The following sections follow that breakdown—CR, CRD, API, and Controller—with Controller as the main control logic.</p>
<h3>CR</h3>
<p>Sedna supports collaborative inference, incremental learning, lifelong learning, and federated learning. For clarity, this article walks through <strong>lifelong learning</strong> using its concrete behavior and examples. The other three features share similar patterns in the codebase.</p>
<strong>CR example</strong>
<p>Below is a lifelong learning <a href="https://github.com/kubeedge/sedna/blob/main/build/crd-samples/sedna/lifelonglearningjobv1alpha1.yaml">CR sample</a>. You can create the corresponding lifelong learning object with <code>kubectl</code> from this CR; see <a href="https://github.com/kubeedge/sedna/tree/main/examples/lifelong_learning/atcii">this example</a> for full steps. Important fields:</p>
<li><code>dataset</code>: name of the dataset object (itself a CR).</li>
<li><code>trainSpec</code>: container settings for the training worker in lifelong learning—image, env, etc.</li>
<li><code>trigger</code>: conditions that start the training worker in lifelong learning.</li>
<li><code>evalSpec</code>: container settings for the evaluation worker—image, env, etc.</li>
<li><code>deploySpec</code>: container settings for the inference worker—image, env, etc.</li>
<li><code>outputDir</code>: where trained model artifacts are written in lifelong learning.</li>
<code>build/crd-samples/sedna/lifelonglearningjobv1alpha1.yaml</code>
<p>``<code>yaml</p>
<p>apiVersion: sedna.io/v1alpha1</p>
<p>kind: LifelongLearningJob</p>
<p>metadata:</p>
<p>  name: atcii-classifier-demo</p>
<p>spec:</p>
<p>  dataset:</p>
<p>    name: "lifelong-dataset"</p>
<p>    trainProb: 0.8</p>
<p>  trainSpec:</p>
<p>    template:</p>
<p>      spec:</p>
<p>        nodeName:  "edge-node"</p>
<p>        containers:</p>
<p>          - image: kubeedge/sedna-example-lifelong-learning-atcii-classifier:v0.3.0</p>
<p>            name:  train-worker</p>
<p>            imagePullPolicy: IfNotPresent</p>
<p>            args: ["train.py"]</p>
<p>            env:</p>
<p>              - name: "early_stopping_rounds"</p>
<p>                value: "100"</p>
<p>              - name: "metric_name"</p>
<p>                value: "mlogloss"</p>
<p>    trigger:</p>
<p>      checkPeriodSeconds: 60</p>
<p>      timer:</p>
<p>        start: 02:00</p>
<p>        end: 24:00</p>
<p>      condition:</p>
<p>        operator: ">"</p>
<p>        threshold: 500</p>
<p>        metric: num_of_samples</p>
<p>  evalSpec:</p>
<p>    template:</p>
<p>      spec:</p>
<p>        nodeName:  "edge-node"</p>
<p>        containers:</p>
<p>          - image: kubeedge/sedna-example-lifelong-learning-atcii-classifier:v0.3.0</p>
<p>            name:  eval-worker</p>
<p>            imagePullPolicy: IfNotPresent</p>
<p>            args: ["eval.py"]</p>
<p>            env:</p>
<p>              - name: "metrics"</p>
<p>                value: "precision_score"</p>
<p>              - name: "metric_param"</p>
<p>                value: "{'average': 'micro'}"</p>
<p>              - name: "model_threshold"</p>
<p>                value: "0.5"</p>
<p>  deploySpec:</p>
<p>    template:</p>
<p>      spec:</p>
<p>        nodeName:  "edge-node"</p>
<p>        containers:</p>
<p>        - image: kubeedge/sedna-example-lifelong-learning-atcii-classifier:v0.3.0</p>
<p>          name:  infer-worker</p>
<p>          imagePullPolicy: IfNotPresent</p>
<p>          args: ["inference.py"]</p>
<p>          env:</p>
<p>          - name: "UT_SAVED_URL"</p>
<p>            value: "/ut_saved_url"</p>
<p>          - name: "infer_dataset_url"</p>
<p>            value: "/data/testData.csv"</p>
<p>          volumeMounts:</p>
<p>          - name: utdir</p>
<p>            mountPath: /ut_saved_url</p>
<p>          - name: inferdata</p>
<p>            mountPath: /data/</p>
<p>          resources:</p>
<p>            limits:</p>
<p>              memory: 2Gi</p>
<p>        volumes:</p>
<p>          - name: utdir</p>
<p>            hostPath:</p>
<p>              path: /lifelong/unseen_task/</p>
<p>              type: DirectoryOrCreate</p>
<p>          - name: inferdata</p>
<p>            hostPath:</p>
<p>              path:  /data/</p>
<p>              type: DirectoryOrCreate</p>
<p>  outputDir: "/output"</p>
</code>`<code>
<h3>CRD</h3>
<p>A CRD is the template for CRs. Before you can create CRs of a kind, the CRD must be registered in the cluster. CRD YAML can be written by hand or generated; for complex CRDs, generation is recommended. Sedna uses kubebuilder’s <a href="https://book.kubebuilder.io/reference/controller-gen.html#controller-gen-cli">controller-gen</a>. The repo wraps this in scripts—run </code>make crds<code> to generate or refresh CRDs under </code>build/crds/<code>. See the </code>crds: controller-gen<code> target in </code>Makefile<code>.</p>
<p>A CRD must declare </code>group<code>, </code>version<code>, and </code>kind<code>—often shortened to <strong>GVK</strong>. CR instances are <strong>resources</strong>; loosely, Resource is like an object instance and Kind is like a class—so a Resource is an instance of a Kind. The table below maps lifelong learning CRD and CR to GVR/GVK:</p>
<p>|  | Group | Version | Resource | Kind |</p>
<p>| --- | --- | --- | --- | --- |</p>
<p>| CRD | apiextensions.k8s.io | v1 | lifelonglearningjobs.sedna.io | CustomResourceDefinition |</p>
<p>| CR | sedna.io | v1alpha1 | lifelonglearningjob | LifelongLearningJob |</p>
<p>In Kubernetes, resources are exposed via REST URIs organized as follows:</p>
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/KubeEdge-Sedna%E6%BA%90%E7%A0%81%E8%A7%A3%E6%9E%90%EF%BC%88%E8%BD%AC%E8%BD%BD%EF%BC%89_1770869747439.jpg" alt="" />
<p>With these rules you can construct REST URIs for resources—useful when you cannot rely on kubectl or client-go. Examples:</p>
<p>Fetch the lifelong learning CRD via REST:</p>
</code>`<code>plain text
<p>curl -k --cert ./client.crt --key ./client.key https://127.0.0.1:5443/apis/apiextensions.k8s.io/v1beta1/customresourcedefinitions/lifelonglearningjobs.sedna.io</p>
</code>`<code>
<p>List lifelong learning CRs via REST:</p>
</code>`<code>plain text
<p>curl -k --cert ./client.crt --key ./client.key https://127.0.0.1:5443/apis/sedna.io/v1alpha1/lifelonglearningjobs</p>
</code>`<code>
<p>Languages without an official Kubernetes client can wrap these REST patterns uniformly.</p>
<p>Key fields in Sedna’s lifelong learning CRD:</p>
<li></code>apiVersion: apiextensions.k8s.io/v1<code> — CRDs extend this API version.</li>
<li></code>kind: CustomResourceDefinition<code> — all CRDs use this kind.</li>
<li></code>spec.group: sedna.io<code> — API group for custom resources.</li>
<li></code>spec.names.kind: LifelongLearningJob<code> — the new resource type.</li>
<li></code>spec.names.shortNames: - ll<code> — </code>kubectl<code> short name for </code>LifelongLearningJob<code>.</li>
</code>build/crds/sedna.io_lifelonglearningjobs.yaml<code>
</code>`<code>yaml
<p>apiVersion: apiextensions.k8s.io/v1</p>
<p>kind: CustomResourceDefinition</p>
<p>metadata:</p>
<p>  annotations:</p>
<p>    controller-gen.kubebuilder.io/version: v0.4.1</p>
<p>  creationTimestamp: null</p>
<p>  name: lifelonglearningjobs.sedna.io</p>
<p>spec:</p>
<p>  group: sedna.io</p>
<p>  names:</p>
<p>    kind: LifelongLearningJob</p>
<p>    listKind: LifelongLearningJobList</p>
<p>    plural: lifelonglearningjobs</p>
<p>    shortNames:</p>
<p>    - ll</p>
<p>    singular: lifelonglearningjob</p>
<p>  scope: Namespaced</p>
<p>  versions:</p>
<p>  - name: v1alpha1</p>
<p>	...</p>
<p>status:</p>
<p>  acceptedNames:</p>
<p>    kind: ""</p>
<p>    plural: ""</p>
<p>  conditions: []</p>
<p>  storedVersions: []</p>
</code>`<code>
<h3>API</h3>
<p>The CRDs are auto-generated—where do the underlying API definitions live?  </p>
</code>pkg/apis/sedna/v1alpha1/lifelonglearningjob_types.go<code>
</code>`<code>go
<p>package v1alpha1</p>
<p>import (</p>
<p>	v1 "k8s.io/api/core/v1"</p>
<p>	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"</p>
<p>)</p>
<p>// Shown here:</p>
<p>// +genclient</p>
<p>// +k8s:deepcopy-gen:interfaces=k8s.io/apimachinery/pkg/runtime.Object</p>
<p>// +kubebuilder:resource:shortName=ll</p>
<p>// +kubebuilder:subresource:status</p>
<p>// LifelongLearningJob API definition: primarily Spec and Status—desired vs observed state.</p>
<p>type LifelongLearningJob struct {</p>
<p>	metav1.TypeMeta   </code>json:",inline"<code></p>
<p>	metav1.ObjectMeta </code>json:"metadata"<code></p>
<p>	Spec              LLJobSpec   </code>json:"spec"<code></p>
<p>	Status            LLJobStatus </code>json:"status,omitempty"<code></p>
<p>}</p>
<p>// Parameters required when creating a LifelongLearningJob; extend lifelong-learning fields here.</p>
<p>type LLJobSpec struct {</p>
<p>	Dataset    LLDataset    </code>json:"dataset"<code></p>
<p>	TrainSpec  LLTrainSpec  </code>json:"trainSpec"<code></p>
<p>	EvalSpec   LLEvalSpec   </code>json:"evalSpec"<code></p>
<p>	DeploySpec LLDeploySpec </code>json:"deploySpec"<code></p>
<p>	// the credential referer for OutputDir</p>
<p>	CredentialName string </code>json:"credentialName,omitempty"<code></p>
<p>	OutputDir      string </code>json:"outputDir"<code></p>
<p>}</p>
<p>type LLDataset struct {</p>
<p>	Name      string  </code>json:"name"<code></p>
<p>	TrainProb float64 </code>json:"trainProb"<code></p>
<p>}</p>
<p>// Additional struct definitions are omitted.</p>
</code>`<code>
<p>Takeaways from the snippet:</p>
<li></code>// +kubebuilder...<code> — comments are inputs to kubebuilder and similar generators.</li>
<li></code>type LifelongLearningJob struct{...}<code> — top-level API for the CRD; holds Spec (desired) and Status (observed).</li>
<li></code>type LLJobSpec struct {...}<code> — fields for the CR; extend here when adding lifelong-learning parameters.</li>
<p>API types for joint inference, incremental learning, and federated learning live under </code>pkg/apis/sedna/v1alpha1/<code>.</p>
<strong>Regenerate client-go code</strong>
<p>After you add or change definitions in </code>*_types.go<code>, refresh generated clients:</p>
</code>`<code>plain text
<p>bash hack/update-codegen.sh</p>
</code>`<code>
<p>Generated code is under </code>pkg/client<code>:</p>
</code>`<code>plain text
<p>➜  pkg tree client -L 2</p>
<p>client</p>
<p>├── clientset</p>
<p>│   └── versioned</p>
<p>├── informers</p>
<p>│   └── externalversions</p>
<p>└── listers</p>
<p>    └── sedna</p>
</code>`<code>
<p>These clients are used throughout Controller logic.</p>
<strong>Regenerate CRD manifests</strong>
<p>After API changes, refresh CRD YAML:</p>
</code>`<code>plain text
<p>make crds</p>
</code>`<code>
<p>CRDs land in </code>build/crds<code>. Re-apply them to the cluster with </code>kubectl apply<code> so the cluster picks up changes.</p>
<h3>Controller</h3>
<p>The main lifelong-learning control logic lives in </code>pkg/globalmanager/controllers/lifelonglearning/lifelonglearningjob.go<code>—when train/eval workers run, how parameters sync to the edge, etc.</p>
<p>Before diving in, the overall call flow can be sketched as:</p>
</code>`<code>go
<p>cmd/sedna-gm/sedna-gm.go/main() 【1】</p>
<p>pkg/globalmanager/controllers/manager.go/New() 【2】 load GM config.</p>
<p>pkg/globalmanager/controllers/manager.go/Start() 【3】 start GM.</p>
<p>    - clientset.NewForConfig()：【4】 build Sedna CRD client from client-go.</p>
<p>    - NewUpstreamController()：【5】 one UpstreamController per GM process</p>
<p>    - uc.Run(stopCh)： goroutine loop handling</p>
<p>        - pkg/globalmanager/controllers/upstream.go/syncEdgeUpdate() </p>
<p>    - NewRegistry()：【6】 register all feature controllers.</p>
<p>        - f.SetDownstreamSendFunc()【7】</p>
<p>            -> pkg/globalmanager/controllers/lifelonglearning/downstream.go</p>
<p>        - f.SetUpstreamHandler()【8】</p>
<p>            -> pkg/globalmanager/controllers/lifelonglearning/upstream.go/updateFromEdge()</p>
<p>        - f.Run()【9】</p>
<p>    - ws.ListenAndServe() 【10】</p>
</code>`<code>
<p>The following subsections follow markers 【1】–【10】.</p>
<h3>【1】 </code>main<code> entrypoint</h3>
</code>sedna-gm.go<code> is the GM binary entry: logging setup, </code>app.NewControllerCommand()<code> parses flags and starts GM controllers.
</code>cmd/sedna-gm/sedna-gm.go<code>
</code>`<code>go
<p>func main() {</p>
<p>   rand.Seed(time.Now().UnixNano())</p>
<p>   command := app.NewControllerCommand()</p>
<p>   logs.InitLogs()</p>
<p>   defer logs.FlushLogs()</p>
<p>   if err := command.Execute(); err != nil {</p>
<p>      os.Exit(1)</p>
<p>   }</p>
<p>}</p>
</code>`<code>
<h3>【2】 Load GM configuration</h3>
<p>GM loads cluster config, WebSocket listen address/port, knowledge-base (KB) endpoints, etc.</p>
</code>pkg/globalmanager/controllers/manager.go<code>
</code>`<code>go
<p>// New creates the controller manager</p>
<p>func New(cc <em>config.ControllerConfig) </em>Manager {</p>
<p>   config.InitConfigure(cc)</p>
<p>   return &Manager{</p>
<p>      Config: cc,</p>
<p>   }</p>
<p>}</p>
</code>`<code>
</code>pkg/globalmanager/config/config.go<code>
</code>`<code>go
<p>// ControllerConfig indicates the config of controller</p>
<p>type ControllerConfig struct {</p>
<p>   // KubeAPIConfig indicates the kubernetes cluster info which controller will connected</p>
<p>   KubeConfig string </code>json:"kubeConfig,omitempty"<code></p>
<p>   // Master indicates the address of the Kubernetes API server. Overrides any value in KubeConfig.</p>
<p>   // such as https://127.0.0.1:8443</p>
<p>   // default ""</p>
<p>   Master string </code>json:"master"<code></p>
<p>   // Namespace indicates which namespace the controller listening to.</p>
<p>   // default ""</p>
<p>   Namespace string </code>json:"namespace,omitempty"<code></p>
<p>   // websocket server config</p>
<p>   // Since the current limit of kubeedge(1.5), GM needs to build the websocket channel for communicating between GM and LCs.</p>
<p>   WebSocket WebSocket </code>json:"websocket,omitempty"<code></p>
<p>   // lc config to info the worker</p>
<p>   LC LCConfig </code>json:"localController,omitempty"<code></p>
<p>   // kb config to info the worker</p>
<p>   KB KBConfig </code>json:"knowledgeBaseServer,omitempty"<code></p>
<p>   // period config min resync period</p>
<p>   // default 30s</p>
<p>   MinResyncPeriodSeconds int64 </code>json:"minResyncPeriodSeconds,omitempty"<code></p>
<p>}</p>
</code>`<code>
<h3>【3】 GM startup sequence</h3>
<p>Startup initializes the Sedna CRD client, wires edge–cloud messaging, starts per-feature controllers, and opens the WebSocket listener.</p>
</code>pkg/globalmanager/controllers/manager.go<code>
</code>`<code>go
<p>// Start starts the controllers it has managed</p>
<p>func (m *Manager) Start() error {</p>
<p>   ...</p>
<p>   // Initialize Sedna CRD client; controllers watch Sedna CR changes and react.</p>
<p>   sednaClient, err := clientset.NewForConfig(kubecfg)</p>
<p>   ...</p>
<p>   sednaInformerFactory := sednainformers.NewSharedInformerFactoryWithOptions(sednaClient, genResyncPeriod(minResyncPeriod), sednainformers.WithNamespace(namespace))</p>
<p>   // UpstreamController handles messages uploaded from edge LCs</p>
<p>   uc, _ := NewUpstreamController(context)</p>
<p>   downstreamSendFunc := messagelayer.NewContextMessageLayer().SendResourceObject</p>
<p>   stopCh := make(chan struct{})</p>
<p>   go uc.Run(stopCh)</p>
<p>   // For each feature (joint inference, lifelong learning, ...), bind message handlers</p>
<p>   for name, factory := range NewRegistry() {</p>
<p>      ...</p>
<p>      f.SetDownstreamSendFunc(downstreamSendFunc)</p>
<p>      f.SetUpstreamHandler(uc.Add)</p>
<p>      ...</p>
<p>      // Start that feature’s controller</p>
<p>      go f.Run(stopCh)</p>
<p>   }</p>
<p>   ...</p>
<p>   // GM WebSocket server, default 0.0.0.0:9000</p>
<p>   ws := websocket.NewServer(addr)</p>
<p>   ...</p>
<p>}</p>
</code>`<code>
<h3>【4】 Initialize the CRD client</h3>
</code>clientset.NewForConfig()<code> is implemented in </code>pkg/client/clientset/versioned/clientset.go<code>—generated from the Sedna API types for typed CRUD.
</code>New<code> for the lifelong-learning controller does roughly:
<li>Obtain the </code>LifelongLearningJob<code> informer—a local cache backed by the API server to reduce load.</li>
<li>Wire controller fields: Kubernetes client, Sedna client, shared GM config.</li>
<li>Register Add/Update/Delete handlers on </code>LifelongLearningJob<code>.</li>
</code>pkg/globalmanager/controllers/lifelonglearning/lifelonglearningjob.go<code>
</code>`<code>go
<p>// New creates a new LifelongLearningJob controller that keeps the relevant pods</p>
<p>// in sync with their corresponding LifelongLearningJob objects.</p>
<p>func New(cc *runtime.ControllerContext) (runtime.FeatureControllerI, error) {</p>
<p>   cfg := cc.Config</p>
<p>   podInformer := cc.KubeInformerFactory.Core().V1().Pods()</p>
<p>   // LifelongLearningJob informer</p>
<p>   jobInformer := cc.SednaInformerFactory.Sedna().V1alpha1().LifelongLearningJobs()</p>
<p>   eventBroadcaster := record.NewBroadcaster()</p>
<p>   eventBroadcaster.StartRecordingToSink(&v1core.EventSinkImpl{Interface: cc.KubeClient.CoreV1().Events("")})</p>
<p>   // Controller fields</p>
<p>   jc := &Controller{</p>
<p>      kubeClient: cc.KubeClient,</p>
<p>      client:     cc.SednaClient.SednaV1alpha1(),</p>
<p>      queue:      workqueue.NewNamedRateLimitingQueue(workqueue.NewItemExponentialFailureRateLimiter(runtime.DefaultBackOff, runtime.MaxBackOff), Name),</p>
<p>      cfg:        cfg,</p>
<p>   }</p>
<p>   // LifelongLearningJob Add/Update/Delete callbacks</p>
<p>   jobInformer.Informer().AddEventHandler(cache.ResourceEventHandlerFuncs{</p>
<p>      AddFunc: func(obj interface{}) {</p>
<p>         jc.enqueueController(obj, true)</p>
<p>         jc.syncToEdge(watch.Added, obj)</p>
<p>      },</p>
<p>      UpdateFunc: func(old, cur interface{}) {</p>
<p>         jc.enqueueController(cur, true)</p>
<p>         jc.syncToEdge(watch.Added, cur)</p>
<p>      },</p>
<p>      DeleteFunc: func(obj interface{}) {</p>
<p>         jc.enqueueController(obj, true)</p>
<p>         jc.syncToEdge(watch.Deleted, obj)</p>
<p>      },</p>
<p>   })</p>
<p>   jc.jobLister = jobInformer.Lister()</p>
<p>   jc.jobStoreSynced = jobInformer.Informer().HasSynced</p>
<p>   // Pod Add/Update/Delete callbacks</p>
<p>   podInformer.Informer().AddEventHandler(cache.ResourceEventHandlerFuncs{</p>
<p>      AddFunc:    jc.addPod,</p>
<p>      UpdateFunc: jc.updatePod,</p>
<p>      DeleteFunc: jc.deletePod,</p>
<p>   })</p>
<p>   jc.podStore = podInformer.Lister()</p>
<p>   jc.podStoreSynced = podInformer.Informer().HasSynced</p>
<p>   return jc, nil</p>
<p>}</p>
</code>`<code>
<p>The screenshot below shows other modules referencing the Sedna CRD client.</p>
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/KubeEdge-Sedna%E6%BA%90%E7%A0%81%E8%A7%A3%E6%9E%90%EF%BC%88%E8%BD%AC%E8%BD%BD%EF%BC%89_1770869748199.png" alt="" />
<h3>【5】 Message handling setup</h3>
</code>uc.Run()<code> drives the </code>UpstreamController<code>, which processes all messages from edges. A loop reads </code>context.upstreamChannel<code>; on each message it looks up </code>uc.updateHandlers[kind]<code> and dispatches. That map holds handlers for joint inference, incremental learning, federated learning, lifelong learning, etc.
</code>pkg/globalmanager/controllers/upstream.go<code>
</code>`<code>go
<p>// syncEdgeUpdate receives the updates from edge and syncs these to k8s.</p>
<p>func (uc *UpstreamController) syncEdgeUpdate() {</p>
<p>   for {</p>
<p>      select {</p>
<p>      case <-uc.messageLayer.Done():</p>
<p>         klog.Info("Stop sedna upstream loop")</p>
<p>         return</p>
<p>      default:</p>
<p>      }</p>
<p>      update, err := uc.messageLayer.ReceiveResourceUpdate()</p>
<p>	  ...</p>
<p>      handler, ok := uc.updateHandlers[kind]</p>
<p>      if ok {</p>
<p>         err := handler(name, namespace, operation, update.Content)</p>
<p>         ...</p>
<p>      }</p>
<p>   }</p>
<p>}</p>
</code>`<code>
</code>ReceiveFromEdge<code> blocks on a channel carrying </code>nodeMessage<code> values from edge LCs.
</code>pkg/globalmanager/messagelayer/ws/context.go<code>
</code>`<code>go
<p>// ReceiveResourceUpdate receives and handles the update</p>
<p>func (cml <em>ContextMessageLayer) ReceiveResourceUpdate() (</em>ResourceUpdateSpec, error) {</p>
<p>   nodeName, msg, err := wsContext.ReceiveFromEdge()</p>
<p>   ...</p>
<p>}</p>
</code>`<code>
<h3>【6】 Controller registry</h3>
</code>NewRegistry()<code> registers constructors for every feature; add a </code>New<code> function here when introducing a new collaborative capability.
</code>pkg/globalmanager/controllers/registry.go<code>
</code>`<code>go
<p>func NewRegistry() Registry {</p>
<p>   return Registry{</p>
<p>      ji.Name:      ji.New,</p>
<p>      fe.Name:      fe.New,</p>
<p>      fl.Name:      fl.New,</p>
<p>      il.Name:      il.New,</p>
<p>      ll.Name:      ll.New,</p>
<p>      reid.Name:    reid.New,</p>
<p>      va.Name:      va.New,</p>
<p>      dataset.Name: dataset.New,</p>
<p>      objs.Name:    objs.New,</p>
<p>   }</p>
<p>}</p>
</code>`<code>
<h3>【7】 Cloud → edge sync</h3>
</code>f.SetDownstreamSendFunc()<code> binds each feature’s </code>syncToEdge()<code> implementation.
<p>For lifelong learning, syncing roughly:</p>
<li>Resolve the node named on the </code>Dataset<code> CR.</li>
<li>Read train/eval/deploy node names from annotations.</li>
<li>Depending on the job stage, send to the appropriate node.</li>
</code>pkg/globalmanager/controllers/lifelonglearning/downstream.go<code>
</code>`<code>go
<p>func (c *Controller) syncToEdge(eventType watch.EventType, obj interface{}) error {</p>
<p>   // Dataset CR carries the target node name</p>
<p>   ds, err := c.client.Datasets(job.Namespace).Get(context.TODO(), dataName, metav1.GetOptions{})</p>
<p>   </p>
<p>   // Train / eval / deploy node names from annotations</p>
<p>   getAnnotationsNodeName := func(nodeName sednav1.LLJobStage) string {</p>
<p>      return runtime.AnnotationsKeyPrefix + string(nodeName)</p>
<p>   }</p>
<p>   ann := job.GetAnnotations()</p>
<p>   if ann != nil {</p>
<p>      trainNodeName = ann[getAnnotationsNodeName(sednav1.LLJobTrain)]</p>
<p>      evalNodeName = ann[getAnnotationsNodeName(sednav1.LLJobEval)]</p>
<p>      deployNodeName = ann[getAnnotationsNodeName(sednav1.LLJobDeploy)]</p>
<p>   }</p>
<p>   </p>
<p>   ...</p>
<p>   // Route by current stage</p>
<p>   switch jobStage {</p>
<p>   case sednav1.LLJobTrain:</p>
<p>      doJobStageEvent(trainNodeName)</p>
<p>   case sednav1.LLJobEval:</p>
<p>      doJobStageEvent(evalNodeName)</p>
<p>   case sednav1.LLJobDeploy:</p>
<p>      doJobStageEvent(deployNodeName)</p>
<p>   }</p>
<p>   return nil</p>
<p>}</p>
</code>`<code>
<h3>【8】 Edge → cloud sync</h3>
</code>f.SetUpstreamHandler()<code> binds each feature’s </code>updateFromEdge()<code>.
<p>For lifelong learning it:</p>
<li>Updates aggregate </code>LifelongLearningJob<code> status from per-edge progress.</li>
<li>Persists status to the API server (</code>Status<code> on the CR).</li>
<li>Parses JSON payloads from the edge—for example:</li>
</code>`<code>json
<p>{</p>
<p>    "phase": "train",</p>
<p>    "status": "completed",</p>
<p>    "output": {</p>
<p>        "models": [{</p>
<p>            "classes":  ["road", "fence"],</p>
<p>            "current_metric": null,</p>
<p>            "format": "pkl",</p>
<p>            "metrics": null,</p>
<p>            "url": "/output/train/1/index.pkl"</p>
<p>        }],</p>
<p>        "ownerInfo": null</p>
<p>    }</p>
<p>}</p>
</code>`<code>
</code>pkg/globalmanager/controllers/lifelonglearning/upstream.go<code>
</code>`<code>go
<p>// updateFromEdge syncs the edge updates to k8s</p>
<p>func (c *Controller) updateFromEdge(name, namespace, operation string, content []byte) error {</p>
<p>   var jobStatus struct {</p>
<p>      Phase  string </code>json:"phase"<code></p>
<p>      Status string </code>json:"status"<code></p>
<p>   }</p>
<p>   </p>
<p>   // Parse edge JSON</p>
<p>   err := json.Unmarshal(content, &jobStatus)</p>
<p>   ...</p>
<p>   cond := sednav1.LLJobCondition{</p>
<p>      Status:             v1.ConditionTrue,</p>
<p>      LastHeartbeatTime:  metav1.Now(),</p>
<p>      LastTransitionTime: metav1.Now(),</p>
<p>      Data:               string(condDataBytes),</p>
<p>      Message:            "reported by lc",</p>
<p>   }</p>
<p>   // Map edge status into LifelongLearningJob conditions</p>
<p>   switch strings.ToLower(jobStatus.Status) {</p>
<p>   case "ready":</p>
<p>      cond.Type = sednav1.LLJobStageCondReady</p>
<p>   case "completed":</p>
<p>      cond.Type = sednav1.LLJobStageCondCompleted</p>
<p>   case "failed":</p>
<p>      cond.Type = sednav1.LLJobStageCondFailed</p>
<p>   case "waiting":</p>
<p>      cond.Type = sednav1.LLJobStageCondWaiting</p>
<p>   default:</p>
<p>      return fmt.Errorf("invalid condition type: %v", jobStatus.Status)</p>
<p>   }</p>
<p>   // Write back Status on the LifelongLearningJob CR</p>
<p>   err = c.appendStatusCondition(name, namespace, cond)</p>
<p>   ...</p>
<p>}</p>
</code>`<code>
<h3>【9】 Core controller loop</h3>
</code>f.Run()<code> starts each feature controller. For lifelong learning, </code>Run()<code> waits for informer sync, then starts worker goroutines.
</code>pkg/globalmanager/controllers/lifelonglearning/lifelonglearningjob.go<code>
</code>`<code>go
<p>// Run starts the main goroutine responsible for watching and syncing jobs.</p>
<p>func (c *Controller) Run(stopCh <-chan struct{}) {</p>
<p>   workers := 1</p>
<p>   defer utilruntime.HandleCrash()</p>
<p>   defer c.queue.ShutDown()</p>
<p>   klog.Infof("Starting %s controller", Name)</p>
<p>   defer klog.Infof("Shutting down %s controller", Name)</p>
<p>   if !cache.WaitForNamedCacheSync(Name, stopCh, c.podStoreSynced, c.jobStoreSynced) {</p>
<p>      klog.Errorf("failed to wait for %s caches to sync", Name)</p>
<p>      return</p>
<p>   }</p>
<p>   klog.Infof("Starting %s workers", Name)</p>
<p>   for i := 0; i < workers; i++ {</p>
<p>      go wait.Until(c.worker, time.Second, stopCh)</p>
<p>   }</p>
<p>   <-stopCh</p>
<p>}</p>
</code>`<code>
</code>worker<code> calls </code>processNextWorkItem()<code> so </code>syncHandler<code> never runs the same key concurrently.
</code>pkg/globalmanager/controllers/lifelonglearning/lifelonglearningjob.go<code>
</code>`<code>go
<p>// worker runs a worker thread that just dequeues items, processes them, and marks them done.</p>
<p>// It enforces that the syncHandler is never invoked concurrently with the same key.</p>
<p>func (c *Controller) worker() {</p>
<p>   for c.processNextWorkItem() {</p>
<p>   }</p>
<p>}</p>
</code>`<code>
</code>processNextWorkItem()<code> invokes </code>sync()<code>.
</code>pkg/globalmanager/controllers/lifelonglearning/lifelonglearningjob.go<code>
</code>`<code>go
<p>func (c *Controller) sync(key string) (bool, error) {</p>
<p>   // Part of the implementation omitted</p>
<p>   ns, name, err := cache.SplitMetaNamespaceKey(key)</p>
<p>   sharedJob, err := c.jobLister.LifelongLearningJobs(ns).Get(name)</p>
<p>   // if job was finished previously, we don't want to redo the termination</p>
<p>   if IsJobFinished(&job) {</p>
<p>      return true, nil</p>
<p>   }</p>
<p>   // transit this job's state machine</p>
<p>   needUpdated, err = c.transitJobState(&job)</p>
<p>   if needUpdated {</p>
<p>      if err := c.updateJobStatus(&job); err != nil {</p>
<p>         return forget, err</p>
<p>      }</p>
<p>      if jobFailed && !IsJobFinished(&job) {</p>
<p>         // returning an error will re-enqueue LifelongLearningJob after the backoff period</p>
<p>         return forget, fmt.Errorf("failed pod(s) detected for lifelonglearningjob key %q", key)</p>
<p>      }</p>
<p>      forget = true</p>
<p>   }</p>
<p>   return forget, err</p>
<p>}</p>
</code>`<code>
</code>sync<code> handles a single job:
<li>Split the work-queue key into namespace and name.</li>
<li>Load the </code>LifelongLearningJob<code> via the lister.</li>
<li></code>transitJobState<code> advances train → eval → deploy as appropriate.</li>
<li>If status changed, </code>updateJobStatus<code> writes it back so </code>kubectl<code> reflects current phase, model paths, etc.</li>
<li>Handle failures and retries.</li>
</code>`<code>go
<p>// transit this job's state machine</p>
<p>needUpdated, err = c.transitJobState(&job)</p>
</code>`<code>
</code>transitJobState()<code> is the state machine—when each stage starts/stops. Use the diagram below together with the code.
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/KubeEdge-Sedna%E6%BA%90%E7%A0%81%E8%A7%A3%E6%9E%90%EF%BC%88%E8%BD%AC%E8%BD%BD%EF%BC%89_1770869748995.png" alt="" />
<h3>【10】 WebSocket server</h3>
<p>GM listens for edge messages (as in 【8】), defaulting to </code>0.0.0.0:9000<code>.</p>
</code>pkg/globalmanager/controllers/manager.go<code>
</code>`<code>go
<p>addr := fmt.Sprintf("%s:%d", m.Config.WebSocket.Address, m.Config.WebSocket.Port)</p>
<p>ws := websocket.NewServer(addr)</p>
<p>err = ws.ListenAndServe()</p>
</code>`<code>
<h2>LC: Local Controller</h2>
<p>LC runs on edge nodes for local job management and message relaying. The binary entry is </code>cmd/sedna-lc/sedna-lc.go<code> (same pattern as GM). Below is where feature managers are registered:</p>
</code>cmd/sedna-lc/app/server.go<code>
</code>`<code>go
<p>// runServer runs server</p>
<p>func runServer() {</p>
<p>   c := gmclient.NewWebSocketClient(Options)</p>
<p>   if err := c.Start(); err != nil {</p>
<p>      return</p>
<p>   }</p>
<p>   dm := dataset.New(c, Options)</p>
<p>   mm := model.New(c)</p>
<p>   jm := jointinference.New(c)</p>
<p>   fm := federatedlearning.New(c)</p>
<p>   im := incrementallearning.New(c, dm, mm, Options)</p>
<p>   lm := lifelonglearning.New(c, dm, Options)</p>
<p>   s := server.New(Options)</p>
<p>   for _, m := range []managers.FeatureManager{</p>
<p>      dm, mm, jm, fm, im, lm,</p>
<p>   } {</p>
<p>      s.AddFeatureManager(m)</p>
<p>      c.Subscribe(m)</p>
<p>      err := m.Start()</p>
<p>      if err != nil {</p>
<p>         klog.Errorf("failed to start manager %s: %v",</p>
<p>            m.GetName(), err)</p>
<p>         return</p>
<p>      }</p>
<p>      klog.Infof("manager %s is started", m.GetName())</p>
<p>   }</p>
<p>   s.ListenAndServe()</p>
<p>}</p>
</code>`<code>
<h3>Local job management</h3>
<p>The </code>Manager<code> struct models edge-side job management:</p>
</code>pkg/localcontroller/managers/lifelonglearning/lifelonglearningjob.go<code>
</code>`<code>go
<p>// LifelongLearningJobManager defines lifelong-learning-job Manager</p>
<p>type Manager struct {</p>
<p>   Client                 clienttypes.ClientI</p>
<p>   WorkerMessageChannel   chan workertypes.MessageContent</p>
<p>   DatasetManager         *dataset.Manager</p>
<p>   LifelongLearningJobMap map[string]*Job</p>
<p>   VolumeMountPrefix      string</p>
<p>}</p>
</code>`<code>
</code>startJob()<code> illustrates the flow:
<li>Watch synced </code>Dataset<code> objects—for example trigger training when sample count crosses a threshold.</li>
<li>Per stage, drive train/eval/deploy by reporting state up to GM; GM schedules the actual workloads rather than LC starting pods directly.</li>
</code>pkg/localcontroller/managers/lifelonglearning/lifelonglearningjob.go<code>
</code>`<code>go
<p>// startJob starts a job</p>
<p>func (lm *Manager) startJob(name string) {</p>
<p>   ...</p>
<p>    </p>
<p>   // Watch Dataset CRs synced to the edge</p>
<p>   go lm.handleData(job)</p>
<p>   tick := time.NewTicker(JobIterationIntervalSeconds * time.Second)</p>
<p>   for {</p>
<p>      // Drive train/eval/deploy by stage</p>
<p>      select {</p>
<p>      case <-job.JobConfig.Done:</p>
<p>         return</p>
<p>      case <-tick.C:</p>
<p>         cond := lm.getLatestCondition(job)</p>
<p>         jobStage := cond.Stage</p>
<p>         switch jobStage {</p>
<p>         case sednav1.LLJobTrain:</p>
<p>            err = lm.trainTask(job)</p>
<p>         case sednav1.LLJobEval:</p>
<p>            err = lm.evalTask(job)</p>
<p>         case sednav1.LLJobDeploy:</p>
<p>            err = lm.deployTask(job)</p>
<p>         default:</p>
<p>            klog.Errorf("invalid phase: %s", jobStage)</p>
<p>            continue</p>
<p>         }</p>
<p>		 ...</p>
<p>      }</p>
<p>   }</p>
<p>}</p>
</code>`<code>
<p>Beyond orchestration, LC also covers dataset monitoring, model downloads, and local persistence.</p>
<h3>Message proxy</h3>
<p>Besides pushing status to the cloud, LC serves HTTP on </code>0.0.0.0:9100<code>, aggregating messages from the Lib SDK before forwarding to GM. Route registration:</p>
</code>pkg/localcontroller/server/server.go<code>
</code>`<code>go
<p>// register registers api</p>
<p>func (s <em>Server) register(container </em>restful.Container) {</p>
<p>	ws := new(restful.WebService)</p>
<p>	ws.Path(fmt.Sprintf("/%s", constants.ServerRootPath)).</p>
<p>		Consumes(restful.MIME_XML, restful.MIME_JSON).</p>
<p>		Produces(restful.MIME_JSON, restful.MIME_XML)</p>
<p>	ws.Route(ws.POST("/workers/{worker-name}/info").</p>
<p>		To(s.messageHandler).</p>
<p>		Doc("receive worker message"))</p>
<p>	container.Add(ws)</p>
<p>}</p>
</code>`<code>
</code>pkg/localcontroller/server/server.go<code>
</code>`<code>go
<p>// messageHandler handles message from the worker</p>
<p>func (s <em>Server) messageHandler(request </em>restful.Request, response *restful.Response) {</p>
<p>   var err error</p>
<p>   workerName := request.PathParameter("worker-name")</p>
<p>   workerMessage := workertypes.MessageContent{}</p>
<p>   err = request.ReadEntity(&workerMessage)</p>
<p>   if workerMessage.Name != workerName || err != nil {</p>
<p>      var msg string</p>
<p>      if workerMessage.Name != workerName {</p>
<p>         msg = fmt.Sprintf("worker name(name=%s) in the api is different from that(name=%s) in the message body",</p>
<p>            workerName, workerMessage.Name)</p>
<p>      } else {</p>
<p>         msg = fmt.Sprintf("read worker(name=%s) message body failed, error: %v", workerName, err)</p>
<p>      }</p>
<p>      klog.Errorf(msg)</p>
<p>      err = s.reply(response, http.StatusBadRequest, msg)</p>
<p>      if err != nil {</p>
<p>         klog.Errorf("reply messge to worker(name=%s) failed, error: %v", workerName, err)</p>
<p>      }</p>
<p>   }</p>
<p>   if m, ok := s.fmm[workerMessage.OwnerKind]; ok {</p>
<p>      m.AddWorkerMessage(workerMessage)</p>
<p>   }</p>
<p>   err = s.reply(response, http.StatusOK, "OK")</p>
<p>   if err != nil {</p>
<p>      klog.Errorf("reply message to worker(name=%s) failed, error: %v", workerName, err)</p>
<p>      return</p>
<p>   }</p>
<p>}</p>
</code>`<code>
<h1>Sedna Lib source (Python)</h1>
<p>Lib is the Python SDK for AI and application developers to adapt existing code to edge–cloud collaboration.</p>
<p>Directory layout:</p>
</code>`<code>plain text
<p>➜  sedna tree lib -L 2</p>
<p>lib</p>
<p>├── __init__.py</p>
<p>├── MANIFEST.in</p>
<p>├── OWNERS</p>
<p>├── requirements.dev.txt</p>
<p>├── requirements.txt    // Sedna Python dependencies</p>
<p>├── sedna</p>
<p>│   ├── algorithms  // Collaborative algorithms</p>
<p>│   ├── backend     // Backends: TensorFlow, PyTorch, ...</p>
<p>│   ├── common</p>
<p>│   ├── core        // Feature implementations</p>
<p>│   ├── datasources // Formats such as txt, csv</p>
<p>│   ├── __init__.py</p>
<p>│   ├── README.md</p>
<p>│   ├── service     // Components that run servers (e.g. KB)</p>
<p>│   ├── VERSION</p>
<p>│   └── __version__.py</p>
<p>└── setup.py</p>
</code>`<code>
<p>Highlights by area:</p>
<h3></code>core<code></h3>
</code>core<code> wraps user callbacks. The </code>train<code> path below wires post-processing, cloud knowledge-base training/inference, KB updates (lifelong learning continuously refreshes models and samples), and LC reporting (completion, metrics).
</code>lib/sedna/core/lifelong_learning/lifelong_learning.py<code>
</code>`<code>python
<p>def train(self, train_data,</p>
<p>          valid_data=None,</p>
<p>          post_process=None,</p>
<p>          **kwargs):</p>
<p>    is_completed_initilization = \</p>
<p>        str(Context.get_parameters("HAS_COMPLETED_INITIAL_TRAINING",</p>
<p>                                   "false")).lower()</p>
<p>    if is_completed_initilization == "true":</p>
<p>        return self.update(train_data,</p>
<p>                           valid_data=valid_data,</p>
<p>                           post_process=post_process,</p>
<p>                           **kwargs)</p>
<p>    # Configure post-processing callback</p>
<p>    callback_func = None</p>
<p>    if post_process is not None:</p>
<p>        callback_func = ClassFactory.get_cls(</p>
<p>            ClassType.CALLBACK, post_process)</p>
<p>    res, seen_task_index = \</p>
<p>        self.cloud_knowledge_management.seen_estimator.train(</p>
<p>            train_data=train_data,</p>
<p>            valid_data=valid_data,</p>
<p>            **kwargs</p>
<p>        ) </p>
<p>    # Train against cloud knowledge base (seen + unseen paths)</p>
<p>    unseen_res, unseen_task_index = \</p>
<p>        self.cloud_knowledge_management.unseen_estimator.train()</p>
<p>    # Refresh cloud KB indices</p>
<p>    task_index = dict(</p>
<p>        seen_task=seen_task_index,</p>
<p>        unseen_task=unseen_task_index)</p>
<p>    task_index_url = FileOps.dump(</p>
<p>        task_index, self.cloud_knowledge_management.local_task_index_url)</p>
<p>    task_index = self.cloud_knowledge_management.update_kb(task_index_url)</p>
<p>    res.update(unseen_res)</p>
<p>    ...</p>
<p>    </p>
<p>    # Report status to LC (completion, metrics, ...)</p>
<p>    self.report_task_info(</p>
<p>            None, K8sResourceKindStatus.COMPLETED.value, task_info_res)</p>
<p>        self.log.info(f"Lifelong learning Train task Finished, "</p>
<p>                      f"KB index save in {task_index}")</p>
<p>        return callback_func(self.estimator, res) if callback_func else res</p>
<p>    </p>
<p>    ...</p>
</code>`<code>
<h3></code>backend<code></h3>
</code>MSBackend<code> shows how Sedna plugs in MindSpore: implement </code>train<code>, </code>predict<code>, and </code>evaluate<code> and Lib can treat a framework as a backend, enabling thin wrappers around existing AI code for collaboration.
</code>lib/sedna/backend/mindspore/__init__.py<code>
</code>`<code>python
<p>class MSBackend(BackendBase):</p>
<p>    def __init__(self, estimator, fine_tune=True, **kwargs):</p>
<p>        super(MSBackend, self).__init__(estimator=estimator,</p>
<p>                                        fine_tune=fine_tune,</p>
<p>                                        **kwargs)</p>
<p>        self.framework = "mindspore"</p>
<p>        if self.use_npu:</p>
<p>            context.set_context(mode=context.GRAPH_MODE,</p>
<p>                                device_target="Ascend")</p>
<p>        elif self.use_cuda:</p>
<p>            context.set_context(mode=context.GRAPH_MODE,</p>
<p>                                device_target="GPU")</p>
<p>        else:</p>
<p>            context.set_context(mode=context.GRAPH_MODE,</p>
<p>                                device_target="CPU")</p>
<p>        if callable(self.estimator):</p>
<p>            self.estimator = self.estimator()</p>
<p>    def train(self, train_data, valid_data=None, **kwargs):</p>
<p>        if callable(self.estimator):</p>
<p>            self.estimator = self.estimator()</p>
<p>        if self.fine_tune and FileOps.exists(self.model_save_path):</p>
<p>            self.finetune()</p>
<p>        self.has_load = True</p>
<p>        varkw = self.parse_kwargs(self.estimator.train, **kwargs)</p>
<p>        return self.estimator.train(train_data=train_data,</p>
<p>                                    valid_data=valid_data,</p>
<p>                                    **varkw)</p>
<p>    def predict(self, data, **kwargs):</p>
<p>        if not self.has_load:</p>
<p>            self.load()</p>
<p>        varkw = self.parse_kwargs(self.estimator.predict, **kwargs)</p>
<p>        return self.estimator.predict(data=data, **varkw)</p>
<p>    def evaluate(self, data, **kwargs):</p>
<p>        if not self.has_load:</p>
<p>            self.load()</p>
<p>        varkw = self.parse_kwargs(self.estimator.evaluate, **kwargs)</p>
<p>        return self.estimator.evaluate(data, **varkw)</p>
</code>`<code>
<h3></code>datasource<code></h3>
</code>datasource<code> packages common dataset parsers so callers avoid boilerplate.
</code>lib/sedna/datasources/__init__.py<code>
</code>`<code>python
<p>class CSVDataParse(BaseDataSource, ABC):</p>
<p>    """</p>
<p>    csv file which contain Structured Data parser</p>
<p>    """</p>
<p>    # Helpers to parse tabular datasets</p>
<p>    def parse(self, <em>args, </em>*kwargs):</p>
<p>        x_data = []</p>
<p>        y_data = []</p>
<p>        label = kwargs.pop("label") if "label" in kwargs else ""</p>
<p>        usecols = kwargs.get("usecols", "")</p>
<p>        if usecols and isinstance(usecols, str):</p>
<p>            usecols = usecols.split(",")</p>
<p>        if len(usecols):</p>
<p>            if label and label not in usecols:</p>
<p>                usecols.append(label)</p>
<p>            kwargs["usecols"] = usecols</p>
<p>        for f in args:</p>
<p>            if isinstance(f, (dict, list)):</p>
<p>                res = self.parse_json(f, **kwargs)</p>
<p>            else:</p>
<p>                if not (f and FileOps.exists(f)):</p>
<p>                    continue</p>
<p>                res = pd.read_csv(f, **kwargs)</p>
<p>            if self.process_func and callable(self.process_func):</p>
<p>                res = self.process_func(res)</p>
<p>            if label:</p>
<p>                if label not in res.columns:</p>
<p>                    continue</p>
<p>                y = res[label]</p>
<p>                y_data.append(y)</p>
<p>                res.drop(label, axis=1, inplace=True)</p>
<p>            x_data.append(res)</p>
<p>        if not x_data:</p>
<p>            return</p>
<p>        self.x = pd.concat(x_data)</p>
<p>        self.y = pd.concat(y_data)</p>
</code>`<code>
<h3></code>algorithms<code></h3>
<p>Sedna ships algorithms tuned for edge–cloud settings—for example cross-entropy thresholding to flag low-confidence detections. The goal is not only bundled baselines but an extensible surface for new algorithms that improve end-to-end training and inference.</p>
</code>lib/sedna/algorithms/hard_example_mining/hard_example_mining.py<code>
</code>`<code>python
<p>@ClassFactory.register(ClassType.HEM, alias="CrossEntropy")</p>
<p>class CrossEntropyFilter(BaseFilter, abc.ABC):</p>
<p>    """</p>
<p>    <strong>Object detection</strong> Hard samples discovery methods named </code>CrossEntropy<code></p>
<p>    Parameters</p>
<p>    ----------</p>
<p>    threshold_cross_entropy: float</p>
<p>        hard coefficient threshold score to filter img, default to 0.5.</p>
<p>    """</p>
<p>    def __init__(self, threshold_cross_entropy=0.5, **kwargs):</p>
<p>        self.threshold_cross_entropy = float(threshold_cross_entropy)</p>
<p>    def __call__(self, infer_result=None) -> bool:</p>
<p>        """judge the img is hard sample or not.</p>
<p>        Parameters</p>
<p>        ----------</p>
<p>        infer_result: array_like</p>
<p>            prediction classes list, such as</p>
<p>            [class1-score, class2-score, class2-score,....],</p>
<p>            where class-score is the score corresponding to the class,</p>
<p>            class-score value is in [0,1], who will be ignored if its</p>
<p>            value not in [0,1].</p>
<p>        Returns</p>
<p>        -------</p>
<p>        is hard sample: bool</p>
<p>            </code>True<code> means hard sample, </code>False<code> means not.</p>
<p>        """</p>
<p>        if not infer_result:</p>
<p>            # if invalid input, return False</p>
<p>            return False</p>
<p>        log_sum = 0.0</p>
<p>        data_check_list = [class_probability for class_probability</p>
<p>                           in infer_result</p>
<p>                           if self.data_check(class_probability)]</p>
<p>        if len(data_check_list) != len(infer_result):</p>
<p>            return False</p>
<p>        for class_data in data_check_list:</p>
<p>            log_sum += class_data * math.log(class_data)</p>
<p>        confidence_score = 1 + 1.0 * log_sum / math.log(</p>
<p>            len(infer_result))</p>
<p>        return confidence_score < self.threshold_cross_entropy</p>
</code>``
<hr />
<p>1. <a href="https://www.redhat.com/en/topics/containers/what-is-a-kubernetes-operator">https://www.redhat.com/en/topics/containers/what-is-a-kubernetes-operator</a></p>
<p>2. <a href="https://developers.redhat.com/articles/2021/06/22/kubernetes-operators-101-part-2-how-operators-work">https://developers.redhat.com/articles/2021/06/22/kubernetes-operators-101-part-2-how-operators-work</a></p>]]></content>
        <author>
            <name>Shemol</name>
            <email>shemol106@gmail.com</email>
            <uri>https://shemol.tech</uri>
        </author>
    </entry>
    <entry>
        <title type="html"><![CDATA[炎拳-en]]></title>
        <id>https://shemol.tech/fire-punch-en</id>
        <link href="https://shemol.tech/fire-punch-en"/>
        <updated>2025-01-07T00:00:00.000Z</updated>
        <summary type="html"><![CDATA[Read Fire Punch. Loved it.]]></summary>
        <content type="html"><![CDATA[<h1>Fire Punch</h1>
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869252463.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869253503.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869254609.png" alt="" />
<p>Couldn't hold it together in chapter one…</p>
<p>Switched to a PT source—this is how it’s translated…</p>
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869255654.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869256745.png" alt="" />
<p>LMAO…</p>
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869258164.png" alt="" />
<p>Love how Fujimoto draws eyes.</p>
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869259907.png" alt="" />
<p>The greater good, huh.</p>
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869260972.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869261944.png" alt="" />
<p>Because they eat human flesh, they have to kill them all…</p>
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869263274.png" alt="" />
<p>Terrifying…</p>
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869270079.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869271295.png" alt="" />
<p>So beautiful…</p>
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869272849.png" alt="" />
<p>The curse called living on…</p>
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869274123.png" alt="" />
<p>Even if you suffer every kind of pain, you have to resist death.</p>
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869275398.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869276861.png" alt="" />
<p>Ohhh, so that’s how it is.</p>
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869278216.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869279295.png" alt="" />
<p>Absolutely idiotic.</p>
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869280513.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869281851.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869283113.png" alt="" />
<p>Really?</p>
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869284254.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869285521.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869286779.png" alt="" />
<p>Who’s this?</p>
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869287905.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869289178.png" alt="" />
<p>It’s her brother…</p>
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869290705.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869292601.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869293806.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869295261.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869297407.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869298730.png" alt="" />
<p>I see… so that’s why everyone let San go…</p>
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869299979.png" alt="" />
<p>Is this explaining gender?</p>
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869301158.png" alt="" />
<p>So decisive—chopped his own arm off right away. He can regenerate too…</p>
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869302417.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869303681.png" alt="" />
<p>???</p>
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869305243.png" alt="" />
<p>Love it.</p>
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869310702.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869312172.png" alt="" />
<p>That’s way too twisted? A million question marks.</p>
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869313467.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869314785.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869316136.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869317501.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869318992.png" alt="" />
<p>…</p>
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869320002.png" alt="" />
<p>The covers all look great.</p>
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869322346.png" alt="" />
<p>The mysterious filmmaker.</p>
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869323264.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869324665.png" alt="" />
<p>Detail: picking his nose.</p>
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869325713.png" alt="" />
<p>Playing with people’s hearts!</p>
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869326955.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869328078.png" alt="" />
<p>LMAO.</p>
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869333507.png" alt="" />
<p>So good, so good, so good—really love this panel and the dialogue.</p>
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869335220.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869336344.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869337802.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869339014.png" alt="" />
<p>The director’s right—this guy’s face is hilarious.</p>
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869340460.png" alt="" />
<p>LMAO last page they were just chatting, brief blackout and then this spread—so good.</p>
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869348828.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869350334.png" alt="" />
<p>Rage! So good!</p>
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869351878.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869353222.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869354502.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869355580.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869356869.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869358078.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869359229.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869360536.png" alt="" />
<p>Made me frown hard…</p>
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869361928.png" alt="" />
<p>Already lost his mind…</p>
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869364110.png" alt="" />
<p>This guy’s pouring fuel on the fire…</p>
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869365435.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869366864.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869368752.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869370304.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869371955.png" alt="" />
<p>Hard truths sting! This one’s the real deal!</p>
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869373086.png" alt="" />
<p>Exactly.</p>
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869374145.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869375347.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869376735.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869378123.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869379417.png" alt="" />
<p>Are we batteries too… Is this me…</p>
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869381702.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869382969.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869384476.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869386454.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869387914.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869389235.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869392545.png" alt="" />
<p>What a bastard…</p>
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869394470.png" alt="" />
<p>!!!!</p>
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869395839.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869397127.png" alt="" />
<p>I’m dying laughing, for real.</p>
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869398485.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869399698.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869401115.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869402535.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869403920.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869407784.png" alt="" />
<p>Ah—for your movie…</p>
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869409034.png" alt="" />
<p>How did it come to this…</p>
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869410471.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869412011.png" alt="" />
<p>What a bastard…</p>
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869413128.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869414316.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869415792.png" alt="" />
<p>Gorgeous.</p>
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869417406.png" alt="" />
<p>Nice visuals.</p>
<p>Does Fujimoto smoke? There’s always smoking in the panels.</p>
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869418792.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869420166.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869421529.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869422569.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869423948.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869425237.png" alt="" />
<p>I want to save them!</p>
<p>See—as long as you’re alive, things might still change.</p>
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869426771.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869428765.png" alt="" />
<p>I don’t want to lose to the evil in this world.</p>
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869430146.png" alt="" />
<p>Scripts are just the director’s wishful thinking—I care more about the actors’ will than the script.</p>
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869431402.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869432885.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869433985.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869435394.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869436820.png" alt="" />
<p>An idiot talking.</p>
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869438332.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869440834.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869442184.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869443379.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869444772.png" alt="" />
<p>LMAO.</p>
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869446183.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869447662.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869449064.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869450447.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869451755.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869453056.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869454513.png" alt="" />
<p>Magneto, is that your main account?</p>
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869456144.png" alt="" />
<p>Same trick again—in Chainsaw Man it was the heart, here it’s the head!</p>
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869457445.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869463685.png" alt="" />
<p>So it is, Director.</p>
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869465025.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869466440.png" alt="" />
<p>…</p>
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869467803.png" alt="" />
<p>This bit is really interesting—I kept thinking she was Luna, but the moment she said she was Luna, I realized she wasn’t.</p>
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869469404.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869470510.png" alt="" />
<p>The environment numbs you.</p>
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869472027.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869473267.png" alt="" />
<p>Why! Please, tell me.</p>
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869474755.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869476157.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869477433.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869478842.png" alt="" />
<p>It really is the Ice Witch!</p>
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869480382.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869481864.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869483093.png" alt="" />
<p>Thanks for playing along…</p>
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869484278.png" alt="" />
<p>Insane.</p>
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869485690.jpg" alt="" />
<p>Dying laughing.</p>
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869487151.png" alt="" />
<p>Dying laughing.</p>
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869488696.png" alt="" />
<p>Dying laughing, you absolute moron.</p>
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869489902.png" alt="" />
<p>I’m gonna die laughing.</p>
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869491200.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869492867.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869494181.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869495558.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869496727.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869497738.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869499047.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869500286.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869501723.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869503072.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869504697.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869506108.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869507318.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869508642.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869510008.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869511209.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869512594.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869514028.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869516409.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869517929.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869519157.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869520444.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869521838.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869525049.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869528940.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869530517.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869532062.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869533671.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869535308.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869539260.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869540515.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869541875.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869543247.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869544590.png" alt="" />
<p>So funny.</p>
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869546165.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869547684.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869549008.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869550599.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869551978.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869553361.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869554888.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869556239.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869558092.png" alt="" />
<p>Dead.</p>
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869559291.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869560486.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869561764.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869563005.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869564626.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869566097.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869567484.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869568808.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869570100.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869571129.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869572415.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869573976.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869575242.png" alt="" />
<p>I already knew they’d say: live.</p>
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869576533.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869577951.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869579740.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869581132.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869582743.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869584044.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869585418.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869587073.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869588474.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869589914.png" alt="" />
<p>So is she Luna or not…</p>
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869591165.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869592765.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869595802.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869597100.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869598782.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869600118.png" alt="" />
<p>Iconic moment…</p>
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869601386.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869602967.png" alt="" />
<p>LMAO where even is this???</p>
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869604391.png" alt="" />
<p>Lying?</p>
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869606143.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869607375.png" alt="" />
<p>Oh—not her sister, that person just went simple.</p>
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869608861.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869610199.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869611506.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869613060.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869614516.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869615947.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869620560.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869621976.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869623420.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869625008.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869626315.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869627730.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869629422.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869630872.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869632348.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869633904.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869635401.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869637562.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869639114.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869640408.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869641905.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869643377.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869644981.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869647455.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869648789.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869650132.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869653935.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869658832.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869660207.png" alt="" />
<p>That person’s child…</p>
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869661955.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869663297.png" alt="" />
<p>So clever, and yet feels totally unhinged…</p>
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1771040373576.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869697405.png" alt="" />
<p>Everyone’s unhinged.</p>
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869698915.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869700472.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869703744.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869705085.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869706614.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869708151.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869709388.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869710844.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869712200.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869713779.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869715133.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869716660.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869718056.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869719495.png" alt="" />
<p>???</p>
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869720863.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869722288.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869723717.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869725057.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869727025.png" alt="" />]]></content>
        <author>
            <name>Shemol</name>
            <email>shemol106@gmail.com</email>
            <uri>https://shemol.tech</uri>
        </author>
    </entry>
</feed>