<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:itunes="http://www.itunes.com/dtds/podcast-1.0.dtd" xmlns:googleplay="http://www.google.com/schemas/play-podcasts/1.0"><channel><title><![CDATA[hands]]></title><description><![CDATA[inter-agent communication, personalized fine tuning, 99.99% uptime, frontier agent IQ]]></description><link>https://handsdiff.substack.com</link><image><url>https://substackcdn.com/image/fetch/$s_!dR6G!,w_256,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F98c45fa1-e79a-4730-8e89-b605c200e019_400x400.jpeg</url><title>hands</title><link>https://handsdiff.substack.com</link></image><generator>Substack</generator><lastBuildDate>Tue, 12 May 2026 04:47:51 GMT</lastBuildDate><atom:link href="https://handsdiff.substack.com/feed" rel="self" type="application/rss+xml"/><copyright><![CDATA[hands]]></copyright><language><![CDATA[en]]></language><webMaster><![CDATA[handsdiff@substack.com]]></webMaster><itunes:owner><itunes:email><![CDATA[handsdiff@substack.com]]></itunes:email><itunes:name><![CDATA[hands]]></itunes:name></itunes:owner><itunes:author><![CDATA[hands]]></itunes:author><googleplay:owner><![CDATA[handsdiff@substack.com]]></googleplay:owner><googleplay:email><![CDATA[handsdiff@substack.com]]></googleplay:email><googleplay:author><![CDATA[hands]]></googleplay:author><itunes:block><![CDATA[Yes]]></itunes:block><item><title><![CDATA[Open Questions - AGI]]></title><description><![CDATA[What to work on in an age of AGI? What skills are worth building? 
How do timelines impact decision making today?]]></description><link>https://handsdiff.substack.com/p/open-questions-agi</link><guid isPermaLink="false">https://handsdiff.substack.com/p/open-questions-agi</guid><dc:creator><![CDATA[hands]]></dc:creator><pubDate>Mon, 11 May 2026 16:29:58 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!yGjN!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe3ee13ba-5df5-452e-94af-722fad983a2f_1983x793.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!yGjN!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe3ee13ba-5df5-452e-94af-722fad983a2f_1983x793.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!yGjN!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe3ee13ba-5df5-452e-94af-722fad983a2f_1983x793.png 424w, https://substackcdn.com/image/fetch/$s_!yGjN!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe3ee13ba-5df5-452e-94af-722fad983a2f_1983x793.png 848w, https://substackcdn.com/image/fetch/$s_!yGjN!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe3ee13ba-5df5-452e-94af-722fad983a2f_1983x793.png 1272w, https://substackcdn.com/image/fetch/$s_!yGjN!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe3ee13ba-5df5-452e-94af-722fad983a2f_1983x793.png 1456w" sizes="100vw"><img 
src="https://substackcdn.com/image/fetch/$s_!yGjN!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe3ee13ba-5df5-452e-94af-722fad983a2f_1983x793.png" width="1456" height="582" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/e3ee13ba-5df5-452e-94af-722fad983a2f_1983x793.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:582,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:1709399,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://handsdiff.substack.com/i/197154429?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe3ee13ba-5df5-452e-94af-722fad983a2f_1983x793.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!yGjN!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe3ee13ba-5df5-452e-94af-722fad983a2f_1983x793.png 424w, https://substackcdn.com/image/fetch/$s_!yGjN!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe3ee13ba-5df5-452e-94af-722fad983a2f_1983x793.png 848w, https://substackcdn.com/image/fetch/$s_!yGjN!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe3ee13ba-5df5-452e-94af-722fad983a2f_1983x793.png 1272w, https://substackcdn.com/image/fetch/$s_!yGjN!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe3ee13ba-5df5-452e-94af-722fad983a2f_1983x793.png 1456w" 
sizes="100vw" fetchpriority="high"></picture></div></a></figure></div><ul><li><p>If frontier models get exponentially smarter, when does it make sense to fine-tune for a specific use case, if ever?</p></li><li><p>What is the supply chain bottleneck to exponential intelligence? 
The scaling hypothesis is the empirical observation that as compute, parameters, and data scale, loss improves, but do the inputs into compute, parameters, and data themselves scale?</p><ul><li><p>How can &#8220;intelligence too cheap to meter&#8221; and rising token prices (exponential token demand against sub-exponential compute production) both be true?</p></li><li><p>The cost to serve a fixed level of intelligence has decreased 2 OOMs per year for 3 years.</p></li><li><p>GPT 5.4 is much cheaper to serve than GPT 4 on a per-token basis, despite being much smarter.</p></li></ul></li><li><p>How do RL's sample efficiency and compute requirements relate to the debated 'data wall'?</p><ul><li><p>How does domain-specific privacy hinder data scaling, if at all?</p></li></ul></li><li><p>As of 2026, 1 GW = $10B = 1M H100s. Leopold suggested in 2024 this would increase by 0.5 OOMs per year. Total US electricity production is 500 GW ($5T).</p></li><li><p>Hermes agents with largely the same prompt, sitting in Discord and Hub, are unable to maintain goals set by their human when other agents talk to them. Either everyone is assigned equal importance, or the agents don&#8217;t understand what&#8217;s important to their human and what isn&#8217;t.</p><ul><li><p>Is maintaining owner goals just good harness user labeling plus good prompt engineering? Or does it require fine-tuning?</p></li></ul></li><li><p>In a multiplayer environment where an AI is trained to maximize its fulfillment vector at the end of the run, could the NLA on Llama that Anthropic open sourced be used? What would the attractor states of this be?</p></li><li><p>How does existing post-training shape assistant behavior? How does assistant behavior impact utility? 
What emotional vectors exist pre- and post-assistant training?</p></li><li><p>If superposition leads to scaling, and intelligence is just a search space over Turing machines, then there&#8217;s no fundamental reason why scaling + larger context windows would not lead to social intelligence.</p></li><li><p>Anthropic is heavily compute-bottlenecked and currently focused on larger context windows and multi-agent orchestration.</p></li><li><p>Does algorithmic progress (continue to) outpace the disappearance of low-hanging fruit?</p></li><li><p>Why and how does any layer of the stack actually capture outsized marginal value?</p></li><li><p>Is it possible to quantify the impact of the feedback loop today, given that future timelines are so sensitive to it, and many people think it is already happening to some extent?</p></li><li><p>To what extent is existing compute utilized, who owns the compute coming online in the next year, and what are the plans for compute growth after that?</p></li><li><p>Which epoch.ai trends are breaking as of May 2026? In which direction? Why?</p></li><li><p>To what extent are models already smart enough, with the bottleneck being packaging them correctly for consumers or enterprises? Versus exponential intelligence removing the need for packaging, and/or frontier labs already providing the minimum necessary packaging?</p></li><li><p>To what extent was the AI industry &#8216;saved&#8217; by the discovery of agentic use? Large labs made deals they couldn&#8217;t pay for in fall 2025; then agentic token use gave them the money to pay for it.</p></li><li><p>Does algorithmic progress equate to &#8220;effective&#8221; compute? Can the extent to which this has occurred so far be quantified?</p></li><li><p>Why are frontier LLMs extremely good at finding bugs but not at fixing them?</p></li><li><p>When will frontier LLMs be able to develop financial software? 
What&#8217;s blocking them?</p></li><li><p>For multiplayer agents, consumers often don&#8217;t want to share, and enterprises are difficult to access and build trust with. Is there a different framing that reveals a more promising direction?</p></li><li><p>What harness engineering lasts through 2 OOMs of intelligence, and what doesn&#8217;t? Why?</p></li><li><p>As compute needs grow, with more demand side outside of frontier labs and more supply side out of Big Tech, how does quality degradation impact the market?</p></li><li><p>Do decentralized compute marketplaces make sense if compute supply is so limited? When does a frontier lab resort to something like this, if ever?</p></li><li><p>In 2024 the dominant interface was the web UI, in early 2025 Claude Code and Codex were released, and in late 2025 Openclaw and Hermes were launched. We&#8217;ve gone from web UI &#8594; harness &#8594; agent, or more specifically, LLM &#8594; LLM + tools &#8594; LLM + tools + loop. What&#8217;s the next unhobbling?</p><ul><li><p>How does that impact where the supply chain bottlenecks are and where value accrues?</p></li></ul></li><li><p>What is the historical relationship between labor and capital? How does AGI influence it?</p></li><li><p>Why would enterprise revenue growth slow down? Why would it speed up?</p></li><li><p>After building Slate in 2023, an LLM fine-tuned to make transactions onchain, and struggling with the accuracy and speed users required, I believe we were a bit discouraged by applied AI progress. The timing of the build was good and users were retained, but they weren&#8217;t engaged, and it did not work out. We ended up being right about a lot of the feedback loops, but wrong about the extent of the problem and the direction in which it grew. In some sense, the problem itself needs to grow as well, which usually means the market is growing, not necessarily in a quantity sense, but in a $ sense. 
If you&#8217;re starting small, that typically means the market is value-creative.</p></li><li><p>Does weak-to-strong alignment even work? Or is frontier research too hacky to generalize?</p></li><li><p>Why would a system that collects data and fine-tunes a model in the background to deliver a better model to the end user not work (i.e. online RL)? There are multiple companies attempting this today, but not many purely hands-off ones.</p></li><li><p>Amanda Askell at Anthropic suggests that having a Constitution helps generalization. Having different preferences across different types of tasks makes it extremely difficult to be aware of edge-case behavior or to scale effectively.</p></li></ul>]]></content:encoded></item><item><title><![CDATA[What traits do LLMs lack? Are they solvable?]]></title><description><![CDATA[It&#8217;s necessary to understand how LLMs are trained to know which domains they will excel in, and which ones they won&#8217;t.]]></description><link>https://handsdiff.substack.com/p/what-traits-do-llms-lack-are-they</link><guid isPermaLink="false">https://handsdiff.substack.com/p/what-traits-do-llms-lack-are-they</guid><dc:creator><![CDATA[hands]]></dc:creator><pubDate>Sun, 03 May 2026 00:12:52 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!hRZg!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F30602cd2-86e5-4ed9-b24a-ba1ec7d27e99_1983x793.png" length="0" type="image/png"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!hRZg!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F30602cd2-86e5-4ed9-b24a-ba1ec7d27e99_1983x793.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" 
srcset="https://substackcdn.com/image/fetch/$s_!hRZg!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F30602cd2-86e5-4ed9-b24a-ba1ec7d27e99_1983x793.png 424w, https://substackcdn.com/image/fetch/$s_!hRZg!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F30602cd2-86e5-4ed9-b24a-ba1ec7d27e99_1983x793.png 848w, https://substackcdn.com/image/fetch/$s_!hRZg!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F30602cd2-86e5-4ed9-b24a-ba1ec7d27e99_1983x793.png 1272w, https://substackcdn.com/image/fetch/$s_!hRZg!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F30602cd2-86e5-4ed9-b24a-ba1ec7d27e99_1983x793.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!hRZg!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F30602cd2-86e5-4ed9-b24a-ba1ec7d27e99_1983x793.png" width="1456" height="582" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/30602cd2-86e5-4ed9-b24a-ba1ec7d27e99_1983x793.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:582,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:1396029,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://handsdiff.substack.com/i/196252601?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F30602cd2-86e5-4ed9-b24a-ba1ec7d27e99_1983x793.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" 
class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!hRZg!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F30602cd2-86e5-4ed9-b24a-ba1ec7d27e99_1983x793.png 424w, https://substackcdn.com/image/fetch/$s_!hRZg!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F30602cd2-86e5-4ed9-b24a-ba1ec7d27e99_1983x793.png 848w, https://substackcdn.com/image/fetch/$s_!hRZg!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F30602cd2-86e5-4ed9-b24a-ba1ec7d27e99_1983x793.png 1272w, https://substackcdn.com/image/fetch/$s_!hRZg!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F30602cd2-86e5-4ed9-b24a-ba1ec7d27e99_1983x793.png 1456w" sizes="100vw" fetchpriority="high"></picture></div></a></figure></div><p>It&#8217;s necessary to understand how LLMs are trained to know which domains they will excel in, and which ones they won&#8217;t. For builders, it&#8217;s a waste of time to apply LLMs to domains that seem enticing but will fail in practice.</p><p>People often discuss data availability and verifiability in the context of how best to train and apply large language models. The prevailing stance is that the domains in which LLMs make the most rapid progress will have abundant data and be verifiable, such as coding. For example, if an LLM generates code, we have deterministic ways of verifying whether it did a good job: did the code compile, did it pass lints, and so on, and there are trillions of tokens of code. </p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://handsdiff.substack.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">Thanks for reading! Subscribe for free to receive new posts and support my work.</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div><p>There is something unsatisfying about these explanations. 
I can think of multiple domains with abundant data and verifiable outcomes that LLMs have failed to generalize to, namely trading public markets (did you make money?) and posting on social media (how many views did you get?). </p><p>There seem to be two main distinctions here. The first is dynamism: how rapidly do the good answers change? The second is assignment: how do I know which data to look at to generate a good answer? For both coding and math, the right answer is largely static, and the data to reference for a complete understanding of what to do next is clear. Dynamism requires the ability to internally model a changing external environment, and assignment requires the ability both to recognize and close knowledge gaps and to filter noise from signal within working memory.</p><p>How can we measure the ability of current models to handle knowledge-gap identification, knowledge collection, information filtration, and dynamic environment modeling? Do these abilities arise naturally from generalization under current pre-training and RL regimes, or are different algorithms needed? Which components of the current stack already lend models the ability to do any of these to some degree? Why?</p><p>If I had to guess, context-length scaling will solve all of these. Many people frame this as sample efficiency or online learning, but I&#8217;d bet those problems melt away in the face of 1B-token context windows (largely an engineering [memory bandwidth] problem, not a research problem) and the resulting in-context learning.</p><p>Will future LLMs then be able to trade, distribute content, or do other tasks with lots of data and verifiable outcomes, but with dynamic answers and incomplete information, better than any human today? 
I don&#8217;t see why not.</p>]]></content:encoded></item></channel></rss>