<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	>

<channel>
	<title>Uncategorized &#8211; Plainlii</title>
	<atom:link href="https://plainlii.com/es/category/uncategorized/feed/" rel="self" type="application/rss+xml" />
	<link>https://plainlii.com/es</link>
	<description>Plainlii</description>
	<lastBuildDate>Mon, 30 Mar 2026 21:19:12 +0000</lastBuildDate>
	<language>es</language>
	<sy:updatePeriod>hourly</sy:updatePeriod>
	<sy:updateFrequency>1</sy:updateFrequency>
	<generator>https://wordpress.org/?v=6.9.4</generator>

<image>
	<url>https://plainlii.com/wp-content/uploads/2025/11/Plainlii-Favicon-White-Pin.svg</url>
	<title>Uncategorized &#8211; Plainlii</title>
	<link>https://plainlii.com/es</link>
	<width>32</width>
	<height>32</height>
</image> 
	<item>
		<title>Plain Language Is the Foundation Your AI Implementation Is Missing</title>
		<link>https://plainlii.com/es/2026/03/30/plain-language-is-the-foundation-your-ai-implementation-is-missing/</link>
		
		<dc:creator><![CDATA[newemage]]></dc:creator>
		<pubDate>Mon, 30 Mar 2026 21:11:26 +0000</pubDate>
				<category><![CDATA[Uncategorized]]></category>
		<guid isPermaLink="false">https://plainlii.com/?p=2568</guid>

					<description><![CDATA[Plain Language Is the Foundation Your AI Implementation Is Missing When local governments talk about AI implementation, the conversation usually starts in the same place: tools, vendors, pilots, and budgets. What it rarely starts with is language. That&#8217;s a problem. Because before AI can improve civic services, it needs something to work with — and [&#8230;]]]></description>
										<content:encoded><![CDATA[<h1>Plain Language Is the Foundation Your AI Implementation Is Missing</h1>
<p>When local governments talk about AI implementation, the conversation usually starts in the same place: tools, vendors, pilots, and budgets. What it rarely starts with is language.</p>
<p>That&#8217;s a problem. Because before AI can improve civic services, it needs something to work with — and in most local governments, that something is a sprawling, inconsistent body of documents, policies, forms, and workflows written in language that&#8217;s unclear even to the humans who use it every day.</p>
<p>Plain language isn&#8217;t a communications nicety. For local governments investing in AI, it&#8217;s the foundation everything else depends on.</p>
<h2>What AI Actually Does With Your Content</h2>
<p>AI tools — whether they&#8217;re summarizing reports, drafting resident communications, answering service questions, or routing requests — are only as good as the content they&#8217;re trained on or working with.</p>
<p>Feed an AI system vague policy language, inconsistently formatted procedures, or jargon-heavy forms, and you get vague, inconsistent, jargon-heavy outputs. Garbage in, garbage out is a cliché because it&#8217;s true — and in local government, where the stakes include resident trust, legal compliance, and equitable service delivery, bad outputs aren&#8217;t just inconvenient. They&#8217;re costly.</p>
<p>Plain language implementation solves this at the source. When your documents are clear, consistent, and structured for comprehension, AI tools have something solid to work with. The outputs improve. The errors decrease. The staff time spent correcting AI-generated content drops significantly.</p>
<p><img fetchpriority="high" decoding="async" class="wp-image-2572 aligncenter" src="https://plainlii.com/wp-content/uploads/2026/03/pl-tranform-1-300x225.png" alt="aclara robot confused with gobbledygook and happy with clear content" width="332" height="249" srcset="https://plainlii.com/wp-content/uploads/2026/03/pl-tranform-1-300x225.png 300w, https://plainlii.com/wp-content/uploads/2026/03/pl-tranform-1-1024x768.png 1024w, https://plainlii.com/wp-content/uploads/2026/03/pl-tranform-1-768x576.png 768w, https://plainlii.com/wp-content/uploads/2026/03/pl-tranform-1-1536x1152.png 1536w, https://plainlii.com/wp-content/uploads/2026/03/pl-tranform-1-2048x1536.png 2048w, https://plainlii.com/wp-content/uploads/2026/03/pl-tranform-1-16x12.png 16w" sizes="(max-width: 332px) 100vw, 332px" /></p>
<h2>The Communication Gap Nobody Is Talking About</h2>
<p>Most AI readiness frameworks focus on technical infrastructure: data systems, security protocols, integration capacity. These matter. But they don&#8217;t address the communication layer — the actual language your organization uses to document processes, instruct staff, inform residents, and record decisions.</p>
<p>In most local governments, that communication layer has never been audited. Policies written a decade ago sit alongside newer procedures in different formats, different reading levels, and different terminology for the same concepts. Nobody planned it that way. It just accumulated.</p>
<p>When AI enters that environment, it doesn&#8217;t fix the inconsistency. It inherits it — and then scales it.</p>
<p>Plain language audits surface these gaps before they become AI problems. They identify where terminology is inconsistent, where instructions are ambiguous, where documents assume knowledge that staff or residents may not have. That work, done up front, dramatically reduces the risk of an AI implementation going wrong.</p>
<h2>Plain Language and AI Readiness Are the Same Work</h2>
<p>Here&#8217;s what local government leaders often don&#8217;t realize until they&#8217;re deep into an AI project: the work of plain language and the work of AI readiness overlap almost entirely.</p>
<p>Both require you to inventory and assess your existing content. Both require clear, consistent terminology across departments. Both require documented workflows — not just the ones that live in people&#8217;s heads. Both require staff who understand what good communication looks like and why it matters.</p>
<p>Organizations that have invested in plain language are, almost by definition, better prepared for AI adoption. Their content is cleaner. Their processes are documented. Their staff are trained to think about how information is structured and received.</p>
<p>Organizations that haven&#8217;t done that work will do it eventually — either proactively, before AI implementation, or reactively, after AI implementation surfaces every gap at scale.</p>
<h2>What This Looks Like in Practice</h2>
<p>Consider resident-facing services: permit applications, benefits enrollment, complaint submission. These are high-volume, high-stakes interactions where AI tools are increasingly being deployed to streamline processing and improve response times.</p>
<p>If the underlying forms and instructions are written in complex bureaucratic language, AI doesn&#8217;t simplify them — it processes them as-is and returns outputs residents still can&#8217;t understand. The bottleneck moves but doesn&#8217;t disappear.</p>
<p>Now consider the same services after a plain language review. Forms use common words. Instructions are step-by-step. Terminology is consistent across channels. When AI enters that environment, it has clear inputs to work with and produces clear outputs. Staff spend less time fielding confused calls. Residents complete transactions successfully on the first attempt. The efficiency gains AI promised actually materialize.</p>
<p>Plain language isn&#8217;t preparation for AI. It&#8217;s what makes AI work.</p>
<h2>Where ACLARA Comes In</h2>
<p>ACLARA is an AI readiness scoring platform built specifically for local government — and plain language readiness is central to how it works.</p>
<p>An ACLARA audit doesn&#8217;t just assess your technical infrastructure. It evaluates the communication layer: the quality and consistency of your content, the clarity of your documented workflows, the capacity of your staff to communicate effectively in an AI-assisted environment.</p>
<p>The result is a concrete readiness score with a prioritized roadmap — so your leadership team knows exactly where to focus before committing to new tools or new vendors.</p>
<p>If your city or county is making AI decisions right now, the most useful thing you can do is understand where you actually stand. Not where you hope you stand. Not where a vendor&#8217;s demo suggested you stand. Where you actually stand.</p>
<p>That&#8217;s what ACLARA is built to tell you.</p>
<p>Request early access at aclara.ai — and visit plainlii.com to learn how Plain Language International helps local governments build the communication foundation that makes everything else work.</p>]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>Plain Language is Needed When Big Financial News Sounds Like a Foreign Language</title>
		<link>https://plainlii.com/es/2026/03/04/crypto-financial-plain-language/</link>
		
		<dc:creator><![CDATA[newemage]]></dc:creator>
		<pubDate>Wed, 04 Mar 2026 23:19:38 +0000</pubDate>
				<category><![CDATA[Uncategorized]]></category>
		<guid isPermaLink="false">https://plainlii.com/?p=2507</guid>

					<description><![CDATA[When Big Financial News Sounds Like a Foreign Language This week, a major cryptocurrency exchange announced it had received a Federal Reserve master account — the first digital asset company in U.S. history to get one. The announcement was full of phrases like &#8220;sovereign financial rails,&#8221; &#8220;atomic settlement,&#8221; and &#8220;full-reserve model.&#8221; If your eyes glazed [&#8230;]]]></description>
					<content:encoded><![CDATA[<h1><strong>When Big Financial News Sounds Like a Foreign Language</strong></h1>
<p>This week, a major <a href="https://blog.kraken.com/news/federal-reserve-master-account" target="_blank" rel="noopener">cryptocurrency exchange announced</a> it had received <strong>a Federal Reserve master account</strong> — the first digital asset company in U.S. history to get one.</p>
<p>The announcement was full of phrases like &#8220;sovereign financial rails,&#8221; &#8220;atomic settlement,&#8221; and &#8220;full-reserve model.&#8221; If your eyes glazed over, you&#8217;re not alone.</p>
<p>Here&#8217;s what it actually means: a crypto company can now move money directly through the U.S. government&#8217;s payment system — the same infrastructure banks use — without going through a middleman bank. That&#8217;s a big deal, because those middlemen have been a weak link for crypto companies for years.</p>
<h2>Inside the Crypto Kraken Case</h2>
<p class="font-claude-response-body break-words whitespace-normal leading-[1.7]">A Crypto firm gaining direct access to transacting with the Federal Reserve is, to say the least, a significant development. Here&#8217;s what it means across a few dimensions:</p>
<h3 class="font-claude-response-body break-words whitespace-normal leading-[1.7]"><strong>For Kraken specifically</strong></h3>
<p class="font-claude-response-body break-words whitespace-normal leading-[1.7]">The practical upshot is that Kraken can now move dollars directly on Fedwire — the backbone of large-value U.S. payments — without routing through a correspondent bank like JPMorgan or Silvergate (which collapsed in 2023). That removes a key dependency and a layer of cost and counterparty risk. It also means they&#8217;re no longer subject to a correspondent bank deciding to terminate their relationship, which has been an existential threat for crypto firms in the past.</p>
<h3 class="font-claude-response-body break-words whitespace-normal leading-[1.7]"><strong>For the crypto industry broadly</strong></h3>
<p class="font-claude-response-body break-words whitespace-normal leading-[1.7]">This is the most important implication. For years, one of crypto&#8217;s structural vulnerabilities has been its fragile connection to the traditional banking system — crypto firms depended on a small number of willing banks, and when those banks failed or exited (Silvergate, Signature), the whole sector felt it. A direct Fed master account changes that calculus. If this model becomes replicable, crypto infrastructure firms could become <em>first-class participants</em> in the U.S. payment system rather than tolerated guests.</p>
<h3 class="font-claude-response-body break-words whitespace-normal leading-[1.7]"><strong>For regulators and policy</strong></h3>
<p class="font-claude-response-body break-words whitespace-normal leading-[1.7]">The Fed has historically been extremely reluctant to grant master accounts to non-traditional institutions; there&#8217;s ongoing litigation from other crypto banks (Custodia Bank fought a similar application for years and lost). So, the fact that Kraken got one signals a substantial shift in regulatory posture, likely reflecting the broader policy environment shift toward crypto under the current administration. It may open the door for other Wyoming SPDIs, but the Fed will probably move slowly.</p>
<h3 class="font-claude-response-body break-words whitespace-normal leading-[1.7]"><strong>The &#8220;atomic settlement&#8221; vision and w</strong><strong>hat to watch for</strong></h3>
<p class="font-claude-response-body break-words whitespace-normal leading-[1.7]">The most forward-looking implication in the announcement is the mention of <em>atomic settlement</em> (dollars and the crypto swap hands at exactly the same moment so no one has the risk of leaving empty-handed). Right now, even in regulated markets, there&#8217;s a timing gap between the crypto and the dollar side of the trade. If Kraken can eventually connect on-chain settlement with  direct Fedwire access, that closes the loop in a way that&#8217;s genuinely new. (The settlement means completing the blockchain transaction&#8211;remember that blockchain is the underlying technology, akin to a digital ledger, that creates and records transactions across many computers, making it impossible to change past entries because each new &#8220;block&#8221; of data is securely chained to the last one.) That&#8217;s still aspirational, but it&#8217;s architecturally plausible now in a way it wasn&#8217;t before.</p>
<p class="font-claude-response-body break-words whitespace-normal leading-[1.7]">The key caveats are that this starts with a narrow, phased rollout for institutional clients, and the Fed will watch closely before allowing expansion. The real test is whether this remains a one-off or becomes a template — and whether Custodia and similar institutions can now point to this approval to reopen their own cases.</p>
<p class="font-claude-response-body break-words whitespace-normal leading-[1.7]">But the reality is that it&#8217;s no longer hyperbole to call this structural. It legitimizes a new kind of financial institution that sits at the intersection of crypto custody and sovereign payment rails, and it reduces the sector&#8217;s chronic vulnerability to banking access being cut off.</p>
<figure id="attachment_2509" aria-describedby="caption-attachment-2509" style="width: 300px" class="wp-caption aligncenter"><img decoding="async" class="wp-image-2509 size-medium" src="https://plainlii.com/wp-content/uploads/2026/03/dogfooding-300x194.png" alt="Illustration of green dollar bills, blue cryptocurrency coins, and an orange dog sitting side by side questining who is &quot;dogfooding&quot; or using a product internally before release." width="300" height="194" srcset="https://plainlii.com/wp-content/uploads/2026/03/dogfooding-300x194.png 300w, https://plainlii.com/wp-content/uploads/2026/03/dogfooding-1024x663.png 1024w, https://plainlii.com/wp-content/uploads/2026/03/dogfooding-768x497.png 768w, https://plainlii.com/wp-content/uploads/2026/03/dogfooding-1536x994.png 1536w, https://plainlii.com/wp-content/uploads/2026/03/dogfooding-18x12.png 18w, https://plainlii.com/wp-content/uploads/2026/03/dogfooding.png 2000w" sizes="(max-width: 300px) 100vw, 300px" /><figcaption id="caption-attachment-2509" class="wp-caption-text">Who is dogfooding what? Is the Fed testing crypto value as a tool or crypto testing the Fed as a launching pad?</figcaption></figure>
<h2 class="font-claude-response-body break-words whitespace-normal leading-[1.7]"><strong>Why does the language matter?</strong></h2>
<p class="font-claude-response-body break-words whitespace-normal leading-[1.7]">Financial institutions often communicate in ways that are technically accurate but practically useless to most readers. The people most affected — customers, investors, policymakers, the general public — are left guessing.</p>
<p class="font-claude-response-body break-words whitespace-normal leading-[1.7]">Plain language isn&#8217;t about dumbing things down. It&#8217;s about respecting your reader&#8217;s time and making sure your message actually lands. Complex ideas <em>can</em> be explained clearly. &#8220;Direct access to Federal Reserve payment infrastructure&#8221; can become &#8220;they can now move money through the government&#8217;s banking system without a middleman.&#8221; Same fact. Completely different level of understanding.</p>
<p class="font-claude-response-body break-words whitespace-normal leading-[1.7]"><strong>The stakes are real. </strong>When financial news is hard to understand, people make worse decisions. They misread risk. They tune out policy conversations that directly affect them. They trust institutions less — often for good reason.</p>
<p class="font-claude-response-body break-words whitespace-normal leading-[1.7]">The crypto industry is at a turning point. It&#8217;s moving from the fringes of finance toward the center. That transition will go more smoothly — for companies, regulators, and the public — if the communication keeps pace with the complexity.</p>
<p class="font-claude-response-body break-words whitespace-normal leading-[1.7]">Clear writing isn&#8217;t a nice-to-have. It&#8217;s infrastructure too!</p>]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>AI Rewrites the Language Industry — And Most Companies Aren’t Ready</title>
		<link>https://plainlii.com/es/2026/02/24/ai-rewrites-language-industry/</link>
		
		<dc:creator><![CDATA[romina@plainlii.com]]></dc:creator>
		<pubDate>Tue, 24 Feb 2026 08:50:59 +0000</pubDate>
				<category><![CDATA[Uncategorized]]></category>
		<guid isPermaLink="false">https://plainlii.com/?p=2498</guid>

					<description><![CDATA[AI Rewrites the Language Industry — And Most Companies Aren’t Ready The Language Industry Is Not Evolving — It’s Being Rewritten For decades, the language industry has described itself as “resilient,” “people-driven,” and “quality-focused.” Those words describe strengths. But today, they are insufficient. What is happening now is not gradual modernization. It is identity disruption. [&#8230;]]]></description>
										<content:encoded><![CDATA[<h1><strong>AI Rewrites the Language Industry — And Most Companies Aren’t Ready</strong></h1>
<h2><strong>The Language Industry Is Not Evolving — It’s Being Rewritten</strong></h2>
<p>For decades, the language industry has described itself as “resilient,” “people-driven,” and “quality-focused.” Those words describe strengths. But today, they are insufficient.</p>
<p>What is happening now is not gradual modernization. It is identity disruption. The language industry is not simply adopting Artificial Intelligence (AI) — it is being redefined by it.</p>
<p>And many players are still acting as if this is just another technology upgrade. It is not.</p>
<h2><strong>The Comfort Illusion</strong></h2>
<p>For years, the industry operated within a predictable model:</p>
<ul>
<li>Per-word pricing</li>
<li>Human translation as the primary production engine</li>
<li>Agencies managing distributed freelancers and adding value in orchestration</li>
<li>Technology as productivity support</li>
</ul>
<p>Margins were tight but stable. Demand was steady. Growth meant more linguists.</p>
<p>Then the content explosion happened. Software ate the world. Global expansion accelerated. And AI reached linguistic competence at scale.</p>
<p>The old operating model cannot support the new content economy. Yet parts of the industry are clinging to incremental adjustments — adding machine translation as a line item, relabeling post-editing services, or rebranding as “AI-powered” without fundamentally restructuring their workflows.</p>
<p><a href="https://plainlii.com/wp-content/uploads/2026/02/ai-in-lang.svg"><img decoding="async" class="size-medium wp-image-2500 aligncenter" src="https://plainlii.com/wp-content/uploads/2026/02/ai-in-lang-hi-300x194.png" alt="Illustration of an open book transforming into binary code and digital circuitry" width="300" height="194" srcset="https://plainlii.com/wp-content/uploads/2026/02/ai-in-lang-hi-300x194.png 300w, https://plainlii.com/wp-content/uploads/2026/02/ai-in-lang-hi-1024x663.png 1024w, https://plainlii.com/wp-content/uploads/2026/02/ai-in-lang-hi-768x497.png 768w, https://plainlii.com/wp-content/uploads/2026/02/ai-in-lang-hi-1536x994.png 1536w, https://plainlii.com/wp-content/uploads/2026/02/ai-in-lang-hi-2048x1325.png 2048w, https://plainlii.com/wp-content/uploads/2026/02/ai-in-lang-hi-18x12.png 18w" sizes="(max-width: 300px) 100vw, 300px" /></a></p>
<h2><strong>AI Is Not a Tool. It Is Infrastructure.</strong></h2>
<p>Neural machine translation was the first signal. Large language models are the second wave — and far more disruptive. AI can now generate multilingual content “natively.” Yes, it is still predicting the next token. But with million-token context windows, AI models can process vast amounts of information in a single prompt. That is roughly 750,000 words, thousands of files, or 90 minutes of video (a quick sketch of the arithmetic follows the list below). It is not a lifetime of experience, but it helps AI:</p>
<ul>
<li>Adapt tone and brand voice across markets</li>
<li>Perform terminology enforcement at scale</li>
<li>Operate in real-time within software systems</li>
</ul>
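<p>Where does the 750,000-word figure come from? A common heuristic (an approximation that varies by language and tokenizer) is that one English token is roughly three-quarters of a word:</p>
<pre>
# Back-of-the-envelope conversion. The 0.75 words-per-token figure is
# a common rule of thumb for English text, not an exact constant.
TOKENS_PER_WINDOW = 1_000_000
WORDS_PER_TOKEN = 0.75

print(f"~{int(TOKENS_PER_WINDOW * WORDS_PER_TOKEN):,} words per prompt")
# ~750,000 words per prompt
</pre>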
<p>This shifts language from a downstream service to an embedded system capability. Companies are no longer asking, “How do we translate this content?” They are asking, “How do we design content to scale globally from day one?” That question changes everything.</p>
<h2><strong>The Human Role Is Changing — Not Disappearing</strong></h2>
<h3><strong>The End of Per-Word Thinking</strong></h3>
<p>Per-word pricing made sense in a human-only production model. It becomes misaligned in an AI-augmented one. When marginal production cost approaches zero for first-pass output, value moves elsewhere:</p>
<ul>
<li style="font-size: 16px;">Risk management (this is HUGE!)</li>
<li>Domain expertise</li>
<li>Brand protection</li>
<li>Compliance assurance</li>
<li>Data governance</li>
<li>Accessibility</li>
<li>Engagement</li>
</ul>
<p>The industry must confront a difficult truth: translation as a commodity is collapsing. What remains valuable is judgment.</p>
<h3><strong>The Rise of Language Operations (LangOps)</strong></h3>
<p>The companies that will lead the next decade are not “translation providers.” They are architects of language infrastructure.</p>
<p>Language Operations means:</p>
<ul>
<li>API-driven pipelines integrated into product development</li>
<li>Continuous localization within CI/CD environments</li>
<li>AI-assisted generation and adaptation</li>
<li>Real-time analytics and performance tracking</li>
<li>Human oversight applied strategically, not universally</li>
</ul>
<p>Organizations that understand this shift from service to architecture are building scalable multilingual ecosystems. Those that don&#8217;t are optimizing workflows that may soon be obsolete.</p>
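<p>As a concrete illustration of &#8220;human oversight applied strategically, not universally,&#8221; here is a deliberately simplified sketch of one continuous-localization step. Every function, name, and threshold is hypothetical, a stand-in for whatever MT service and review workflow an organization actually runs.</p>
<pre>
# Simplified LangOps sketch: new source strings flow through machine
# translation automatically; only low-confidence output is routed to a
# human reviewer. All names are hypothetical stand-ins.

def machine_translate(text, target_lang):
    """Stand-in for a real MT API call; returns (translation, confidence)."""
    return f"[{target_lang}] {text}", 0.92   # dummy values for the sketch

def localize_batch(changed_strings, target_lang, review_threshold=0.85):
    auto_published, needs_review = [], []
    for source in changed_strings:
        translation, confidence = machine_translate(source, target_lang)
        if confidence &gt;= review_threshold:
            auto_published.append(translation)          # ship automatically
        else:
            needs_review.append((source, translation))  # human judgment
    return auto_published, needs_review

published, queue = localize_batch(["Save changes", "Delete account"], "es")
print(len(published), "published;", len(queue), "queued for human review")
</pre>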
<h3><strong>The Human Role</strong></h3>
<p>The narrative of “AI replacing translators” is simplistic. What is happening is more nuanced — and more demanding. The linguist of the future is:</p>
<ul>
<li>A domain specialist</li>
<li>A cultural strategist</li>
<li>An AI output evaluator</li>
<li>A quality architect</li>
</ul>
<p>Routine translation will be automated. High-context judgment will not. The uncomfortable reality is that generalist production work will shrink. Specialized expertise will grow in value. This bifurcation is already underway.</p>
<p><img loading="lazy" decoding="async" class="alignnone  wp-image-2503 aligncenter" src="https://plainlii.com/wp-content/uploads/2026/02/human-touch-300x194.png" alt="ue silhouette of a woman touching digital code as it flows from an open book.”" width="337" height="218" srcset="https://plainlii.com/wp-content/uploads/2026/02/human-touch-300x194.png 300w, https://plainlii.com/wp-content/uploads/2026/02/human-touch-1024x663.png 1024w, https://plainlii.com/wp-content/uploads/2026/02/human-touch-768x497.png 768w, https://plainlii.com/wp-content/uploads/2026/02/human-touch-1536x994.png 1536w, https://plainlii.com/wp-content/uploads/2026/02/human-touch-2048x1325.png 2048w, https://plainlii.com/wp-content/uploads/2026/02/human-touch-18x12.png 18w" sizes="(max-width: 337px) 100vw, 337px" /></p>
<h2><strong>The Strategic Shift: Language as Growth Engine</strong></h2>
<p>Forward-thinking enterprises treat localization as market acceleration infrastructure. Multilingual capability now directly influences:</p>
<ul>
<li>Revenue expansion</li>
<li>Customer acquisition</li>
<li>Product adoption</li>
<li>Brand perception</li>
</ul>
<p>In global digital markets, language agility equals competitive advantage. Speed matters. Scalability matters. Consistency matters. Risk management matters.</p>
<p>The providers who enable those outcomes — not just translated words — will define the next era.</p>
<h3><strong>The Industry’s Choice</strong></h3>
<p>The language industry stands at a crossroads:</p>
<ol>
<li>Protect legacy structures and compete on shrinking margins</li>
<li>Redesign itself around AI-native operations</li>
</ol>
<p>One path leads to commoditization. The other leads to strategic relevance.</p>
<p>This transformation is not theoretical. It is already visible in procurement behavior, startup innovation, enterprise localization strategies, and venture investment patterns.</p>
<p>The real risk is not disruption but underestimating how deep this shift goes. Because this is not about better translation technology. It is about a new operating model for global communication.</p>
<p>And operating models, once broken, do not quietly return.</p>
<h2><strong>The Missing Conversation: Plain Language as Power</strong></h2>
<p>As the industry races toward AI-native workflows and scalable multilingual systems, one principle risks being overlooked: clarity.</p>
<p>Plain language is not cosmetic. It is structural.</p>
<p>In civic contexts, language determines access. Policies, voting materials, public health guidance, and legal notices are only as effective as they are understandable. When language is opaque, participation declines. When it is clear, engagement rises.</p>
<p>In consumer markets, complexity erodes trust. Contracts buried in jargon, unclear return policies, ambiguous terms of service — these are not neutral communication choices. They shape power dynamics between institutions and individuals.</p>
<p>AI now gives organizations the ability to produce content at unprecedented scale. But scale amplifies whatever philosophy guides it. If complexity is automated, confusion scales. If clarity is engineered, trust scales.</p>
<p>Plain language, therefore, becomes a strategic decision.</p>
<p>For enterprises, it improves customer satisfaction, reduces support volume, strengthens brand credibility, and lowers legal risk. For governments and regulated industries, it reinforces transparency and democratic participation.</p>
<p>In a world where machines can generate infinite words, the competitive advantage may belong to those who choose fewer — and clearer — ones.</p>
<p>The future of the language industry is not just multilingual.</p>
<p>It must also be intelligible.</p>
]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>Validation and Verification in Quality Evaluation: 7 Tips for Stronger Results</title>
		<link>https://plainlii.com/es/2026/01/22/validation-and-verification-for-quality/</link>
		
		<dc:creator><![CDATA[newemage]]></dc:creator>
		<pubDate>Thu, 22 Jan 2026 20:17:11 +0000</pubDate>
				<category><![CDATA[Uncategorized]]></category>
		<guid isPermaLink="false">https://plainlii.com/?p=2487</guid>

					<description><![CDATA[TQE Systems, Validation and Verification When do Translation Quality Scores Signal “Fitness for Purpose”? Translation quality scores are supposed to predict success. But what happens when translations pass every quality check—and still fail in the market? The gap between measured quality and actual fitness reveals a fundamental flaw in how most organizations validate their translation [&#8230;]]]></description>
										<content:encoded><![CDATA[<h1>TQE Systems, Validation and Verification</h1>
<h2>When do Translation Quality Scores Signal “Fitness for Purpose”?</h2>
<p>Translation quality scores are supposed to predict success. But what happens when translations pass every quality check—and still fail in the market? The gap between measured quality and actual fitness reveals a fundamental flaw in how most organizations validate their translation workflows.</p>
<p>The stakes for translation quality validation have escalated significantly. Regulatory frameworks in medical devices and pharmaceuticals now require documented evidence that translations enable safe use—not just linguistic accuracy. AI and machine translation have compressed production timelines while introducing new uncertainty about output quality. Global brands face amplified reputational risk in markets where a single mistranslation can trigger viral social media backlash. Meanwhile, distributed teams and external vendors make specification alignment harder to maintain. Organizations can no longer afford to discover validation failures only after deployment, when correction costs multiply exponentially.</p>
<p>This article argues for a clear conceptual separation between verification and validation, and for applying that separation consistently at two different logical levels: the translation product and the translation quality evaluation (TQE) system that evaluates it. Without this separation, procedural compliance can be easily mistaken for effectiveness, and confidence in quality decisions is weakened.</p>
<h1>Verification and Validation: The Core Distinction</h1>
<p>In essence, <strong>verification</strong> is about checking compliance with <em>specifications</em> (stated and operationalized requirements) and <strong>validation</strong> is about checking fulfillment of actual <em>requirements</em> (stakeholder needs and expectations). Verification operates within the requirements space, while validation requires contextual evidence.</p>
<p>The distinction between meeting <em>specifications</em> and meeting <em>requirements</em> is well established in ISO quality management standards, yet its implications for translation quality evaluation are often underappreciated. A product can fully comply with its specifications and still fail to meet user needs.</p>
<p>This failure can arise from an array of causes, such as specifications that were incomplete or misaligned with requirements, or the use of a system that was not a reliable predictor of success.</p>
<p>When a metric is implemented incorrectly, scoring rules are applied inconsistently, or translators, evaluators, and validators rely on mismatched specifications, the issue is one of verification failure.</p>
<p>By contrast, when a metric is correctly implemented but is based on specifications that do not or no longer reflect current user requirements, the resulting scores may not support fitness for intended use.</p>
<p>Reliability issues are particularly important in this context: even a well-designed and correctly implemented TQE system cannot support valid decisions if its results are unstable. For example, inadequate evaluator training may lead to poor inter-rater or intra-rater agreement, undermining confidence in quality scores.</p>
<p>The following example illustrates product-level validation failure despite both product and system verification success. A healthcare organization consistently achieved 95% quality scores on patient-facing medication instructions using a rigorously implemented TQE system. Post-deployment analysis revealed critical comprehension failures: patients misunderstood dosing schedules and contraindication warnings. The specifications were being met—terminology was consistent, style guides were followed, error counts were low. But the specifications didn&#8217;t reflect how patients actually needed to process safety-critical information under cognitive load, time pressure, or health literacy constraints. The failure wasn&#8217;t procedural. It was a validation gap that specification compliance couldn&#8217;t detect.</p>
<h1>Two Objects, Two Levels of Assurance: Product vs. System</h1>
<p>To complicate matters, another source of confusion in translation quality discussions is the failure to distinguish between two different objects: 1) <strong>the translation product</strong> (the translated content) and 2) <strong>the TQE system</strong> (the metric, dimensions, weights, thresholds, sampling, and evaluators used to assess that content).</p>
<p>Verification and validation apply to both—but they do so at different logical levels. Activities at one level cannot substitute for activities at the other.</p>
<h2>Product-Level Verification and Validation</h2>
<h3>Product Verification: Conformance to Specifications</h3>
<p>At the product level, verification addresses the question of whether <em>a translation conforms to its project specifications</em>. Project specifications typically include terminology requirements, style guide adherence, and process constraints, such as revision requirements or the use or non-use of MT.</p>
<p>Product verification checks whether such specifications have been implemented correctly. This is the domain of linguistic checks, QA tools, and analytic quality evaluation using metrics such as MQM.</p>
<p>For further detail on MQM dimensions, error taxonomies, and metric design, see resources published by the <a href="https://themqm.org/" target="_blank" rel="noopener">MQM Council</a>.</p>
<p>It is critical to note that <strong>verification operates entirely within the space of specifications</strong>. It does not directly assess stakeholder needs, as it assumes that the specifications adequately represent those needs.</p>
<h3>Product Validation: Fitness for Purpose</h3>
<p>Product validation addresses a different question, namely, whether <em>a translation is fit for its intended communicative purpose and usable by its target audience</em>. This question cannot be answered by specification compliance alone. Product validation requires evidence from outside the verification process, such as:</p>
<ul>
<li>Task success rates,</li>
<li>Stakeholder acceptance,</li>
<li>Real-world deployment outcomes.</li>
</ul>
<p>Product validation may occur occasionally rather than systematically, and when it fails, it should trigger an investigation into the root causes: Were the specifications implemented correctly? Are the specifications still appropriate? Have requirements changed?</p>
<h1>System-Level Verification and Validation</h1>
<h3>TQE System Verification: Correct Implementation</h3>
<p>At the system level, verification addresses whether <em>a TQE system has been implemented and applied as defined</em>. This includes confirming that processes are consistent and repeatable, and that:</p>
<ul>
<li>Metrics are applied as designed.</li>
<li>Scoring rules are followed.</li>
<li>Evaluators are trained to apply the metric consistently.</li>
</ul>
<p>System verification ensures procedural correctness. It does <strong>not</strong> answer whether the system measures what actually matters.</p>
<h3>TQE System Validation: Confidence for Decision-Making</h3>
<p>System validation addresses a hard question: <em>whether the TQE system reliably supports correct quality-related decisions for a given context</em>. In practice, this means addressing whether:</p>
<ul>
<li>High scores reliably correspond to translations that meet stakeholder requirements.</li>
<li>Low scores reliably flag translations that do not.</li>
<li>Decision thresholds (publish, revise, reject) lead to appropriate outcomes.</li>
</ul>
<p>Answering these questions requires <strong>meta-evidence</strong>, which emerges from outside the specifications space:</p>
<ul>
<li>Expert review of quality decisions,</li>
<li>Alignment between evaluation outcomes and user acceptance,</li>
<li>Correlation between scores and downstream outcomes (such as task success, user complaints, support tickets)</li>
</ul>
<p>A TQE system can be fully verified and still be ineffective if it has never been validated for the decisions that it informs.</p>
<h1>Risk, Uncertainty, and the Limits of Scores</h1>
<h2>When a Passing Score Fails to Meet User Needs and Expectations (The Disney Problem)</h2>
<p>Consider a scenario where a translation complies with all specifications. It is evaluated by a correctly implemented TQE system. It passes all thresholds. And yet… users reject it. Nothing has gone wrong at the level of verification. The failure occurs at the level of validation. This can happen when:</p>
<ul>
<li>Specifications no longer reflect user needs, leading to costly post-launch revisions or brand reputation damage in key markets.</li>
<li>Error weights do not reflect real risk.</li>
<li>Sampling introduces excessive uncertainty, potentially exposing entire product lines to undetected risk in user-critical segments.</li>
<li>Evaluators lack access to relevant specifications.</li>
</ul>
<p>The lesson is simple but uncomfortable: <strong>procedural correctness does not guarantee effectiveness</strong>.</p>
<h2>Risk Management as Validation Variable</h2>
<p>Quality scores are often treated as deterministic signals. But, in reality, they are inferences made under uncertainty. As such, their interpretation requires implementers to consider additional questions, including:</p>
<ul>
<li>How close is the score to the decision threshold?</li>
<li>What is the level of evaluator agreement?</li>
<li>How much uncertainty is introduced by sampling?</li>
<li>Do different quality dimensions tell a consistent story?</li>
</ul>
<p>Confidence intervals, inter-rater agreement, and internal consistency are not “nice to have” analytics; they are potential sources of validation evidence. They help determine whether a score is a reliable basis for decision-making or whether additional validation (such as user testing) is warranted.</p>
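<p>To make this concrete, here is a small sketch (in Python, with invented numbers) of two of those signals: Cohen&#8217;s kappa for inter-rater agreement on pass/fail judgments, and a bootstrap confidence interval for a sampled mean quality score.</p>
<pre>
# Two validation signals computed from scratch: inter-rater agreement
# (Cohen's kappa) and a percentile bootstrap confidence interval for
# a mean segment score. All data below is invented for illustration.
import random

def cohens_kappa(rater_a, rater_b):
    """Agreement beyond chance between two raters on pass/fail labels."""
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    p_a_pass = sum(rater_a) / n
    p_b_pass = sum(rater_b) / n
    expected = p_a_pass * p_b_pass + (1 - p_a_pass) * (1 - p_b_pass)
    return (observed - expected) / (1 - expected)

def bootstrap_ci(scores, reps=10_000, level=0.95):
    """Percentile bootstrap interval for the mean segment score."""
    means = sorted(
        sum(random.choices(scores, k=len(scores))) / len(scores)
        for _ in range(reps)
    )
    low = means[int((1 - level) / 2 * reps)]
    high = means[int((1 + level) / 2 * reps) - 1]
    return low, high

rater_a = [1, 1, 0, 1, 0, 1, 1, 0, 1, 1]   # 1 = pass, 0 = fail
rater_b = [1, 0, 0, 1, 0, 1, 1, 1, 1, 1]
print("kappa:", round(cohens_kappa(rater_a, rater_b), 2))

segment_scores = [96, 91, 99, 88, 94, 97, 85, 93, 90, 95]
low, high = bootstrap_ci(segment_scores)
print(f"mean score 95% CI: [{low:.1f}, {high:.1f}]")
# If the interval straddles the publish threshold, the score alone is
# not a reliable basis for the decision; more validation is warranted.
</pre>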
<p>Validation failures announce themselves late and expensively. Organizations discover that &#8220;compliant&#8221; translations don&#8217;t work only after launch, when typical correction costs run 10-30x higher than proactive validation. Emergency re-translations compress timelines and strain vendor relationships. Support tickets from confused users accumulate faster than quality scores predicted. Product launches delay while teams troubleshoot translations that technically passed all checks. In regulated industries, inadequate validation creates audit exposure that procedural compliance alone cannot address. These costs don&#8217;t appear in quality reports because unvalidated systems measure activity, not outcomes.</p>
<h1>Seven Tips to Validate Without Overcomplicating Quality Management</h1>
<p>Validation does not require continuous user testing or exhaustive experimentation. It requires <strong>targeted evidence that reduces the most relevant uncertainties</strong>. The following tips provide practical entry points for validation at both the product and system level, particularly in high-volume and risk-sensitive translation workflows.</p>
<h2>1. Validate at Decision Boundaries, Not Everywhere</h2>
<p>Validation efforts should concentrate where <strong>decisions carry risk</strong>. Prioritize validation when:</p>
<ul>
<li>Scores cluster near publish/reject thresholds,</li>
<li>Quality dimensions conflict (e.g., fluency high, borderline accuracy),</li>
<li>Content is user-facing, safety-critical, or brand-sensitive.</li>
</ul>
<p>Rather than spreading validation uniformly, targeting decision boundaries concentrates effort where it matters most.</p>
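<p>A sketch of what this triage can look like in practice. The threshold, margin, and job names below are invented; in a real workflow the margin should come from measured score uncertainty, such as the confidence intervals discussed earlier.</p>
<pre>
# Concentrate validation where scores sit close to the publish
# threshold. Threshold, margin, and job names are all illustrative.
PUBLISH_THRESHOLD = 90.0
UNCERTAINTY_MARGIN = 3.0   # assumed +/- band around each score

jobs = {"patient_leaflet": 91.5, "marketing_page": 97.0,
        "ui_strings": 88.2, "internal_memo": 75.0}

for name, score in jobs.items():
    if abs(score - PUBLISH_THRESHOLD) &lt;= UNCERTAINTY_MARGIN:
        print(f"{name}: {score} -- near the boundary, validate further")
    elif score &gt;= PUBLISH_THRESHOLD:
        print(f"{name}: {score} -- publish")
    else:
        print(f"{name}: {score} -- revise")
</pre>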
<h2>2. Use Targeted User Evidence, Not General Feedback</h2>
<p>Validation evidence should be <strong>purpose-specific</strong>. Fitness for purpose is not a matter of whether users <em>like</em> a translation. Instead, concrete evidence can come from users:</p>
<ul>
<li>Completing intended tasks,</li>
<li>Understanding key messages without the need for clarification,</li>
<li>Avoiding risk points (legal, medical, safety) based on content.</li>
</ul>
<p>Task success is often more informative than subjective preference—and Likert-scale preferences have been shown to be ill-aligned with task success metrics.</p>
<h2>3. Exploit Disagreement as a Validation Signal</h2>
<p>Evaluator disagreement is often treated as noise. In validation, it should be treated as data. Track inter-rater disagreement on high-impact dimensions and recurrent disputes on the same error types. Persistent disagreement may indicate that:</p>
<ul>
<li>Specifications are underspecified, or</li>
<li>The TQE system is not aligned with real decision criteria, undermining confidence in every quality decision it informs.</li>
</ul>
<h2>4. Test the TQE System Against Known Outcomes</h2>
<p>System validation requires <strong>ground truth</strong>, even if imperfect. Periodically compare TQE outcomes against content that previously triggered user complaints and content that performed well in real-world use.</p>
<p>Analyze false positives (“passed but failed in use”) and false negatives (“failed but worked fine”). Both outcomes become validation findings.</p>
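<p>A minimal sketch of that comparison, with invented records: each entry pairs a past TQE verdict with what actually happened after release.</p>
<pre>
# Compare past TQE verdicts against real-world outcomes. The records
# are invented for illustration.
records = [
    # (TQE said publish?, worked in real-world use?)
    (True, True), (True, False), (True, True), (False, False),
    (True, True), (False, True), (True, False), (True, True),
]

false_positives = sum(passed and not worked for passed, worked in records)
false_negatives = sum(worked and not passed for passed, worked in records)
print("passed but failed in use:", false_positives)  # validation gaps
print("failed but worked fine:", false_negatives)    # over-strict system
</pre>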
<h2>5. Treat Sampling as a Validation Risk</h2>
<p>Sampling decisions introduce uncertainty that verification cannot remove. Unlike manufacturing, where sampled units are ideally identical, translation sampling for quality evaluation draws from heterogeneous content with uneven user impact, where defects are not randomly distributed and consequences vary by context.</p>
<p>For sampled evaluations, explicitly ask:</p>
<ul>
<li>What failure modes could this sample miss?</li>
<li>Is the sample representative of user-critical content?</li>
<li>Would a different sample plausibly change the decision?</li>
</ul>
<p>If the answer is “yes,” additional validation may be warranted.</p>
<h2>6. Revalidate When Context Changes</h2>
<p>Validation is not a one-time activity. Changes in context invalidate old assumptions faster than they invalidate procedures. Trigger revalidation when:</p>
<ul>
<li>Target audiences change</li>
<li>Content type or risk profile shifts</li>
<li>New MT or post-editing processes are introduced</li>
<li>Quality scores drift without corresponding outcome evidence</li>
</ul>
<h2>7. Document Validation as a Rationale, Not a Score</h2>
<p>Validation should support <strong>decision confidence</strong> rather than produce another metric. To keep validation lightweight while making uncertainty explicit, capture validation outcomes as:</p>
<ul>
<li>Short decision rationales.</li>
<li>Assumptions tested and confirmed (or rejected).</li>
<li>Residual risks accepted.</li>
</ul>
<h1>A Layered View of Translation Quality Assurance</h1>
<p>Maintaining a clear separation between verification and validation ensures that 1) compliance is not mistaken for effectiveness; 2) confidence in quality decisions is justified; 3) risk is managed transparently; and 4) standards remain clear.</p>
<p>A robust translation quality framework respects the following chain:</p>
<ol>
<li><strong>Requirements</strong> are about stakeholder needs and expectations</li>
<li><strong>Specifications</strong> are operationalized requirements</li>
<li><strong>Verification (TQE)</strong> is the analytic evaluation of compliance</li>
<li><strong>Scores &amp; Decisions</strong> indicate inferred fitness</li>
<li><strong>Validation</strong> is evidence that inference is reliable</li>
<li><strong>Recalibration</strong> adjusts the specifications and the system to obtain valid and reliable scores</li>
</ol>
<p>Confusing the two may be operationally convenient, but it is conceptually unsound and increasingly risky in complex, high-stakes translation contexts.</p>
<p>Verification tells us whether we did what we said we would do. Validation tells us whether doing so actually works.</p>
<h1>Building Justified Confidence in Quality Decisions</h1>
<p>The separation between verification and validation isn&#8217;t academic—it&#8217;s the difference between measuring activity and measuring outcomes. Organizations that build validation into their quality frameworks gain not just better translations, but justified confidence in their quality decisions under uncertainty.</p>
<p>If your quality scores aren&#8217;t reliably predicting fitness for purpose, the problem isn&#8217;t the translators—it&#8217;s the system. And unlike verification failures, validation failures don&#8217;t announce themselves until correction becomes expensive. The question isn&#8217;t whether your translations comply with specifications. The question is whether your specifications—and the system that measures them—reliably predict what actually matters.</p>
<p>Validation closes the quality evaluation loop. Without it, quality management becomes self-referential, measuring its own compliance rather than stakeholder success.</p>
<p><img loading="lazy" decoding="async" class="alignnone wp-image-2494 size-full" src="https://plainlii.com/wp-content/uploads/2026/01/Val-Ver-1.svg" alt="Table comparing verification and validation at two levels. Rows distinguish the translation product and the TQE system. Columns distinguish verification and validation. Product-level verification concerns conformance to specifications such as terminology and style, while product-level validation concerns fitness for intended use based on user outcomes. TQE system verification concerns correct metric implementation and evaluator consistency, while system validation concerns whether quality scores reliably support correct decisions." width="2400" height="1600" /></p>
<p>Figure 1: Table comparing verification and validation at two levels. Rows distinguish the translation product and the TQE system. Columns distinguish verification and validation. Product-level verification concerns conformance to specifications such as terminology and style, while product-level validation concerns fitness for intended use based on user outcomes. TQE system verification concerns correct metric implementation and evaluator consistency, while system validation concerns whether quality scores reliably support correct decisions.</p>]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>A New Year Wish: Clarity</title>
		<link>https://plainlii.com/es/2025/12/30/a-new-year-wish-clarity/</link>
		
		<dc:creator><![CDATA[romina@plainlii.com]]></dc:creator>
		<pubDate>Tue, 30 Dec 2025 22:20:55 +0000</pubDate>
				<category><![CDATA[Uncategorized]]></category>
		<guid isPermaLink="false">https://plainlii.com/?p=2472</guid>

					<description><![CDATA[A New Year Wish: Clarity At the start of every new year, we talk about resolutions, new initiatives, new tools, new ways of working. But after years of working with writers, leaders, and public-facing organizations, I’ve learned that progress rarely comes from adding more. It comes from making things clearer. So my New Year wish—for [&#8230;]]]></description>
										<content:encoded><![CDATA[<h1>A New Year Wish: Clarity</h1>
<p>At the start of every new year, we talk about resolutions, new initiatives, new tools, new ways of working. But after years of working with writers, leaders, and public-facing organizations, I’ve learned that progress rarely comes from adding more.</p>
<p>It comes from making things clearer.</p>
<p>So my New Year wish—for our readers, our clients, and our teams—is simple: Clarity. Clarity in how we write. Clarity in how we lead. Clarity in how people experience the systems we design.</p>
<p>I’ve seen firsthand how unclear communication slows down good work. Policies that are technically correct but difficult to use. Forms that ask too much, too fast. Guidance built on assumptions instead of meeting people where they are. None of this comes from carelessness. It usually comes from expertise that hasn’t yet been translated into something usable. That’s where clarity matters most.</p>
<p>Clarity isn’t about compromising meaning. It’s about making relevant meaning visible. It’s about anticipating questions, removing friction, and recognizing that people are often reading for action under pressure—while applying for services, meeting deadlines, or trying to make the right decision.</p>
<p>When communication is clear:</p>
<ul>
<li>People act with confidence instead of hesitation.</li>
<li>Errors and follow-up questions decrease.</li>
<li>Staff spend less time explaining and more time serving.</li>
<li>Trust builds—not through slogans, but through shared experience.</li>
</ul>
<p>This is why I believe clarity is both a strategic and a human choice. For writers, clarity is an act of craft and care. For leaders, it’s an investment in alignment and efficiency. For clients and communities, it’s the difference between moving forward and getting stuck.</p>
<p>Plain language is often misunderstood as a writing style or a final editorial pass. In reality, it is a strategic tool that shapes how effectively an organization functions.</p>
<p>When plain language is applied early, it clarifies priorities, exposes gaps in thinking, and fosters alignment across teams. If a process can’t be explained clearly, it’s often because roles, decisions, or expectations aren’t clear themselves. Plain language makes those issues visible and can help correct them before they show up as errors, delays, or confusion for the people we serve.</p>
<p><img loading="lazy" decoding="async" class="wp-image-2474  aligncenter" src="https://plainlii.com/wp-content/uploads/2025/12/clarity-3-300x200.png" alt="Illustration of a lightbulb and pencil emerging from a gear, representing clarity of thought, intentional design, and clear writing working together." width="326" height="217" srcset="https://plainlii.com/wp-content/uploads/2025/12/clarity-3-300x200.png 300w, https://plainlii.com/wp-content/uploads/2025/12/clarity-3-1024x683.png 1024w, https://plainlii.com/wp-content/uploads/2025/12/clarity-3-768x512.png 768w, https://plainlii.com/wp-content/uploads/2025/12/clarity-3-1536x1024.png 1536w, https://plainlii.com/wp-content/uploads/2025/12/clarity-3-2048x1365.png 2048w, https://plainlii.com/wp-content/uploads/2025/12/clarity-3-18x12.png 18w" sizes="(max-width: 326px) 100vw, 326px" /></p>
<p style="text-align: center;" data-start="243" data-end="410"><em data-start="259" data-end="410">A lightbulb and pencil emerging from a gear represent clarity of thought, intentional design, and clear writing working together.</em></p>
<p>Used strategically, plain language:</p>
<ul>
<li>Reduces risk by minimizing misinterpretation and rework</li>
<li>Supports consistency across departments and channels</li>
<li>Strengthens trust by making systems more transparent and navigable</li>
<li>Speeds decision-making by clarifying actions, roles, and expectations</li>
<li>Improves operational efficiency by reducing reliance on workarounds and supporting process documentation</li>
</ul>
<p>This is why plain language belongs at the planning table, not just in editing cycles. It helps organizations design communication and processes that work together, instead of asking writing to compensate for unclear systems.</p>
<p>When leaders treat plain language as strategy, clarity becomes scalable. It’s no longer dependent on individual writers or one-off revisions—it becomes part of how work gets done.</p>
<p>As we move into this new year, I invite us to pause before publishing, launching, or rolling out the next “update” and ask:</p>
<ul>
<li>What does someone need to do after reading this?</li>
<li>What in the text might confuse or overwhelm them?</li>
<li>What in the process or system behind the text might confuse or overwhelm them?</li>
<li>How can we make the text, process, and system easier to understand and use?</li>
</ul>
<p>Considering the alignment between communication and processes has a compounding effect.</p>
<p>My hope for the year ahead is that we choose clarity not as a final polish, but as a starting point. Clarity starts with how we think and design processes, not how we edit sentences at the end. When processes are designed for the people who use them and writing about the processes is clear, everyone benefits.</p>
<p>Here’s to a year of fewer assumptions, better understanding, and work that moves us forward—clearly.</p>]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>Plain Language for Knowledge Management: From Clear Documentation to Operational Intelligence</title>
		<link>https://plainlii.com/es/2025/12/16/plain-language-knowledge-infrastructure/</link>
		
		<dc:creator><![CDATA[newemage]]></dc:creator>
		<pubDate>Tue, 16 Dec 2025 01:42:55 +0000</pubDate>
				<category><![CDATA[Uncategorized]]></category>
		<guid isPermaLink="false">https://plainlii.com/?p=2456</guid>

					<description><![CDATA[Plain Language for Knowledge Management: From Clear Documentation to Operational Intelligence Clear Communication Can Formalize Operational Intelligence—Especially in Today’s Distributed Teams “The path forward requires acknowledging a hard truth: you cannot manage what you do not understand, and you cannot understand what you have not bothered to document and internalize in knowledge management systems.”  Jessica [&#8230;]]]></description>
										<content:encoded><![CDATA[<h1>Plain Language for Knowledge Management: From Clear Documentation to Operational Intelligence</h1>
<p>Clear Communication Can Formalize Operational Intelligence—Especially in Today’s Distributed Teams</p>
<p><span style="font-size: 16px; font-style: normal; font-weight: 400;">“The path forward requires acknowledging a hard truth: you cannot manage what you do not understand, and you cannot understand what you have not bothered to document and internalize in knowledge management systems.” </span></p>
<p><a href="https://jessicatalisman.substack.com/p/process-knowledge-management-part-c45" target="_blank" rel="noopener">Jessica Talisman</a></p>
<p>Plain language is often described as communications refinement: shorter sentences, simpler words, fewer acronyms. That framing undersells its strategic value. When applied to process documentation, plain language functions as knowledge infrastructure—a way to surface, formalize, and transfer operational intelligence that would otherwise remain implicit, obscured, fragmented, or lost.</p>
<p>This is particularly true in offshore and distributed operations, where undocumented assumptions and “tribal knowledge”—unwritten know-how—quietly become sources of risk and inefficiency.</p>
<h2>The Real Problem: Unclear Knowledge, Not Just Unclear Writing</h2>
<p>Most organizations treat process documentation as a compliance artifact—something produced to satisfy ISO requirements or formulaic auditing steps. As a result, procedures often under-describe what to do and avoid explaining why, when, or how decisions are actually made.</p>
<p>In offshore and distributed contexts, this gap is amplified. Onshore and in-person teams rely on context built through proximity, informal conversations, and shared history. Offshore and remote teams inherit the tasks, but not the tacit knowledge and hallway troubleshooting that make those tasks successful. The result is predictable:</p>
<ul>
<li>Repeated clarification requests across time zones</li>
<li>Over-reliance on senior staff to interpret intent</li>
<li>Inconsistent outcomes masked as “execution issues”</li>
<li>Slow onboarding and fragile continuity when people leave</li>
</ul>
<p>These are not communication failures. They are failures of knowledge capture.</p>
<figure id="attachment_2466" aria-describedby="caption-attachment-2466" style="width: 463px" class="wp-caption aligncenter"><img loading="lazy" decoding="async" class="wp-image-2466" src="https://plainlii.com/wp-content/uploads/2025/12/iceberg-1-300x200.png" alt="iceberg representing explicit knowledge as the visible part and implicit knowledge as the invisible part of knowledge" width="463" height="308" srcset="https://plainlii.com/wp-content/uploads/2025/12/iceberg-1-300x200.png 300w, https://plainlii.com/wp-content/uploads/2025/12/iceberg-1-1024x683.png 1024w, https://plainlii.com/wp-content/uploads/2025/12/iceberg-1-768x512.png 768w, https://plainlii.com/wp-content/uploads/2025/12/iceberg-1-1536x1024.png 1536w, https://plainlii.com/wp-content/uploads/2025/12/iceberg-1-2048x1365.png 2048w, https://plainlii.com/wp-content/uploads/2025/12/iceberg-1-18x12.png 18w" sizes="(max-width: 463px) 100vw, 463px" /><figcaption id="caption-attachment-2466" class="wp-caption-text">Explicit knowledge is often only a fraction of the organizational knowledge</figcaption></figure>
<h2>Plain Language as a Tool for Making the Implicit Visible</h2>
<p>Applied rigorously, plain language forces an organization to articulate what it actually knows—not what it assumes people will infer. The act of writing clearly exposes gaps between documented processes and lived reality.</p>
<p>Specifically, plain language accelerates procedural knowledge formalization by requiring teams to:</p>
<ul>
<li>Break work into discrete, observable steps</li>
<li>Identify decision points, conditions, and exceptions</li>
<li>Make assumptions explicit rather than implied</li>
<li>Distinguish between rules, guidance, and judgment calls</li>
<li>Use consistent structures and terminology across procedures</li>
</ul>
<p>In other words, plain language reverse-engineers expertise. It extracts what experienced staff “just know” and makes it transferable.</p>
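<p>To make this concrete, here is a minimal Python sketch of what &#8220;making decision points explicit&#8221; can look like when a procedure is captured as structured data rather than prose. The step names, thresholds, and fields are hypothetical illustrations, not a Plainlii tool:</p>
<pre><code>from dataclasses import dataclass, field

@dataclass
class Step:
    action: str                      # the discrete, observable step
    rule: str = ""                   # a hard rule, if one exists
    judgment_call: str = ""          # where expertise is genuinely applied
    exceptions: list = field(default_factory=list)

procedure = [
    Step(action="Log the refund request in the tracking sheet",
         rule="Must be logged within one business day"),
    Step(action="Review the invoice amount",
         rule="Approve automatically if under $200",
         judgment_call="Between $200 and $1,000, weigh customer history",
         exceptions=["Escalate disputed invoices to the team lead"]),
]

for step in procedure:
    print(f"- {step.action}")
    if step.judgment_call:
        print(f"  judgment: {step.judgment_call}")
</code></pre>
<p>Writing the <code>judgment_call</code> field forces exactly the conversation most procedures skip: where does the rule end and the expertise begin?</p>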
<h3>A quick story: The Secretary Who Took the System With Her</h3>
<p>A common knowledge-management anecdote tells of an office where a long-serving secretary maintained a flawless filing system. Documents were always easy to retrieve—until she left. Overnight, retrieval became nearly impossible. The files were still there, but the logic behind them was not.</p>
<p>This story endures because it captures a universal organizational risk: processes can appear stable while being fundamentally non-transferable. When expertise remains implicit, the system walks out the door with the expert.</p>
<p>Plain language addresses this failure mode by forcing organizations to make their reasoning explicit—so processes remain usable even when people move on.</p>
<h2>Operational Benefits for Teams</h2>
<p>For all teams, and especially for distributed teams, clear, plain-language process documentation delivers concrete operational advantages:</p>
<ul>
<li>Reduced dependency on synchronous communication. Teams can execute independently without waiting for clarification calls or Slack threads.</li>
<li>Faster, more reliable onboarding. New hires learn the process as it actually works, not as it is informally explained.</li>
<li>Clearer accountability. Documentation defines what “done” means, reducing ambiguity and rework.</li>
<li>Continuous improvement from the front line. When processes are intelligible, teams can identify inefficiencies themselves rather than simply executing flawed workflows.</li>
</ul>
<p>The shift is subtle but significant: <a href="https://www.aims-international.org/AIMSijm/papers/19-1-2.pdf" target="_blank" rel="noopener">hollow</a>, modular, and virtual organizations can rebuild the link to core processes.</p>
<h2>Where the RAISE™ Framework Becomes Operational</h2>
<p>This is where a structured framework such as <a href="https://plainlii.com/es/2025/12/04/plain-language-framework-five-principles/">RAISE™</a> moves plain language beyond generic training and into operational design:</p>
<ul>
<li><strong>Relevance</strong><br />
Does the process capture what people actually need to know to do the work, or just what the organization thinks should be documented?</li>
<li><strong>Access</strong><br />
Can someone in Manila or Bangalore execute the process without relying on informal escalation to a head office?</li>
<li><strong>Intelligibility</strong><br />
Are decision points defined clearly enough that two people would reach the same conclusion?</li>
<li><strong>Suitability</strong><br />
Does the documentation reflect how the work truly flows for users, rather than how it was imagined during design?</li>
<li><strong>Efficacy</strong><br />
Can the organization measure whether clearer processes reduced errors, questions, or rework?</li>
</ul>
<p>Using plain language within this framework turns documentation into a testable operational asset rather than static text.</p>
<h3>Complaints and Feedback as Diagnostic Signals</h3>
<p>One underused input into this work is complaint and feedback language. Complaints are rarely just emotional reactions; they often point directly to where documented processes diverge from reality. When people say, “This step doesn’t make sense,” or “We always have to ask for clarification,” they are identifying knowledge gaps.</p>
<p>Analyzed systematically, this type of language becomes a diagnostic tool for process improvement—highlighting where assumptions are unstated, decision logic is missing, or responsibilities are unclear.</p>
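<p>As a rough illustration (not a production taxonomy), a first pass at mining complaint language can be as simple as counting &#8220;clarification signals&#8221; per process step. The phrases and feedback entries below are invented for the sketch:</p>
<pre><code>from collections import Counter

SIGNALS = ["doesn't make sense", "had to ask", "unclear", "not sure which"]

feedback = [
    ("step-3", "Step 3 doesn't make sense without the approval matrix"),
    ("step-3", "We always had to ask who signs off"),
    ("step-5", "Unclear which template applies to EU clients"),
]

gaps = Counter()
for step, text in feedback:
    if any(signal in text.lower() for signal in SIGNALS):
        gaps[step] += 1

print(gaps.most_common())  # steps with the most clarification signals surface first
</code></pre>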
<h2>A Different Kind of Deliverable</h2>
<p>Positioned as a bridge toward procedural knowledge, the outcome of plain language work is not simply &#8220;clearer documents,&#8221; but deliverables that turn clarity into organizational capability:</p>
<ul>
<li>Decision maps showing where expertise is actually applied</li>
<li>Knowledge-gap analyses that identify undocumented assumptions</li>
<li>Standardized procedures that function across geographies and experience levels</li>
<li>Measurable reductions in clarification requests, escalations, and rework</li>
</ul>
<p>This reframes plain language as organizational infrastructure—a way to preserve institutional knowledge, enable scale, and reduce operational risk.</p>
<h2>From Communication Polish to Operational Intelligence</h2>
<p>When plain language is treated as a strategic capability, it becomes a lever for knowledge management, helping organizations capture what they know, make it usable across boundaries, and ensure that expertise does not remain locked in individual heads or local contexts.</p>
<p>In distributed operations, that shift is not optional. It is foundational.</p>]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>Plain Language Has No Political Color Because It Supports Everyone</title>
		<link>https://plainlii.com/es/2025/12/04/plain-language-for-everyone/</link>
		
		<dc:creator><![CDATA[newemage]]></dc:creator>
		<pubdate>Thu, 04 Dec 2025 21:52:14 +0000</pubdate>
				<category><![CDATA[Uncategorized]]></category>
		<guid ispermalink="false">https://plainlii.com/?p=2343</guid>

					<description><![CDATA[Plain Language Has No Political Color — And Europe Is Proving It Today marks an exciting moment in the trajectory of plain language: the European Parliament has officially renamed its translation service the Directorate-General for Translation and Clear Language. Beyond the institutional news, this change reflects something Plainlii has championed from the start: plain language [&#8230;]]]></description>
										<content:encoded><![CDATA[<h1><strong>Plain Language Has No Political Color — And Europe Is Proving It</strong></h1>
<p><strong>Today marks an exciting moment in the trajectory of plain language: the European Parliament has officially renamed its translation service the <em>Directorate-General for Translation and Clear Language</em>.</strong></p>
<p>Beyond the institutional news, this change reflects something Plainlii has championed from the start: <strong>plain language is not a political stance — it’s a democratic essential</strong>.</p>
<h2><strong>A Growing Movement Toward Clarity</strong></h2>
<p>Across governments and organizations, plain language is no longer seen as an optional communication style or a kindness reserved for newcomers or vulnerable groups. It is becoming a <strong>pillar of democratic participation</strong>, an essential part of public trust, and a practical tool for good governance and effective business.</p>
<p>The European Parliament’s decision to explicitly include <em>Clear Language</em> in its official name marks a milestone in this trajectory — a sign that clarity, accessibility, and transparency are not afterthoughts but core responsibilities.</p>
<h2><strong>Plain Language Has No Political Color</strong></h2>
<p>Plain Language has no political color. It’s a tool that serves everyone — regardless of party affiliation or ideology.</p>
<p>When government agencies, organizations, and businesses communicate clearly, they build trust across the political spectrum. Plain language removes barriers that prevent people from understanding their rights, fulfilling obligations, and taking part in civic life.</p>
<p>In the private sector, clear communication:</p>
<ul>
<li>builds customer loyalty</li>
<li>prevents disputes</li>
<li>reduces operational complexity</li>
<li>and saves time and money</li>
</ul>
<p>Clarity benefits:</p>
<ul>
<li><strong>Conservatives</strong> who want efficient, accountable institutions</li>
<li><strong>Progressives</strong> advocating for accessible services</li>
<li><strong>Independents</strong> seeking transparency in decision-making</li>
<li><strong>All people</strong> who deserve to understand information that affects their lives</li>
</ul>
<p>Plain language works because it belongs to no one — and everyone.</p>
<h2><strong>Why the European Parliament’s Move Matters</strong></h2>
<p>The Parliament’s new name signals that plain language is becoming embedded in the very structure of European democracy.</p>
<p>It acknowledges that:</p>
<ul>
<li>translation and comprehension must go hand in hand</li>
<li>multilingual democracy depends on accessibility</li>
<li>citizens deserve information they can understand immediately</li>
<li>clarity is essential for trust, equity, and democratic legitimacy</li>
</ul>
<p>This isn’t branding. It’s an institutional shift — and a powerful example for governments worldwide.</p>
<figure id="attachment_2344" aria-describedby="caption-attachment-2344" style="width: 300px" class="wp-caption aligncenter"><img loading="lazy" decoding="async" class="size-medium wp-image-2344" src="https://plainlii.com/wp-content/uploads/2025/12/pl-all-300x274.jpg" alt="Plain Language benfits all Two speech bubbles in red and blue intersect." width="300" height="274" srcset="https://plainlii.com/wp-content/uploads/2025/12/pl-all-300x274.jpg 300w, https://plainlii.com/wp-content/uploads/2025/12/pl-all-768x701.jpg 768w, https://plainlii.com/wp-content/uploads/2025/12/pl-all-13x12.jpg 13w, https://plainlii.com/wp-content/uploads/2025/12/pl-all.jpg 931w" sizes="(max-width: 300px) 100vw, 300px" /><figcaption id="caption-attachment-2344" class="wp-caption-text">Plain Language has no political color. It&#8217;s a tool that serves everyone—regardless of party affiliation or ideology.</figcaption></figure>
<h2><strong>Pomp and Gobbledygook Close Doors. Plain Language Opens Them.</strong></h2>
<p>Pompous and dense writing excludes rather than informs, and at best gives &#8220;the illusion of having learned&#8221; (see the Dr. Fox experiment: <a href="http://romanfrigg.org/wp-content/uploads/links/Dr_Fox_Lecture.pdf" target="_blank" rel="noopener">paper</a> and <a href="https://www.youtube.com/watch?v=RcxW6nrWwtc" target="_blank" rel="noopener">video</a>). Bad writing allows bad actors to hide harmful terms and gives pedants a space to mask their ignorance. This happens both in public-facing communication and in technical documents in which writing becomes unnecessarily complicated.</p>
<p><strong>Technical complexity is not the enemy. Unnecessary opacity is.</strong></p>
<p>Subject-matter experts need precision, and focused fields genuinely require specialized terminology. But too often, technical communication becomes dense by habit, not necessity — written to impress peers rather than communicate. Internally, this breeds confusion, slows decision-making, and creates silos where only a few people truly understand what’s going on.</p>
<p>Meanwhile, outside of expert circles, people sometimes use jargon-laden lay language — not because it’s clearer, but because it <em>sounds</em> authoritative. The effect is the same: it shuts people out.</p>
<p>Plain language does the opposite:</p>
<ul>
<li><strong>it fosters trust</strong></li>
<li><strong>it promotes accountability</strong></li>
<li><strong>it strengthens civic participation</strong></li>
<li><strong>it empowers both majority and minority voices</strong></li>
<li><strong>it supports informed decision-making inside organizations and industries</strong></li>
<li><strong>it helps experts communicate accurately <em>and</em> accessibly without sacrificing precision</strong></li>
</ul>
<p>Good governance and good business both require informed participants — whether those participants are citizens, customers, colleagues, or technical stakeholders.</p>
<p><strong>Plain language is how we get there — together.</strong></p>
<h2><strong>Clarity Is the Future</strong></h2>
<p>At Plainlii, we see the European Parliament’s decision as a sign of what’s ahead: a world where clarity is expected, not exceptional — where people are treated with respect through communication they can understand.</p>
<p>Whether someone is a newcomer or a lifelong resident, multilingual or monolingual, highly educated or learning as they go — <strong>everyone deserves clarity</strong>.</p>
<p>Plain language isn’t political. It isn’t ideological. It’s fundamental.</p>
<p>Good communication doesn’t divide; it connects.</p>
<p>Plain language is about thinking clearly and expressing ideas in a fitting style for the audience. Above all, it’s a commitment to <strong>respect and shared understanding in business and civic life</strong>.</p>]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>GTC 2025 AI Conference: at the cusp of synthetic evolution</title>
		<link>https://plainlii.com/es/2025/03/24/nvidiagtc25aiconferencewhatsnextinai/</link>
					<comments>https://plainlii.com/es/2025/03/24/nvidiagtc25aiconferencewhatsnextinai/#respond</comments>
		
		<dc:creator><![CDATA[newemage]]></dc:creator>
		<pubdate>Mon, 24 Mar 2025 18:59:38 +0000</pubdate>
				<category><![CDATA[Uncategorized]]></category>
		<guid ispermalink="false">https://plainlii.com/?p=1888</guid>

					<description><![CDATA[GTC 2025 Conference Summary Wow, what an event! The NVIDIA GTC 2025 conference, held March 17-21 in San Jose, showcased the latest advancements in AI, accelerated computing, and their applications across a range of industries. This year&#8217;s event, described as the &#8220;Super Bowl of AI,&#8221; highlighted the remarkable progress in generative AI, agentic AI, [&#8230;]]]></description>
										<content:encoded><![CDATA[<h1><strong>GTC 2025 Conference Summary</strong></h1>
<p>Wow, what an event! The NVIDIA GTC 2025 conference, held March 17-21 in San Jose, showcased the latest advancements in AI, accelerated computing, and their applications across a range of industries. This year&#8217;s event, described as the &#8220;Super Bowl of AI,&#8221; highlighted the remarkable progress in generative AI, agentic AI, and physical AI, with a particular focus on tokens as the foundation of inference. Jensen Huang, NVIDIA’s CEO, even said he’s going to have to grow San Jose to keep hosting an ever-growing conference! I was particularly interested in the idea of tokens now detached (in a way) from language or images, synthetic data generation, AI for medical and materials research, and robotics for everyday tasks.</p>
<figure id="attachment_1889" aria-describedby="caption-attachment-1889" style="width: 505px" class="wp-caption alignnone"><img loading="lazy" decoding="async" class="wp-image-1889" src="https://plainlii.com/wp-content/uploads/2025/03/IMG_4629-300x138.png" alt="Huang shares a problem that can only be solved by AI or a mother in law:I need to seat 7 people around a table at my wedding reception, but my parents andin-laws should not sit next to each other. Also, my wife insists we look better in pictures when she's on my left, but I need to sit next to my best man. How do I seat us on a round table? But then, what happens if we invited our pastor to sit with us?" width="505" height="232" /><figcaption id="caption-attachment-1889" class="wp-caption-text">Huang at Keynote: he shares a problem that can only be solved by AI or a mother in law: I need to seat 7 people around a table at my wedding reception, but my parents and in-laws should not sit next to each other. Also, my wife insists we look better in pictures when she&#8217;s on my left, but I need to sit next to my best man. How do I seat us on a round table? But then, what happens if we invited our pastor to sit with us?</figcaption></figure>
<h1><strong>Tokens: The Building Blocks of AI</strong></h1>
<p>So let’s start with the new perspective on tokens as the fundamental building blocks of AI. In his keynote, Huang emphasized how tokens have &#8220;opened a new frontier&#8221; in AI development. Tokens transform raw data into meaningful insights, enabling AI systems to generate responses, analyze scientific data, and reason through complex problems. And before we get into a discussion about reasoning, no it’s not human reasoning, but it is reasoning nonetheless—stay tuned.</p>
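<p>To see tokens concretely, here is a quick sketch using OpenAI&#8217;s open-source tiktoken library (installed with <code>pip install tiktoken</code>). Token counts vary by tokenizer; &#8220;cl100k_base&#8221; is just one common encoding, and this is an illustration rather than how NVIDIA&#8217;s stack tokenizes:</p>
<pre><code>import tiktoken  # pip install tiktoken

enc = tiktoken.get_encoding("cl100k_base")
text = "Tokens transform raw data into meaningful insights."
ids = enc.encode(text)
print(len(ids), "tokens:", ids)
print([enc.decode([i]) for i in ids])  # one decoded piece of text per token
</code></pre>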
<p>Huang addressed inference as the process of generating tokens and described it as &#8220;the ultimate extreme computing problem.&#8221; The ability to produce tokens efficiently is crucial for AI systems&#8217; responsiveness and utility. No doubt this is important to the revenue equation, which Huang was quick to point out: &#8220;Inference is token generation by a factory, and a factory is revenue and profit generating.&#8221; But this is interesting from the perspective of taking evolution into our own hands. In fact, this can be summarized by something Eric Steinberg, from Magic.dev, mentioned in the Beyond Prediction session: “I think reinforcement learning is good… It’s very hard to build superintelligence by training on data generated by humans.” (Yes, think on that for a bit.)</p>
<p>The computation requirements for token generation have increased dramatically with the advent of agentic AI and reasoning capabilities. Current models are generating 100 times more tokens than anticipated just a year ago. This explosion in token generation necessitates more powerful computing infrastructure to maintain AI system responsiveness.</p>
<p>For example, Huang demonstrated how a reasoning model generated over 8,600 tokens to solve a complex wedding seating problem, compared to around 400 tokens for a simple one-shot response from a traditional language model. This increase in tokens, combined with the need for faster computation, has driven the need for more sophisticated AI infrastructure.</p>
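<p>For a sense of scale, the same seating puzzle yields to classical brute-force constraint search in a few lines of Python. The guest names and exact constraint readings below are loose reconstructions from the slide, so treat this as a sketch:</p>
<pre><code>from itertools import permutations

guests = ["groom", "bride", "best_man", "parent", "in_law", "friend1", "friend2"]

def valid(seating):
    n = len(seating)
    pos = {g: i for i, g in enumerate(seating)}
    def adjacent(a, b):
        return (pos[a] - pos[b]) % n in (1, n - 1)
    return (not adjacent("parent", "in_law")            # feuding sides sit apart
            and (pos["bride"] - pos["groom"]) % n == 1  # bride on the groom's left (one reading)
            and adjacent("groom", "best_man"))          # groom next to his best man

# Pin the groom to seat 0 so rotations of the round table aren't recounted.
solutions = [("groom",) + p for p in permutations(guests[1:]) if valid(("groom",) + p)]
print(len(solutions), "valid arrangements; e.g.", solutions[0])
</code></pre>
<p>The contrast is the point: the solver needed the problem hand-encoded, while the reasoning model spent its tokens discovering the structure on its own.</p>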
<p>To address these challenges, NVIDIA introduced NVIDIA Dynamo, described as &#8220;the operating system of an AI factory.&#8221; Dynamo manages the complex operations involved in token generation, including pipeline parallel, tensor parallel, expert parallel operations, and workload management. As an open-source tool, Dynamo aims to optimize token generation across different workloads and configurations.</p>
<h1><strong>Synthetic Data Generation and the Path to Superintelligence</strong></h1>
<p>A significant portion of the conference focused on how synthetic data generation is paving the road to more advanced AI systems. As Huang explained, training AI models faces two fundamental challenges: obtaining sufficient high-quality data and overcoming the limitations of human-in-the-loop training.</p>
<p>NVIDIA showcased several approaches to synthetic data generation that address these challenges:</p>
<h2><strong>Reinforcement Learning with Verified Results</strong></h2>
<p>One breakthrough highlighted at the conference is using reinforcement learning with verified results to train AI models. This approach leverages existing knowledge (like mathematical principles or puzzle solutions) to generate millions of examples for training. AI models are then rewarded as they improve at solving problems step by step.</p>
<p>&#8220;We have hundreds of these problem spaces where we can generate millions of different examples and give the AI hundreds of chances to solve it step by step,&#8221; Huang explained. This technique has enabled the generation of trillions of training tokens, vastly exceeding what would be possible with human-labeled data.</p>
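<p>Here is a deliberately trivial sketch of the pattern: candidate answers are scored by an automatic verifier instead of a human label. The task (integer addition) and the noisy &#8220;policy&#8221; are stand-ins, and the learning update itself is omitted:</p>
<pre><code>import random

def verifier(problem, answer):
    a, b = problem
    return 1.0 if answer == a + b else 0.0  # reward from checking, not from a human label

def policy(problem):
    a, b = problem
    return a + b + random.choice([-1, 0, 0, 0, 1])  # an imperfect stand-in "model"

problems = [(random.randint(0, 99), random.randint(0, 99)) for _ in range(10_000)]
rewards = [verifier(p, policy(p)) for p in problems]
print(f"mean verified reward: {sum(rewards) / len(rewards):.2f}")  # ~0.60 here
</code></pre>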
<h2><strong>Digital Twins and Simulation for AV Development</strong></h2>
<p>NVIDIA demonstrated how Omniverse and Cosmos platforms are accelerating autonomous vehicle (AV) development through synthetic data generation. The process involves model distillation, closed-loop training, and 3D synthetic data generation.</p>
<p>In model distillation, knowledge transfers from a slower, more intelligent &#8220;teacher&#8221; model to a smaller, faster &#8220;student&#8221; model that can run efficiently in a vehicle. For closed-loop training, log data is transformed into 3D scenes for simulated driving scenarios, allowing the testing of trajectory generation capabilities without real-world risks.</p>
<p>The 3D synthetic data generation component enhances AV adaptability by creating detailed driving environments from log data, maps, and images. This approach provides diversity in training scenarios while closing the sim-to-real gap, as one presenter summarized: &#8220;Use AI to recreate AI.&#8221;</p>
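<p>For readers who want the distillation step in code, here is a hedged PyTorch sketch of the standard Hinton-style loss: a large &#8220;teacher&#8221; supplies soft targets that a smaller &#8220;student&#8221; learns to match. The temperature and toy tensors are assumptions for illustration, not NVIDIA&#8217;s AV training recipe:</p>
<pre><code>import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, T=2.0):
    soft_targets = F.softmax(teacher_logits / T, dim=-1)
    log_probs = F.log_softmax(student_logits / T, dim=-1)
    # Standard Hinton-style KD: KL between softened distributions, scaled by T^2.
    return F.kl_div(log_probs, soft_targets, reduction="batchmean") * T * T

teacher_logits = torch.randn(8, 100)               # batch of 8, 100-way output
student_logits = torch.randn(8, 100, requires_grad=True)
loss = distillation_loss(student_logits, teacher_logits)
loss.backward()                                     # gradients flow to the student only
print(float(loss))
</code></pre>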
<h2><strong>Scaling Laws and the Computing Requirements</strong></h2>
<p>The conference emphasized how synthetic data generation has changed our understanding of AI scaling laws. NVIDIA revealed that computation requirements for today&#8217;s agentic AI models are &#8220;easily 100 times more than we thought we needed this time last year.&#8221;</p>
<p>This increased demand is driven by two factors: AI models generating more tokens for reasoning (up to 100 times more) and the need for faster computation to maintain responsiveness. NVIDIA&#8217;s projections showed that Blackwell GPU shipments in their first year will significantly exceed the peak year of Hopper GPU shipments, reflecting the industry&#8217;s response to these demands.</p>
<h1><strong>AI for Medical and Materials Research</strong></h1>
<p>GTC 2025 showcased numerous applications of AI in advancing medical research and materials science, highlighting how computational methods are accelerating discoveries in these fields.</p>
<h2><strong>Digital Twins in Biomedicine</strong></h2>
<p>Professor Peter Coveney from University College London presented his work on computational biomedicine, particularly focusing on digital twins for personalized medicine. While digital twins are still nascent in biomedicine, they offer the potential to evaluate individuals based on their own data rather than population statistics.</p>
<p>&#8220;The issue is to get away from using AI in a population sense and claim that&#8217;s dealing with personalized medicine,&#8221; Coveney explained. &#8220;What it actually amounts to is building a large set of data on other people and using it to predict how you&#8217;re going to behave, which clearly isn&#8217;t personalized medicine.&#8221;</p>
<p>His team has developed HemeLB, a code that can model and simulate the entire human vasculature from head to toe. Running on supercomputers with up to 80,000 GPUs, this application enables unprecedented detail in cardiovascular simulation. By connecting organ models like the heart to the vasculature, researchers can study the coupling between blood flow and vital organs.</p>
<p>Coveney emphasized that GPU computing has transformed what&#8217;s possible in computational biomedicine by enabling high-resolution simulations that run fast enough for interactive use. Additionally, the integration of AI techniques with physics-based models helps accelerate simulations while maintaining scientific accuracy.</p>
<h2 class="text-lg font-bold text-text-200 mt-1 -mb-1.5">Human Brain Mapping at Cellular Resolution</h2>
<p class="whitespace-pre-wrap break-words">Professor Mohanasankar Sivaprakasam from IIT Madras delivered an impressive talk on his team&#8217;s groundbreaking work imaging and mapping whole human brains at the cellular level. Over the past five years, his Brain Center at IIT Madras has undertaken the ambitious task of understanding the cellular and connectivity structure of the human brain, similar to work previously done by the Allen Brain Institute with mouse brains. Recently, they released the first set of detailed brain image volumes at full cellular resolution, representing the largest collection of open-source human brain data available to researchers worldwide.</p>
<p class="whitespace-pre-wrap break-words">&#8220;We are not only making this data fully public, we are absolutely willing to work with anybody to scale and accelerate this at various levels,&#8221; Sivaprakasam explained. The project has already acquired close to 300 meticulously preserved post-mortem human brains, generating petabytes of ground-truth data. To make this massive dataset accessible, the team has developed AI-powered tools that allow researchers to navigate through brain structures with unprecedented detail. Speaking at 3 AM from his time zone, Sivaprakasam highlighted that this work represents a significant advancement in neuroanatomy, which he noted had somewhat stagnated in recent decades with the advent of MRI technology. The cellular-level detail his team has captured goes far beyond conventional millimeter-resolution imaging, allowing researchers to explore the estimated hundreds of billions of cells in the human brain, including both neurons and support cells. This resource will be invaluable for understanding brain structure and function, with potential applications ranging from studying neurological disorders to developing more sophisticated AI models inspired by human neural architecture.</p>
<figure id="attachment_1892" aria-describedby="caption-attachment-1892" style="width: 300px" class="wp-caption alignnone"><img loading="lazy" decoding="async" class="wp-image-1892 size-medium" src="https://plainlii.com/wp-content/uploads/2025/03/IMG_4498-300x225.jpeg" alt="Slide from session: Over the next 18-24 months, we are releasing the largest set of annotateuhuman brain image volumes across age groups and disease conditions Expanding our current partnerships to scale and accelerate - Cloud storage - Visualization and rendering of petabyte sized 3D volumes - Deep analysis and modeling of 100's of petabytes of primary data at kilobyte level resolution ..!! - Integrate multi-modal, multi-scale neuroscience data with brain histology data - Crowdsourcing Brain Data Annotation for Scientific Breakthroughs OLOG वििभयति कमज DRAS If you are excited to partner, please drop a note to mohan@ee.iitm.ac.in" width="300" height="225" srcset="https://plainlii.com/wp-content/uploads/2025/03/IMG_4498-300x225.jpeg 300w, https://plainlii.com/wp-content/uploads/2025/03/IMG_4498-1024x768.jpeg 1024w, https://plainlii.com/wp-content/uploads/2025/03/IMG_4498-768x576.jpeg 768w, https://plainlii.com/wp-content/uploads/2025/03/IMG_4498-1536x1152.jpeg 1536w, https://plainlii.com/wp-content/uploads/2025/03/IMG_4498-16x12.jpeg 16w, https://plainlii.com/wp-content/uploads/2025/03/IMG_4498.jpeg 2048w" sizes="(max-width: 300px) 100vw, 300px" /><figcaption id="caption-attachment-1892" class="wp-caption-text">Humble brilliance and determination</figcaption></figure>
<h2><strong>Accelerating Drug and Materials Discovery</strong></h2>
<p>The conference featured discussions on AI-accelerated materials discovery, with a focus on batteries and drug development. Chao-Chan Hu of SES AI demonstrated how his company is using AI to revolutionize battery development by mapping a &#8220;molecular universe&#8221; of 100 million potential molecules. Here’s an illustration of the molecular universe they are exploring.</p>
<figure id="attachment_1890" aria-describedby="caption-attachment-1890" style="width: 300px" class="wp-caption alignnone"><img loading="lazy" decoding="async" class="wp-image-1890 size-medium" src="https://plainlii.com/wp-content/uploads/2025/03/IMG_4518-300x225.jpeg" alt="strokes of color in light purple and neon green" width="300" height="225" srcset="https://plainlii.com/wp-content/uploads/2025/03/IMG_4518-300x225.jpeg 300w, https://plainlii.com/wp-content/uploads/2025/03/IMG_4518-1024x768.jpeg 1024w, https://plainlii.com/wp-content/uploads/2025/03/IMG_4518-768x576.jpeg 768w, https://plainlii.com/wp-content/uploads/2025/03/IMG_4518-1536x1152.jpeg 1536w, https://plainlii.com/wp-content/uploads/2025/03/IMG_4518-16x12.jpeg 16w, https://plainlii.com/wp-content/uploads/2025/03/IMG_4518.jpeg 2048w" sizes="(max-width: 300px) 100vw, 300px" /><figcaption id="caption-attachment-1890" class="wp-caption-text">Map of potential battery molecules</figcaption></figure>
<p>Using AI, SES AI reduced computational time for molecular property analysis from over 8,000 years (using CPUs) to just 2 months (using H100 GPUs and AI methods). This acceleration has allowed them to discover new molecules that significantly improve battery performance, enabling higher energy density batteries for electric vehicles and robots.</p>
<p>In drug discovery, presenters highlighted how NVIDIA&#8217;s BioNeMo framework is being used to accelerate research for diseases like amyotrophic lateral sclerosis (ALS). By leveraging generative AI techniques, researchers can reduce the drug discovery phase by up to 50%, creating new opportunities for treating challenging conditions.</p>
<h2><strong>Cell-free DNA and Cancer Diagnosis</strong></h2>
<p>Advances in cell-free DNA analysis for cancer detection were presented as a promising approach for early diagnosis. As tumors die, they shed DNA into the bloodstream, which can be isolated and analyzed using sequencing techniques to identify mutations.</p>
<p>This method allows for more regular monitoring of cancer dynamics, potentially detecting recurrence earlier than traditional methods. Speakers emphasized the importance of making this technology more accessible by developing efficient computational methods for processing the complex genomic data involved.</p>
<h1><strong>Robotics for Everyday Tasks</strong></h1>
<p>GTC 2025 dedicated significant attention to robotics, particularly how physical AI is enabling a new generation of robots that can assist in everyday tasks. Jensen Huang introduced the concept of &#8220;physical AI&#8221; as AI that understands the physical world, including concepts like friction, inertia, cause and effect, and object permanence.</p>
<h2><strong>NVIDIA Isaac Groot N1</strong></h2>
<p>A key announcement was NVIDIA&#8217;s Isaac Groot N1, a generalist foundation model for humanoid robots. Built on synthetic data generation and learning in simulation, Isaac Groot features a dual system architecture for both fast and slow thinking, allowing it to generalize across multiple embodiments and tasks.</p>
<p>The platform utilizes NVIDIA&#8217;s Omniverse and Cosmos technologies to create detailed simulations for robot training. By learning in virtual environments before deployment in the real world, robots can develop advanced capabilities while minimizing physical testing risks.</p>
<h2><strong>Social Robotics and Human-Robot Interaction</strong></h2>
<p>Professor Oya Asadikutan from King&#8217;s College London presented her work on social robotics and human-centered AI. Her research focuses on developing algorithms that enable robots to interact seamlessly with humans and their environment.</p>
<p>&#8220;Human behavior is rich, diverse, and often unpredictable,&#8221; Asadikutan noted. &#8220;To tackle these challenges, we must integrate multiple fields, not just robotics and AI, but also social sciences, such as behavioral psychology and ethics.&#8221;</p>
<p>One project from her lab focuses on teaching robots unspoken social rules, such as not interrupting conversations between humans. Her team developed an egocentric dataset to capture conversational groups from a robot&#8217;s perspective and created a graph neural network-based method for detecting these groups. This approach was integrated with reinforcement learning to enable robots to exhibit advanced social awareness.</p>
<p>Asadikutan emphasized the potential applications of such robots in supporting individuals with limited mobility and reducing anxiety in pediatric care settings. Her work demonstrates how multidisciplinary approaches combining AI, robotics, and social sciences can create more effective and acceptable robotic assistants.</p>
<h2><strong>Disney&#8217;s Robotic Character Platform</strong></h2>
<p>The conference included a session on Disney&#8217;s robotic character platform, which is redefining entertainment robotics from the ground up. Disney is building artist-centric tools that provide creative control of dynamic character motion, enabling the rapid design and deployment of animated robotic characters.</p>
<p>By combining advanced robotics with AI-driven motion planning, Disney aims to create more engaging and realistic character interactions for visitors. This application demonstrates how robotics is extending beyond industrial and healthcare settings into entertainment and storytelling.</p>
<h1><strong>Challenges in AI Data and Training</strong></h1>
<p>Throughout the conference, speakers addressed the fundamental challenges in AI development, particularly regarding data acquisition and training methodologies.</p>
<h2><strong>The Data Problem</strong></h2>
<p>As Jensen Huang highlighted, AI is a data-driven approach that requires digital experience to learn from. However, obtaining sufficient high-quality data remains a significant challenge. Speakers discussed various approaches to addressing this issue:</p>
<ol>
<li><strong>Data curation and preprocessing</strong>: NVIDIA&#8217;s NeMo Curator for non-English languages was presented as a tool for creating high-quality text corpora for languages like Spanish and French, addressing the challenge of limited or imbalanced datasets.</li>
<li><strong>Synthetic data generation</strong>: Beyond autonomous vehicles, synthetic data generation was shown to benefit fields ranging from medical imaging to robotics, creating diverse training scenarios without real-world limitations.</li>
<li><strong>Privacy-preserving techniques</strong>: Federated learning for medical data was discussed as a way to leverage distributed datasets while maintaining privacy, an essential consideration for sensitive data.</li>
</ol>
<h2><strong>Training Without Human in the Loop</strong></h2>
<p>The second major challenge identified was training models without heavy reliance on human feedback. Huang emphasized the need for AI to learn at &#8220;superhuman rates&#8221; that exceed what human supervision can provide.</p>
<p>Reinforcement learning emerged as a key approach to this challenge. By setting up environments where AI can learn through trial and error with automated feedback, models can generate their own training data through exploration. This approach has proven particularly effective in robotics and reasoning tasks.</p>
<p>Chain-of-thought reasoning was highlighted as a significant advance in AI capabilities. Models are now being trained to break down problems into steps, apply different approaches, and verify their own answers. This self-verification process improves reliability while reducing dependence on human evaluation.</p>
<h1><strong>AI Infrastructure and Computing Advancements</strong></h1>
<p>A substantial portion of the conference focused on the infrastructure required to support the next generation of AI models.</p>
<h2><strong>NVIDIA&#8217;s AI Infrastructure Roadmap</strong></h2>
<p>Jensen Huang unveiled NVIDIA&#8217;s roadmap for AI infrastructure, showcasing a systematic approach to scaling up computing capabilities before scaling out. Key announcements included:</p>
<ol>
<li><strong>Blackwell architecture</strong>: Currently in full production, the Blackwell architecture represents a fundamental transition in computer design, with liquid cooling and disaggregated NVLink for improved performance.</li>
<li><strong>Blackwell Ultra</strong>: Coming in the second half of 2025, offering 1.5x more FLOPS, memory, and bandwidth compared to Blackwell.</li>
<li><strong>Vera Rubin</strong>: Planned for the second half of 2026, featuring NVLink 144, a new CPU with twice the performance of Grace, and new GPU, networking, and memory technologies.</li>
<li><strong>Rubin Ultra</strong>: Scheduled for the second half of 2027, with NVLink 576 enabling extreme scale-up capabilities with 15 exaflops of performance and 4.6 petabytes per second of bandwidth.</li>
</ol>
<h2><strong>Energy Efficiency in AI Computing</strong></h2>
<p>Energy efficiency emerged as a critical factor in AI infrastructure development. Huang stated, &#8220;Energy is our most important commodity; everything is related ultimately to energy,&#8221; highlighting how data center revenues are fundamentally power-limited.</p>
<p>NVIDIA demonstrated that their Blackwell architecture is 25 times more efficient than Hopper in terms of tokens per second per megawatt, representing a significant advance in computational efficiency. The introduction of four-bit floating-point precision further improves energy efficiency by reducing the power needed for model operations.</p>
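<p>A back-of-the-envelope sketch shows why tokens per second per megawatt is the metric that matters when revenue is power-limited. Only the 25x ratio comes from the keynote; the baseline throughput and power budget below are made-up placeholders:</p>
<pre><code># Only the 25x Blackwell-vs-Hopper ratio is from the keynote; the rest are placeholders.
POWER_BUDGET_MW = 100                      # assumed fixed data-center power envelope
HOPPER_TOK_S_PER_MW = 1.0e6                # assumed baseline throughput (illustrative)
BLACKWELL_TOK_S_PER_MW = 25 * HOPPER_TOK_S_PER_MW

for name, eff in [("Hopper", HOPPER_TOK_S_PER_MW), ("Blackwell", BLACKWELL_TOK_S_PER_MW)]:
    print(f"{name}: {eff * POWER_BUDGET_MW:.2e} tokens/s within {POWER_BUDGET_MW} MW")
</code></pre>
<p>Under a fixed power envelope, a 25x efficiency gain is simply 25x more sellable tokens, which is why Huang frames energy as the most important commodity.</p>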
<p>The company also unveiled silicon photonic technology based on micro ring resonator modulators to dramatically reduce the power consumption of data center networking. This innovation could save tens of megawatts in large-scale AI deployments by eliminating traditional transceivers.</p>
<h1><strong>In Closing</strong></h1>
<p>GTC 2025 painted a picture of an AI industry at an inflection point, with foundational advances in tokens, synthetic data generation, medical and materials research applications, and robotics. These developments are creating a future where AI can reason, understand the physical world, and assist humans in increasingly sophisticated ways. I was skeptical about this until I learned to see tokens as semantic units in AI&#8211;hence the phrase I coined: &#8220;synthetic evolution.&#8221;</p>
<p>The integration of AI across industries is accelerating, from autonomous vehicles and healthcare to entertainment and enterprise computing. As Jensen Huang noted, &#8220;AI and machine learning have reinvented the entire computing stack,&#8221; fundamentally changing how we approach computation and problem-solving.</p>
<p>While significant challenges remain, particularly in energy efficiency and training methodologies, the technologies showcased at GTC 2025 demonstrate the industry&#8217;s commitment to addressing these obstacles. With continued advances in synthetic data generation, model architecture, and computing infrastructure, the path toward more capable AI systems seems to be taking us to a new frontier.</p>
<p>The conference&#8217;s emphasis on interdisciplinary collaboration—bringing together experts from computer science, medicine, materials science, robotics, and social sciences—highlights the importance of diverse perspectives in advancing AI capabilities. As these fields continue to converge, we can expect even more transformative applications that enhance human potential and address global challenges.</p>
]]></content:encoded>
					
					<wfw:commentrss>https://plainlii.com/es/2025/03/24/nvidiagtc25aiconferencewhatsnextinai/feed/</wfw:commentrss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>Digital Accessibility: WCAG and Section 508</title>
		<link>https://plainlii.com/es/2025/03/01/digital-accessibility-wcag-and-section-508/</link>
					<comments>https://plainlii.com/es/2025/03/01/digital-accessibility-wcag-and-section-508/#respond</comments>
		
		<dc:creator><![CDATA[newemage]]></dc:creator>
		<pubdate>Sat, 01 Mar 2025 01:33:59 +0000</pubdate>
				<category><![CDATA[Uncategorized]]></category>
		<category><![CDATA[accessibility]]></category>
		<guid ispermalink="false">https://plainlii.com/?p=1874</guid>

					<description><![CDATA[Digital Accessibility Blog Digital accessibility means opening doors to digital content to people with disabilities. To us it is a subset of accessibility in the general sense: supporting everyone’s right to access and understand information. Digital accessibility requirements are developed by the World Wide Web Consortium and inform laws and regulations like Section 508 of [&#8230;]]]></description>
										<content:encoded><![CDATA[<h1>Digital Accessibility Blog</h1>
<p>Digital accessibility means opening doors to digital content to people with disabilities. To us it is a subset of accessibility in the general sense: supporting everyone’s right to access and understand information.</p>
<p>Digital accessibility requirements are developed by the World Wide Web Consortium and inform laws and regulations like <a href="https://www.section508.gov/" target="_blank" rel="noopener">Section 508</a> of the Rehabilitation Act and the guidelines of the <a href="https://www.access-board.gov/" target="_blank" rel="noopener">U.S. Access Board</a>.</p>
<p>At Plainlii, as part of our plain language practice—including consulting and training—we are well versed in accessibility requirements, needs, and compliance. We work with trained professionals who are not only expert practitioners but who also contribute to guidelines that ensure accessible information. Plainlii’s President, Romina Marazzato Sparano, is a contributor to guidelines from the U.S. Access Board (the federal agency that advances accessibility), such as the <a href="https://www.access-board.gov/tad/radx/" target="_blank" rel="noopener">Best Practices for the Design of Accessible COVID-19 Home Tests</a>.</p>
<p>We also implement data-driven information visualization practices to ensure perceptual quality for all readers, and we use a variety of tools and strategies for accessibility verification in each of the four categories of the WCAG guidelines.</p>
<h2>Let&#8217;s take a closer look at the four WCAG principles and what organizations need to consider.</h2>
<h3><strong>Perceivable</strong></h3>
<p>The perceivable principle ensures that users can detect and interact with content through different senses. This includes making information presentable in ways that can be perceived by users with visual, auditory, or cognitive disabilities. Proper document formatting, sensible color choices, and clear alternatives for non-text content are essential for perceivability. A few key items include:</p>
<ul>
<li>Screen Reader Testing with NVDA</li>
<li>Color and Contrast Checkers from the aXe DevTools kit (see the contrast-ratio sketch after this list)</li>
<li>Alt Text review for completeness</li>
<li>Multimedia checks like Adobe Acrobat Pro DC&#8217;s Accessibility Checker</li>
</ul>
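<p>As promised above, here is a minimal Python sketch of the WCAG 2.x contrast-ratio math that checkers like aXe implement; the formulas follow the WCAG definition of relative luminance:</p>
<pre><code>def channel(c8):
    c = c8 / 255
    return c / 12.92 if c &lt;= 0.03928 else ((c + 0.055) / 1.055) ** 2.4

def luminance(rgb):
    r, g, b = (channel(v) for v in rgb)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def contrast_ratio(fg, bg):
    l1, l2 = sorted((luminance(fg), luminance(bg)), reverse=True)
    return (l1 + 0.05) / (l2 + 0.05)

ratio = contrast_ratio((0, 0, 0), (255, 255, 255))  # black text on white
print(f"{ratio:.1f}:1  (WCAG AA requires 4.5:1 for normal text)")
</code></pre>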
<h3><strong>Operable</strong></h3>
<p>Operability focuses on ensuring users can navigate and interact with all elements of digital content. This includes providing multiple ways to locate information, making functionality available via keyboard, and ensuring content doesn&#8217;t trigger seizures or physical discomfort. Thoughtful structure and logical reading flow are critical components of operability. Important items in this category include:</p>
<ul>
<li>Keyboard Navigation Testing</li>
<li>Motion &amp; Timing Checks</li>
</ul>
<h3><strong>Understandable</strong></h3>
<p>The understandable principle addresses how clearly content can be comprehended by all users. This involves using plain language, consistent navigation, and clear instructions. Readable text, predictable functionality, and error prevention mechanisms help users avoid and correct mistakes while interacting with digital content. Some essential aspects to consider here are:</p>
<ul>
<li>Alt text and content review for readability with our proprietary PLAIn Index and readability formulas, including the Flesch and Flesch-Kincaid formulas (see the sketch after this list)</li>
<li>Language validation with Adobe InDesign&#8217;s Language Tools</li>
</ul>
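<p>As noted in the list above, here is a rough sketch of the Flesch Reading Ease formula. Real readability tooling uses proper sentence and syllable detection; the naive counters below are simplifying assumptions for illustration:</p>
<pre><code>import re

def count_syllables(word):
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))  # crude vowel-group heuristic

def flesch_reading_ease(text):
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return 206.835 - 1.015 * (len(words) / sentences) - 84.6 * (syllables / len(words))

print(round(flesch_reading_ease("We will help you. Call us today."), 1))  # higher = easier
</code></pre>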
<h3><strong>Robust</strong></h3>
<p>Robustness ensures content remains accessible as technologies evolve. This includes creating content that&#8217;s compatible with current and future user tools, following web standards, and providing sufficient information for assistive technologies to accurately present the content. Thorough testing across platforms and formats helps maintain accessibility over time. A few key items include:</p>
<ul>
<li>Code validation (by URI) with the W3C CSS Validator</li>
<li>Cross-Platform Testing with aXe Core</li>
<li>PDFs specific verification with Adobe Full Check</li>
</ul>
<h2>A Case Study: Alt Text</h2>
<p>When creating alt text for images, content creators must navigate the spectrum between interpretive and descriptive approaches. Descriptive alt text focuses on objectively capturing what appears in an image—colors, shapes, people, objects, and their spatial relationships—without subjective analysis. For example, &#8220;A red apple sitting on a wooden table&#8221; provides clear visual information. In contrast, interpretive alt text conveys the image&#8217;s meaning, purpose, or emotional impact, such as &#8220;Fresh produce symbolizing healthy eating choices.&#8221;</p>
<p>The best approach often depends on context: educational or technical materials typically benefit from precise descriptions, while marketing content might require interpretation of mood or brand messaging. Effective alt text strikes a balance—providing enough concrete details for users with visual impairments to understand what they&#8217;re missing, while also conveying the image&#8217;s significance to the surrounding content.</p>
<p><img loading="lazy" decoding="async" class="alignnone size-medium wp-image-1876" src="https://plainlii.com/wp-content/uploads/2025/03/Apple-300x225.png" alt="Descriptive: A red apple sitting on a wooden table. Interpretive: Fresh produce symbolizing healthy eating choices." width="300" height="225" srcset="https://plainlii.com/wp-content/uploads/2025/03/Apple-300x225.png 300w, https://plainlii.com/wp-content/uploads/2025/03/Apple-1024x768.png 1024w, https://plainlii.com/wp-content/uploads/2025/03/Apple-768x576.png 768w, https://plainlii.com/wp-content/uploads/2025/03/Apple-1536x1152.png 1536w, https://plainlii.com/wp-content/uploads/2025/03/Apple-2048x1536.png 2048w, https://plainlii.com/wp-content/uploads/2025/03/Apple-16x12.png 16w" sizes="(max-width: 300px) 100vw, 300px" /></p>
<p>Deciding between these approaches is key when addressing safety information, as shown in the following example about an airplane oxygen mask.</p>
<figure id="attachment_1877" aria-describedby="caption-attachment-1877" style="width: 300px" class="wp-caption alignnone"><img loading="lazy" decoding="async" class="size-medium wp-image-1877" src="https://plainlii.com/wp-content/uploads/2025/03/Mask-1-300x225.png" alt="Descriptive alt text: &quot;Yellow airplane oxygen mask hanging from overhead panel with elastic strap and clear tube connection.&quot;Interpretive alt text: &quot; Airplane oxygen mask deployed and ready for use during an emergency involving cabin depressurization.&quot; " width="300" height="225" srcset="https://plainlii.com/wp-content/uploads/2025/03/Mask-1-300x225.png 300w, https://plainlii.com/wp-content/uploads/2025/03/Mask-1-1024x768.png 1024w, https://plainlii.com/wp-content/uploads/2025/03/Mask-1-768x576.png 768w, https://plainlii.com/wp-content/uploads/2025/03/Mask-1-1536x1152.png 1536w, https://plainlii.com/wp-content/uploads/2025/03/Mask-1-2048x1536.png 2048w, https://plainlii.com/wp-content/uploads/2025/03/Mask-1-16x12.png 16w" sizes="(max-width: 300px) 100vw, 300px" /><figcaption id="caption-attachment-1877" class="wp-caption-text">Airplane Oxygen Mask</figcaption></figure>
<p>Organizations should develop consistent guidelines that help content creators decide when to be primarily descriptive versus when interpretation adds necessary value for full comprehension.</p>
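<p>Completeness, at least, can be automated. Here is a small sketch using only Python&#8217;s standard library to flag images whose alt text is missing or empty; judging whether the text should be descriptive or interpretive still needs a human reviewer:</p>
<pre><code>from html.parser import HTMLParser

class AltChecker(HTMLParser):
    def __init__(self):
        super().__init__()
        self.flagged = []

    def handle_starttag(self, tag, attrs):
        if tag == "img":
            attr_map = dict(attrs)
            alt = attr_map.get("alt")
            if alt is None or not alt.strip():
                self.flagged.append(attr_map.get("src", "(no src)"))

checker = AltChecker()
checker.feed('&lt;img src="apple.png"&gt;&lt;img src="mask.png" alt="Airplane oxygen mask"&gt;')
print("Images missing alt text:", checker.flagged)  # -> ['apple.png']
</code></pre>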
]]></content:encoded>
					
					<wfw:commentrss>https://plainlii.com/es/2025/03/01/digital-accessibility-wcag-and-section-508/feed/</wfw:commentrss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>The Future of Being Human: 5 New Trends Shaping Our Journey from Enhanced Cognition to Augmented Reality</title>
		<link>https://plainlii.com/es/2024/10/02/future-of-being-human-enahnced-cognition-augmented-reality/</link>
					<comments>https://plainlii.com/es/2024/10/02/future-of-being-human-enahnced-cognition-augmented-reality/#respond</comments>
		
		<dc:creator><![CDATA[newemage]]></dc:creator>
		<pubdate>Wed, 02 Oct 2024 22:25:41 +0000</pubdate>
				<category><![CDATA[Uncategorized]]></category>
		<guid ispermalink="false">https://plainlii.com/?p=1853</guid>

					<description><![CDATA[The Future of Being Human: 5 Trends Shaping Our Journey In an era of rapid technological advancement, it&#8217;s crucial to stay informed about innovations that will shape our future. Recently, I’ve been thinking a lot about areas where technology is poised to revolutionize our lives, work, and society as a whole. Here are five key [&#8230;]]]></description>
										<content:encoded><![CDATA[<h1>The Future of Being Human: 5 Trends Shaping Our Journey</h1>
<p>In an era of rapid technological advancement, it&#8217;s crucial to stay informed about innovations that will shape our future. Recently, I’ve been thinking a lot about areas where technology is poised to revolutionize our lives, work, and society as a whole. Here are five key points:</p>
<h2>1. Enhanced Cognition: The Merger of Mind and Machine</h2>
<p>The integration of digital and biological neural networks represents a frontier in cognitive enhancement. This fusion can dramatically amplify human cognitive capabilities, potentially revolutionizing how we process information, learn, and solve problems.</p>
<p>While the prospects are exciting, we must carefully consider that:</p>
<ul>
<li>The integration of digital and biological neural networks remains a challenge [1]</li>
<li>The potential for enhanced information processing needs to be accompanied by enhanced emotional processing (what’s the saying… Anyone can be a boss, but not everyone is a <a href="https://plainlii.com/es/2024/04/10/leadership-practices/">leader</a>)</li>
<li>The pillars of learning as set out by Stanislas Dehaene [2] would still apply (we can’t really become helicopter pilots like Trinity in The Matrix [3], at least not yet)</li>
</ul>
<p>Understanding this trend could open new avenues for exploring themes of human potential and the ethics of cognitive enhancement in our work.</p>
<h2>2. Real-Time Language Translation: Building Bridges Across Global Barriers</h2>
<p>Hey, I’m a translator, and I do believe AI can lower practical barriers to communication.<br />
The caveat here is that different grammars literally see the world differently [4], and while translation can always occur, it is incredible how much more you can understand by parsing the world through different grammars.</p>
<h2>3. Augmented Reality and the Metaverse: Redefining Human Experience?</h2>
<p>The rise of augmented reality (AR) and metaverse technologies promises to revolutionize entertainment, education, and social interaction. These immersive technologies offer new platforms for storytelling, content delivery, collaboration, and learning.<br />
We need to balance virtual experiences with real-world engagement (I’m thinking here of the profoundly sad story of the Korean baby who died of neglect [5] while her parents lost track of time in a virtual world).<br />
I’m not a radical embodiment [6] person, but embodiment [7] is a huge factor in how we do, perceive, and value life.</p>
<h2>4. Advanced Health Monitoring and Medical Interventions</h2>
<p>The intersection of AI and healthcare is ushering in an era of personalized, proactive health management. From early disease detection to tailored treatment plans, technology is reshaping the healthcare landscape. I’ve been impressed with Caristo’s diagnosis of heart disease [8] and the Chilean project for over-the-phone detection of COVID-19 [9].<br />
Of course, broad access to care through an open-source diagnosis approach might require redefining ideas about our economy, or at least the current conceptualization of growth.</p>
<h2>5. Human-AI Collaboration and Collective Intelligence</h2>
<p>The synergy between human creativity and AI capabilities is giving rise to new forms of problem-solving and decision-making. This collaboration is already helping address complex global challenges and drive innovation across industries [10].<br />
I just finished Yuval Noah Harari&#8217;s Nexus [11], with some interesting references to human networks and intersubjective reality—despite being a somewhat repetitive historical read for me. (By the way, two amazingly told stories covering much of the same events Harari covers: Irene Vallejo’s El Infinito en un Junco [12] and Imperiofobia y leyenda negra: Roma, Rusia, Estados Unidos y el Imperio español [13], both available in several languages.)<br />
This collaboration needs to be guided by human-driven principles like human rights, children’s rights, and democracy, to name just a few of the biggies…</p>
<h2>Conclusion</h2>
<p>Lots to think about. As responsible global citizens, we must actively participate in discussions about the development and deployment of these technologies, advocating for policies and practices that prioritize human well-being, privacy, and social justice in our increasingly digital world.</p>
<h2>References:</h2>
<p>1. Nature: &#8220;The challenge of linking the human brain to computers&#8221;<br />
<a href="https://www.nature.com/articles/d41586-019-02214-2" target="_blank" rel="noopener">https://www.nature.com/articles/d41586-019-02214-2</a><br />
2. Stanislas Dehaene: &#8220;How We Learn: Why Brains Learn Better Than Any Machine&#8230;for Now&#8221;<br />
<a href="https://www.penguinrandomhouse.com/books/566214/how-we-learn-by-stanislas-dehaene/" target="_blank" rel="noopener">https://www.penguinrandomhouse.com/books/566214/how-we-learn-by-stanislas-dehaene/</a><br />
3. IMDb: &#8220;The Matrix&#8221;<br />
<a href="https://www.imdb.com/title/tt0133093/" target="_blank" rel="noopener">https://www.imdb.com/title/tt0133093/</a><br />
4. Guy Deutscher: &#8220;Through the Language Glass: Why the World Looks Different in Other Languages&#8221;<br />
<a href="https://www.amazon.com/Through-Language-Glass-Different-Languages/dp/0312610491" target="_blank" rel="noopener">https://www.amazon.com/Through-Language-Glass-Different-Languages/dp/0312610491</a><br />
5. BBC News: &#8220;South Korea: Parents of &#8216;neglected&#8217; baby girl arrested&#8221;<br />
<a href="https://www.bbc.com/news/world-asia-39371113" target="_blank" rel="noopener">https://www.bbc.com/news/world-asia-39371113</a><br />
6. Stanford Encyclopedia of Philosophy: &#8220;Embodied Cognition&#8221;<br />
<a href="https://plato.stanford.edu/entries/embodied-cognition/" target="_blank" rel="noopener">https://plato.stanford.edu/entries/embodied-cognition/</a><br />
7. Maurice Merleau-Ponty: &#8220;Phenomenology of Perception&#8221;<br />
<a href="https://www.routledge.com/Phenomenology-of-Perception/Merleau-Ponty/p/book/9780415834339" target="_blank" rel="noopener">https://www.routledge.com/Phenomenology-of-Perception/Merleau-Ponty/p/book/9780415834339</a><br />
8. Caristo Diagnostics: &#8220;AI-Powered Cardiovascular Disease Risk Prediction&#8221;<br />
<a href="https://www.caristodiagnostics.com/" target="_blank" rel="noopener">https://www.caristodiagnostics.com/</a><br />
9. MIT Technology Review: &#8220;AI could help diagnose COVID-19 by listening to your cough&#8221;<br />
<a href="https://www.technologyreview.com/2020/10/08/1009650/ai-could-help-diagnose-covid-19-by-listening-to-your-cough/" target="_blank" rel="noopener">https://www.technologyreview.com/2020/10/08/1009650/ai-could-help-diagnose-covid-19-by-listening-to-your-cough/</a><br />
10. McKinsey &amp; Company: &#8220;The state of AI in 2021&#8221;<br />
<a href="https://www.mckinsey.com/capabilities/quantumblack/our-insights/global-survey-the-state-of-ai-in-2021" target="_blank" rel="noopener">https://www.mckinsey.com/capabilities/quantumblack/our-insights/global-survey-the-state-of-ai-in-2021</a><br />
11. Yuval Noah Harari: &#8220;Homo Deus: A Brief History of Tomorrow&#8221;<br />
<a href="https://www.ynharari.com/book/homo-deus/" target="_blank" rel="noopener">https://www.ynharari.com/book/homo-deus/</a><br />
12. Irene Vallejo: &#8220;El Infinito en un Junco&#8221;<br />
<a href="https://www.penguinrandomhouse.com/books/673282/infinity-in-a-reed-by-irene-vallejo/" target="_blank" rel="noopener">https://www.penguinrandomhouse.com/books/673282/infinity-in-a-reed-by-irene-vallejo/</a><br />
13. María Elvira Roca Barea: &#8220;Imperiofobia y leyenda negra: Roma, Rusia, Estados Unidos y el Imperio español&#8221;<br />
<a href="https://www.amazon.com/Imperiofobia-leyenda-negra-Estados-espa%C3%B1ol/dp/8416854238" target="_blank" rel="noopener">https://www.amazon.com/Imperiofobia-leyenda-negra-Estados-espa%C3%B1ol/dp/8416854238</a></p>]]></content:encoded>
					
					<wfw:commentrss>https://plainlii.com/es/2024/10/02/future-of-being-human-enahnced-cognition-augmented-reality/feed/</wfw:commentrss>
			<slash:comments>0</slash:comments>
		
		
			</item>
	</channel>
</rss>