<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	>

<channel>
	<title>Gadget World | Site-Wide Activity</title>
	<link>https://gadgets.indirootsandroutes.com/activity-2/</link>
	<atom:link href="https://gadgets.indirootsandroutes.com/activity-2/feed" rel="self" type="application/rss+xml" />
	<description>Activity feed for the entire site.</description>
	<lastBuildDate>Sun, 12 Apr 2026 12:07:25 +0000</lastBuildDate>
	<generator>https://buddypress.org/?v=</generator>
	<language>en-US</language>
	<ttl>30</ttl>
	<sy:updatePeriod>hourly</sy:updatePeriod>
	<sy:updateFrequency>2</sy:updateFrequency>
	
						<item>
				<guid isPermaLink="false">ddd951bca3155a6ac2cf846f064f2cf7</guid>
				<title>admin wrote a new post</title>
				<link>https://gadgets.indirootsandroutes.com/?p=9752</link>
				<pubDate>Sun, 12 Apr 2026 12:07:25 +0000</pubDate>

									<content:encoded><![CDATA[<p><strong><a href="https://gadgets.indirootsandroutes.com/glm-5-1-architecture-benchmarks-capabilities-how-to-use-it/" rel="nofollow ugc">GLM-5.1: Architecture, Benchmarks, Capabilities &amp; How to Use It</a></strong> Z.ai has released its next-generation flagship AI model, GLM-5.1. With <a href="https://gadgets.indirootsandroutes.com/glm-5-1-architecture-benchmarks-capabilities-how-to-use-it/" rel="nofollow ugc"><span>[&hellip;]</span></a></p>
]]></content:encoded>
				
				
							</item>
					<item>
				<guid isPermaLink="false">d703c635b87357688ae1b60a41a3dfa6</guid>
				<title>admin wrote a new post</title>
				<link>https://gadgets.indirootsandroutes.com/?p=9751</link>
				<pubDate>Sun, 12 Apr 2026 12:07:20 +0000</pubDate>

									<content:encoded><![CDATA[<p><strong><a href="https://gadgets.indirootsandroutes.com/minimax-just-open-sourced-minimax-m2-7-a-self-evolving-agent-model-that-scores-56-22-on-swe-pro-and-57-0-on-terminal-bench-2/" rel="nofollow ugc">MiniMax Just Open Sourced MiniMax M2.7: A Self-Evolving Agent Model that Scores 56.22% on SWE-Pro and 57.0% on Terminal Bench 2</a></strong>MiniMax has <a href="https://gadgets.indirootsandroutes.com/minimax-just-open-sourced-minimax-m2-7-a-self-evolving-agent-model-that-scores-56-22-on-swe-pro-and-57-0-on-terminal-bench-2/" rel="nofollow ugc"><span>[&hellip;]</span></a></p>
]]></content:encoded>
				
				
							</item>
					<item>
				<guid isPermaLink="false">72e0405aaad162e71c85bb29e64f7d60</guid>
				<title>admin wrote a new post</title>
				<link>https://gadgets.indirootsandroutes.com/?p=9750</link>
				<pubDate>Sun, 12 Apr 2026 12:01:07 +0000</pubDate>

									<content:encoded><![CDATA[<p><strong><a href="https://gadgets.indirootsandroutes.com/liquid-ai-releases-lfm2-5-vl-450m-a-450m-parameter-vision-language-model-with-bounding-box-prediction-multilingual-support-and-sub-250ms-edge-inference/" rel="nofollow ugc">Liquid AI Releases LFM2.5-VL-450M: a 450M-Parameter Vision-Language Model with Bounding Box Prediction, Multilingual Support, and Sub-250ms Edge Inference</a></strong><a href="https://gadgets.indirootsandroutes.com/liquid-ai-releases-lfm2-5-vl-450m-a-450m-parameter-vision-language-model-with-bounding-box-prediction-multilingual-support-and-sub-250ms-edge-inference/" rel="nofollow ugc"><img loading="lazy" src="https://gadgets.indirootsandroutes.com/wp-content/uploads/2026/04/HFZP3rGWoAAGZAm-1-1024x768-1.jpeg" /></a> Liquid AI just released LFM2.5-VL-450M, an updated version of its earlier LFM2-VL-450M vision-language model. The new release introduces bounding box prediction, improved instruction following, expanded multilingual understanding, and function calling support — all within a 450M-parameter footprint designed to run directly on edge hardware ranging from embedded AI modules like NVIDIA Jetson Orin, to mini-PC APUs like AMD Ryzen AI Max+ 395, to flagship phone SoCs like the Snapdragon 8 Elite inside the Samsung S25 Ultra.</p>
<h3>What is a Vision-Language Model and Why Model Size Matters</h3>
<p>Before going deeper, it helps to understand what a vision-language model (VLM) is. A VLM is a model that can process both images and text together — you can send it a photo and ask questions about it in natural language, and it will respond. Most large VLMs require substantial GPU memory and cloud infrastructure to run. That’s a problem for real-world deployment scenarios like warehouse robots, smart glasses, or retail shelf cameras, where compute is limited and latency must be low.
LFM2.5-VL-450M is Liquid AI’s answer to this constraint: a model small enough to fit on edge hardware while still supporting a meaningful set of vision and language capabilities.</p>
<h3>Architecture and Training</h3>
<p>LFM2.5-VL-450M uses LFM2.5-350M as its language model backbone and SigLIP2 NaFlex shape-optimized 86M as its vision encoder. The context window is 32,768 tokens with a vocabulary size of 65,536.</p>
<p>For image handling, the model supports native resolution processing up to 512×512 pixels without upscaling, preserves non-standard aspect ratios without distortion, and uses a tiling strategy that splits large images into non-overlapping 512×512 patches while including thumbnail encoding for global context. The thumbnail encoding is important: without it, tiling would give the model only local patches with no sense of the overall scene. At inference time, users can tune the maximum image tokens and tile count for a speed/quality tradeoff without retraining, which is useful when deploying across hardware with different compute budgets.</p>
<p>The recommended generation parameters from Liquid AI are temperature=0.1, min_p=0.15, and repetition_penalty=1.05 for text, and min_image_tokens=32, max_image_tokens=256, and do_image_splitting=True for vision inputs.</p>
<p>On the training side, Liquid AI scaled pre-training from 10T to 28T tokens compared to LFM2-VL-450M, followed by post-training using preference optimization and reinforcement learning to improve grounding, instruction following, and overall reliability across vision-language tasks.</p>
<h3>New Capabilities Over LFM2-VL-450M</h3>
<p>The most significant addition is bounding box prediction. LFM2.5-VL-450M scored 81.28 on RefCOCO-M, up from zero on the previous model. RefCOCO-M is a visual grounding benchmark that measures how accurately a model can locate an object in an image given a natural language description.
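The tiling strategy described above — non-overlapping 512×512 patches plus an aspect-preserving thumbnail — can be sketched in a few lines. This is a minimal illustration of the geometry only, assuming a simple row-major grid; plan_tiles is a hypothetical helper, not Liquid AI's actual preprocessing code.

```python
import math

def plan_tiles(width, height, tile=512):
    """Plan non-overlapping tile x tile patches covering an image, plus a
    global thumbnail that preserves scene-level context (hypothetical helper)."""
    cols, rows = math.ceil(width / tile), math.ceil(height / tile)
    patches = [(c * tile, r * tile,
                min((c + 1) * tile, width), min((r + 1) * tile, height))
               for r in range(rows) for c in range(cols)]
    # Thumbnail: the whole image scaled down so its longest side fits one tile,
    # keeping the aspect ratio (no distortion).
    scale = tile / max(width, height)
    thumbnail = (round(width * scale), round(height * scale))
    return patches, thumbnail

patches, thumb = plan_tiles(1024, 768)
print(len(patches), thumb)  # 4 patches plus a 512x384 thumbnail
```

Fewer, larger tiles trade quality for speed, which is exactly the knob the max-image-tokens and tile-count settings expose at inference time.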
In practice, the model outputs structured JSON with normalized coordinates identifying where objects are in a scene — not just describing what is there, but also locating it. This is meaningfully different from pure image captioning and makes the model directly usable in pipelines that need spatial outputs.</p>
<p>Multilingual support also improved substantially. MMMB scores improved from 54.29 to 68.09, covering Arabic, Chinese, French, German, Japanese, Korean, Portuguese, and Spanish. This is relevant for global deployments where local-language prompts must be understood alongside visual inputs, without needing separate localization pipelines.</p>
<p>Instruction following improved as well. MM-IFEval scores went from 32.93 to 45.00, meaning the model more reliably adheres to explicit constraints given in a prompt — for example, responding in a particular format or restricting output to specific fields.</p>
<p>Function calling support for text-only input was also added, measured by BFCLv4 at 21.08, a capability the previous model did not include. Function calling allows the model to be used in agentic pipelines where it needs to invoke external tools — for instance, calling a weather API or triggering an action in a downstream system.</p>
<h3>Benchmark Performance</h3>
<p>Across vision benchmarks evaluated using VLMEvalKit, LFM2.5-VL-450M outperforms both LFM2-VL-450M and SmolVLM2-500M on most tasks. Notable scores include 86.93 on POPE, 684 on OCRBench, 60.91 on MMBench (dev en), and 58.43 on RealWorldQA.</p>
<p>Two benchmark gains stand out beyond the headline numbers. MMVet — which tests more open-ended visual understanding — improved from 33.85 to 41.10, a substantial relative gain. CountBench, which evaluates the model’s ability to count objects in a scene, improved from 47.64 to 73.31, one of the largest relative improvements in the table. InfoVQA held roughly flat at 43.02 versus 44.56 on the prior model.
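The structured grounding output described earlier can be consumed with ordinary JSON tooling. A sketch, assuming a hypothetical output schema (the article does not specify the exact format) with normalized [x1, y1, x2, y2] boxes:

```python
import json

# Hypothetical model output; the exact JSON schema is an assumption here.
raw = '[{"label": "forklift", "box": [0.12, 0.40, 0.55, 0.93]}]'

def to_pixels(detections_json, width, height):
    """Convert normalized [x1, y1, x2, y2] boxes into pixel coordinates."""
    boxes = []
    for det in json.loads(detections_json):
        x1, y1, x2, y2 = det["box"]
        boxes.append({"label": det["label"],
                      "box": [round(x1 * width), round(y1 * height),
                              round(x2 * width), round(y2 * height)]})
    return boxes

print(to_pixels(raw, 640, 480))
# [{'label': 'forklift', 'box': [77, 192, 352, 446]}]
```

Because the coordinates are normalized, the same detection transfers to any display or downstream resolution without re-running the model.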
On language-only benchmarks, IFEval improved from 51.75 to 61.16 and Multi-IF from 26.21 to 34.63. The model does not outperform on all tasks — MMMU (val) dropped slightly from 34.44 to 32.67 — and Liquid AI notes the model is not well-suited for knowledge-intensive tasks or fine-grained OCR.</p>
<h3>Edge Inference Performance</h3>
<p>LFM2.5-VL-450M with Q4_0 quantization runs across the full range of target hardware, from embedded AI modules like Jetson Orin to mini-PC APUs like Ryzen AI Max+ 395 to flagship phone SoCs like Snapdragon 8 Elite.</p>
<p>The latency numbers tell a clear story. On Jetson Orin, the model processes a 256×256 image in 233ms and a 512×512 image in 242ms — staying well under 250ms at both resolutions. This makes it fast enough to process every frame in a 4 FPS video stream with full vision-language understanding, not just detection. On Samsung S25 Ultra, latency is 950ms for 256×256 and 2.4 seconds for 512×512. On AMD Ryzen AI Max+ 395, it is 637ms for 256×256 and 944ms for 512×512 — under one second for the smaller resolution on both consumer devices, which keeps interactive applications responsive.</p>
<h3>Real-World Use Cases</h3>
<p>LFM2.5-VL-450M is especially well suited to real-world deployments where low latency, compact structured outputs, and efficient semantic reasoning matter most, including settings where offline operation or on-device processing is important for privacy.</p>
<p>In industrial automation, compute-constrained environments such as passenger vehicles, agricultural machinery, and warehouses often limit perception models to bounding-box outputs. LFM2.5-VL-450M goes further, providing grounded scene understanding in a single pass — enabling richer outputs for settings like warehouse aisles, including worker actions, forklift movement, and inventory flow — while still fitting existing edge hardware like a Jetson Orin.
</p>
<p>For wearables and always-on monitoring, devices such as smart glasses, body-worn assistants, dashcams, and security or industrial monitors cannot afford large perception stacks or constant cloud streaming. An efficient VLM can produce compact semantic outputs locally, turning raw video into useful structured understanding while keeping compute demands low and preserving privacy.</p>
<p>In retail and e-commerce, tasks like catalog ingestion, visual search, product matching, and shelf compliance require more than object detection, but richer visual understanding is often too expensive to deploy at scale. LFM2.5-VL-450M makes structured visual reasoning practical for these workloads.</p>
<h3>Key Takeaways</h3>
<ul>
<li>LFM2.5-VL-450M adds bounding box prediction for the first time, scoring 81.28 on RefCOCO-M versus zero on the previous model, enabling the model to output structured spatial coordinates for detected objects — not just describe what it sees.</li>
<li>Pre-training was scaled from 10T to 28T tokens, combined with post-training via preference optimization and reinforcement learning, driving consistent benchmark gains across vision and language tasks over LFM2-VL-450M.</li>
<li>The model runs on edge hardware with sub-250ms latency, processing a 512×512 image in 242ms on NVIDIA Jetson Orin with Q4_0 quantization — fast enough for full vision-language understanding on every frame of a 4 FPS video stream without cloud offloading.</li>
<li>Multilingual visual understanding improved significantly, with MMMB scores rising from 54.29 to 68.09 across Arabic, Chinese, French, German, Japanese, Korean, Portuguese, and Spanish, making the model viable for global deployments without separate localization models.</li>
</ul>
<p>
The post Liquid AI Releases LFM2.5-VL-450M: a 450M-Parameter Vision-Language Model with Bounding Box Prediction, Multilingual Support, and Sub-250ms Edge Inference appeared first on M <a href="https://gadgets.indirootsandroutes.com/liquid-ai-releases-lfm2-5-vl-450m-a-450m-parameter-vision-language-model-with-bounding-box-prediction-multilingual-support-and-sub-250ms-edge-inference/" rel="nofollow ugc"><span>[&hellip;]</span></a></p>
]]></content:encoded>
				
				
							</item>
					<item>
				<guid isPermaLink="false">75dc29171a31944ea760df44168c9ac4</guid>
				<title>admin wrote a new post</title>
				<link>https://gadgets.indirootsandroutes.com/?p=9749</link>
				<pubDate>Sun, 12 Apr 2026 00:00:41 +0000</pubDate>

									<content:encoded><![CDATA[<p><strong><a href="https://gadgets.indirootsandroutes.com/researchers-from-mit-nvidia-and-zhejiang-university-propose-triattention-a-kv-cache-compression-method-that-matches-full-attention-at-2-5x-higher-throughput/" rel="nofollow ugc">Researchers from MIT, NVIDIA, and Zhejiang University Propose TriAttention: A KV Cache Compression Method That Matches Full Attention at 2.5× Higher Throughput</a></strong><a href="https://gadgets.indirootsandroutes.com/researchers-from-mit-nvidia-and-zhejiang-university-propose-triattention-a-kv-cache-compression-method-that-matches-full-attention-at-2-5x-higher-throughput/" rel="nofollow ugc"><img loading="lazy" src="https://gadgets.indirootsandroutes.com/wp-content/uploads/2026/04/Screenshot-2026-04-11-at-10115-PM-1.png" /></a> Long-chain reasoning is one of the most compute-intensive tasks in modern large language models. When a model like DeepSeek-R1 or Qwen3 works through a complex math problem, it can generate tens of thousands of tokens before arriving at an answer. Every one of those tokens must be stored in what is called the KV cache — a memory structure that holds the Key and Value vectors the model needs to attend back to during generation. The longer the reasoning chain, the larger the KV cache grows, and for many deployment scenarios, especially on consumer hardware, this growth eventually exhausts GPU memory entirely.    A team of researchers from MIT, NVIDIA, and Zhejiang University proposed a method called TriAttention that directly addresses this problem. On the AIME25 mathematical reasoning benchmark with 32K-token generation, TriAttention matches Full Attention accuracy while achieving 2.5× higher throughput or 10.7× KV memory reduction. Leading baselines achieve only about half the accuracy at the same efficiency level.  
</p>
<h3>The Problem with Existing KV Cache Compression</h3>
<p>To understand why TriAttention is important, it helps to understand the standard approach to KV cache compression. Most existing methods — including SnapKV, H2O, and R-KV — work by estimating which tokens in the KV cache are important and evicting the rest. Importance is typically estimated by looking at attention scores: if a key receives high attention from recent queries, it is considered important and kept.</p>
<p>The catch is that these methods operate in what the research team calls post-RoPE space. RoPE, or Rotary Position Embedding, is the positional encoding scheme used by most modern LLMs including Llama, Qwen, and Mistral. RoPE encodes position by rotating the Query and Key vectors in a frequency-dependent way. As a result, a query vector at position 10,000 looks very different from the same semantic query at position 100, because its direction has been rotated by the position encoding.</p>
<p>This rotation means that only the most recently generated queries have orientations that are ‘up to date’ for estimating which keys are important right now. Prior work has confirmed this empirically: increasing the observation window for importance estimation does not help — performance peaks at around 25 queries and declines after that. With such a tiny window, some keys that will become important later get permanently evicted.</p>
<p>This problem is especially acute for what the research team calls retrieval heads — attention heads whose function is to retrieve specific factual tokens from long contexts. The relevant tokens for a retrieval head can remain dormant for thousands of tokens before suddenly becoming essential to the reasoning chain. Post-RoPE methods, operating over a narrow observation window, see low attention on those tokens during the dormant period and permanently evict them. When the model later needs to recall that information, it is already gone, and the chain of thought breaks.
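The position-dependent rotation at the heart of this problem is easy to see directly. A toy sketch of the standard RoPE recipe applied to a single vector (simplified: real implementations rotate half-dimension pairs inside fused attention kernels):

```python
import math

def rope_rotate(vec, pos, base=10000.0):
    """Rotate each consecutive 2-D pair of the vector by an angle that grows
    with position; lower pair indices spin faster (standard RoPE recipe)."""
    out = []
    d = len(vec)
    for i in range(0, d, 2):
        theta = pos * base ** (-i / d)   # frequency depends on the pair index
        c, s = math.cos(theta), math.sin(theta)
        x, y = vec[i], vec[i + 1]
        out += [x * c - y * s, x * s + y * c]
    return out

q = [1.0, 0.0, 1.0, 0.0]   # the "same semantic query" at two positions
early, late = rope_rotate(q, 100), rope_rotate(q, 10000)
# The two rotated queries point in different directions even though the
# underlying vector is identical, so importance estimates computed from
# stale queries no longer apply.
```

Rotation preserves vector norms, so only the direction changes with position — which is exactly why a narrow window of recent queries is the only reliable signal in post-RoPE space.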
</p>
<h3>The Pre-RoPE Observation: Q/K Concentration</h3>
<p>The key insight in TriAttention comes from looking at Query and Key vectors before RoPE rotation is applied — the pre-RoPE space. When the research team visualized Q and K vectors in this space, they found something consistent and striking: across the vast majority of attention heads and across multiple model architectures, both Q and K vectors cluster tightly around fixed, non-zero center points. The research team terms this property Q/K concentration, and measures it using the Mean Resultant Length R — a standard directional statistics measure where R → 1 means tight clustering and R → 0 means dispersion in all directions.</p>
<p>On Qwen3-8B, approximately 90% of attention heads exhibit R &gt; 0.95, meaning their pre-RoPE Q/K vectors are nearly perfectly concentrated around their respective centers. Critically, these centers are stable across different token positions and across different input sequences — they are an intrinsic property of the model’s learned weights, not a property of any particular input. The research team further confirms that Q/K concentration is domain-agnostic: measuring Mean Resultant Length across Math, Coding, and Chat domains on Qwen3-8B yields nearly identical values of 0.977–0.980.</p>
<p>This stability is what post-RoPE methods cannot exploit. RoPE rotation disperses these concentrated vectors into arc patterns that vary with position. But in pre-RoPE space, the centers remain fixed.</p>
<h3>From Concentration to a Trigonometric Series</h3>
<p>The research team then shows mathematically that when Q and K vectors are concentrated around their centers, the attention logit — the raw score before softmax that determines how much a query attends to a key — simplifies dramatically.
Substituting the Q/K centers into the RoPE attention formula, the logit reduces to a function that depends only on the Q-K distance (the relative positional gap between query and key), expressed as a trigonometric series:</p>
<p>logit(Δ) ≈ Σ_f ‖q̄_f‖ ‖k̄_f‖ cos(ω_f Δ + φ̄_f) = Σ_f [ a_f cos(ω_f Δ) + b_f sin(ω_f Δ) ]</p>
<p>Here, Δ is the positional distance, ω_f are the RoPE rotation frequencies for each frequency band f, the amplitude of each term is ‖q̄_f‖ ‖k̄_f‖ with phase φ̄_f, and the coefficients a_f and b_f are determined by the Q/K centers. This series produces a characteristic attention-vs-distance curve for each head. Some heads prefer nearby keys (local attention), others prefer very distant keys (attention sinks). The centers, computed offline from calibration data, fully determine which distances are preferred.</p>
<p>The research team validated this experimentally across 1,152 attention heads in Qwen3-8B and across Qwen2.5 and Llama3 architectures. The Pearson correlation between the predicted trigonometric curve and the actual attention logits has a mean above 0.5 across all heads, with many heads achieving correlations of 0.6–0.9. The research team further validates this on GLM-4.7-Flash, which uses Multi-head Latent Attention (MLA) rather than standard Grouped-Query Attention — a meaningfully different attention architecture. On MLA, 96.6% of heads exhibit R &gt; 0.95, compared to 84.7% for GQA, confirming that Q/K concentration is not specific to one attention design but is a general property of modern LLMs.</p>
<h3>How TriAttention Uses This</h3>
<p>TriAttention is a KV cache compression method that uses these findings to score keys without needing any live query observations.
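The closed-form logit above can be evaluated for any head once its centers are known. A toy sketch with made-up amplitudes and phases (real values come from the offline calibration the paper describes):

```python
import math

def trig_logit(delta, amps, phases, freqs):
    """Predicted attention logit at positional distance delta:
    sum over frequency bands of amplitude * cos(freq * delta + phase)."""
    return sum(a * math.cos(w * delta + p)
               for a, p, w in zip(amps, phases, freqs))

# Toy head: RoPE-style geometrically spaced frequencies, made-up centers.
freqs = [10000.0 ** (-i / 8) for i in range(4)]
amps = [2.0, 1.5, 1.0, 0.5]
phases = [0.0, 0.0, 0.0, 0.0]

# With zero phases the curve peaks at delta = 0, i.e. a "local attention"
# head; nonzero phases shift the preferred distances instead.
curve = [trig_logit(d, amps, phases, freqs) for d in (0, 4, 32, 256)]
```

Sampling the curve over candidate distances is all that is needed to rank cached keys by predicted attention, with no live queries in the loop.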
The scoring function has two components:</p>
<p>The Trigonometric Series Score (S_trig) uses the Q center computed offline and the actual cached key representation to estimate how much attention the key will receive, based on its positional distance from future queries. Because a key may be attended to by queries at many future positions, TriAttention averages this score over a set of future offsets using geometric spacing.</p>
<p>S_trig(k, Δ) = Σ_f ‖𝔼[q_f]‖ · ‖k_f‖ · cos(ω_f Δ + φ_f)</p>
<p>The Norm-Based Score (S_norm) handles the minority of attention heads where Q/K concentration is lower. It weights each frequency band by the expected query norm contribution, providing complementary information about token salience beyond distance preference alone.</p>
<p>S_norm^(0)(k) = Σ_f 𝔼[‖q_f‖] · ‖k_f‖</p>
<p>The two scores are combined using the Mean Resultant Length R as an adaptive weight: when concentration is high, S_trig dominates; when concentration is lower, S_norm contributes more. Every 128 generated tokens, TriAttention scores all keys in the cache and retains only the top-B, evicting the rest.</p>
<h3>Results on Mathematical Reasoning</h3>
<p>On AIME24 with Qwen3-8B, TriAttention achieves 42.1% accuracy against Full Attention’s 57.1%, while R-KV achieves only 25.4% at the same KV budget of 2,048 tokens. On AIME25, TriAttention achieves 32.9% versus R-KV’s 17.5% — a 15.4 percentage point gap. On MATH 500 with only 1,024 tokens in the KV cache out of a possible 32,768, TriAttention achieves 68.4% accuracy against Full Attention’s 69.6%.</p>
<p>The research team also introduces a Recursive State Query benchmark based on recursive simulation using depth-first search.
Recursive tasks stress memory retention because the model must maintain intermediate states across long chains and backtrack to them later — if any intermediate state is evicted, the error propagates through all subsequent return values, corrupting the final result. Under moderate memory pressure up to depth 16, TriAttention performs comparably to Full Attention, while R-KV shows catastrophic accuracy degradation — dropping from approximately 61% at depth 14 to 31% at depth 16. This indicates R-KV incorrectly evicts critical intermediate reasoning states.</p>
<p>On throughput, TriAttention achieves 1,405 tokens per second on MATH 500 against Full Attention’s 223 tokens per second, a 6.3× speedup. On AIME25, it achieves 563.5 tokens per second against 222.8, a 2.5× speedup at matched accuracy.</p>
<h3>Generalization Beyond Mathematical Reasoning</h3>
<p>The results extend well beyond math benchmarks. On LongBench — a 16-subtask benchmark covering question answering, summarization, few-shot classification, retrieval, counting, and code tasks — TriAttention achieves the highest average score of 48.1 among all compression methods at a 50% KV budget on Qwen3-8B, winning 11 out of 16 subtasks and surpassing the next best baseline, Ada-KV+SnapKV, by 2.5 points. On the RULER retrieval benchmark at a 4K context length, TriAttention achieves 66.1, a 10.5-point gap over SnapKV. These results confirm that the method is not tuned to mathematical reasoning alone — the underlying Q/K concentration phenomenon transfers to general language tasks.</p>
<h3>Key Takeaways</h3>
<p>Existing KV cache compression methods have a fundamental blind spot: Methods like SnapKV and R-KV estimate token importance using recent post-RoPE queries, but because RoPE rotates query vectors with position, only a tiny window of queries is usable. This causes important tokens — especially those needed by retrieval heads — to be permanently evicted before they become critical.
</p>
<p>Pre-RoPE Query and Key vectors cluster around stable, fixed centers across nearly all attention heads: This property, called Q/K concentration, holds regardless of input content, token position, or domain, and is consistent across Qwen3, Qwen2.5, Llama3, and even Multi-head Latent Attention architectures like GLM-4.7-Flash.</p>
<p>These stable centers make attention patterns mathematically predictable without observing any live queries: When Q/K vectors are concentrated, the attention score between any query and key reduces to a function that depends only on their positional distance — encoded as a trigonometric series. TriAttention uses this to score every cached key offline using calibration data alone.</p>
<p>TriAttention matches Full Attention reasoning accuracy at a fraction of the memory and compute cost: On AIME25 with 32K-token generation, it achieves 2.5× higher throughput or 10.7× KV memory reduction while matching Full Attention accuracy — nearly doubling R-KV’s accuracy at the same memory budget across both AIME24 and AIME25.</p>
<p>The method generalizes beyond math and works on consumer hardware: TriAttention outperforms all baselines on LongBench across 16 general NLP subtasks and on the RULER retrieval benchmark, and enables a 32B reasoning model to run on a single 24GB RTX 4090 via OpenClaw — a task that causes out-of-memory errors under Full Attention.</p>
<p>
The post Researchers from MIT, NVIDIA, and Zhejiang University Propose TriAttention: A KV Cache Compression Method That Matches Full Attention at 2.5× Higher Throughput appeared first o <a href="https://gadgets.indirootsandroutes.com/researchers-from-mit-nvidia-and-zhejiang-university-propose-triattention-a-kv-cache-compression-method-that-matches-full-attention-at-2-5x-higher-throughput/" rel="nofollow ugc"><span>[&hellip;]</span></a></p>
]]></content:encoded>
				
				
							</item>
					<item>
				<guid isPermaLink="false">5d92ba07ed43ca4b22e5390ade098523</guid>
				<title>admin wrote a new post</title>
				<link>https://gadgets.indirootsandroutes.com/?p=9748</link>
				<pubDate>Sun, 12 Apr 2026 00:00:40 +0000</pubDate>

									<content:encoded><![CDATA[<p><strong><a href="https://gadgets.indirootsandroutes.com/how-to-build-a-secure-local-first-agent-runtime-with-openclaw-gateway-skills-and-controlled-tool-execution/" rel="nofollow ugc">How to Build a Secure Local-First Agent Runtime with OpenClaw Gateway, Skills, and Controlled Tool Execution</a></strong>In this tutorial, we build and operate <a href="https://gadgets.indirootsandroutes.com/how-to-build-a-secure-local-first-agent-runtime-with-openclaw-gateway-skills-and-controlled-tool-execution/" rel="nofollow ugc"><span>[&hellip;]</span></a></p>
]]></content:encoded>
				
				
							</item>
					<item>
				<guid isPermaLink="false">cfc155e8d42e74e32aeb22577e3bc3fd</guid>
				<title>admin wrote a new post</title>
				<link>https://gadgets.indirootsandroutes.com/?p=9747</link>
				<pubDate>Sat, 11 Apr 2026 21:04:23 +0000</pubDate>

									<content:encoded><![CDATA[<p><strong><a href="https://gadgets.indirootsandroutes.com/healthcare/" rel="nofollow ugc">Healthcare</a></strong>Explore how clinicians use ChatGPT to support diagnosis, documentation, and patient care with secure, HIPAA-compliant AI tools.</p>
]]></content:encoded>
				
				
							</item>
					<item>
				<guid isPermaLink="false">f8f4ef30faed182d1576bb64cc665de3</guid>
				<title>admin wrote a new post</title>
				<link>https://gadgets.indirootsandroutes.com/?p=9746</link>
				<pubDate>Sat, 11 Apr 2026 18:08:24 +0000</pubDate>

									<content:encoded><![CDATA[<p><strong><a href="https://gadgets.indirootsandroutes.com/responsible-and-safe-use-of-ai/" rel="nofollow ugc">Responsible and safe use of AI</a></strong>Learn how to use AI responsibly with best practices for safety, accuracy, and transparency when using tools like ChatGPT.</p>
]]></content:encoded>
				
				
							</item>
					<item>
				<guid isPermaLink="false">d859223d8aa0357d51f3f759ca0b0420</guid>
				<title>admin wrote a new post</title>
				<link>https://gadgets.indirootsandroutes.com/?p=9745</link>
				<pubDate>Sat, 11 Apr 2026 12:06:51 +0000</pubDate>

									<content:encoded><![CDATA[<p><strong><a href="https://gadgets.indirootsandroutes.com/understanding-bertopic-from-raw-text-to-interpretable-topics/" rel="nofollow ugc">Understanding BERTopic: From Raw Text to Interpretable Topics </a></strong>Topic modeling uncovers hidden themes in large document collections. Traditional <a href="https://gadgets.indirootsandroutes.com/understanding-bertopic-from-raw-text-to-interpretable-topics/" rel="nofollow ugc"><span>[&hellip;]</span></a></p>
]]></content:encoded>
				
				
							</item>
					<item>
				<guid isPermaLink="false">9224d23e1d47f50f964379cb3a34a8a8</guid>
				<title>admin wrote a new post</title>
				<link>https://gadgets.indirootsandroutes.com/?p=9744</link>
				<pubDate>Sat, 11 Apr 2026 12:02:04 +0000</pubDate>

									<content:encoded><![CDATA[<p><strong><a href="https://gadgets.indirootsandroutes.com/how-knowledge-distillation-compresses-ensemble-intelligence-into-a-single-deployable-ai-model/" rel="nofollow ugc">How Knowledge Distillation Compresses Ensemble Intelligence into a Single Deployable AI Model</a></strong><a href="https://gadgets.indirootsandroutes.com/how-knowledge-distillation-compresses-ensemble-intelligence-into-a-single-deployable-ai-model/" rel="nofollow ugc"><img loading="lazy" src="https://gadgets.indirootsandroutes.com/wp-content/uploads/2026/04/image-23.png" /></a> Complex prediction problems often lead to ensembles <a href="https://gadgets.indirootsandroutes.com/how-knowledge-distillation-compresses-ensemble-intelligence-into-a-single-deployable-ai-model/" rel="nofollow ugc"><span>[&hellip;]</span></a></p>
]]></content:encoded>
				
				
							</item>
					<item>
				<guid isPermaLink="false">23f36efeb916e9249bd6660bc0d34adf</guid>
				<title>admin wrote a new post</title>
				<link>https://gadgets.indirootsandroutes.com/?p=9743</link>
				<pubDate>Sat, 11 Apr 2026 03:09:22 +0000</pubDate>

									<content:encoded><![CDATA[<p><strong><a href="https://gadgets.indirootsandroutes.com/using-custom-gpts/" rel="nofollow ugc">Using custom GPTs</a></strong>Learn how to build and use custom GPTs to automate workflows, maintain consistent outputs, and create purpose-built AI assistants.</p>
]]></content:encoded>
				
				
							</item>
					<item>
				<guid isPermaLink="false">170c68ca8fd563d2449b8ee63876a285</guid>
				<title>admin wrote a new post</title>
				<link>https://gadgets.indirootsandroutes.com/?p=9742</link>
				<pubDate>Sat, 11 Apr 2026 00:22:41 +0000</pubDate>

									<content:encoded><![CDATA[<p><strong><a href="https://gadgets.indirootsandroutes.com/research-with-chatgpt/" rel="nofollow ugc">Research with ChatGPT</a></strong>Learn how to research with ChatGPT using search and deep research to find up-to-date information, analyze sources, and generate structured insights.</p>
]]></content:encoded>
				
				
							</item>
					<item>
				<guid isPermaLink="false">e3cb9789780ce3036f6a49b094527be0</guid>
				<title>admin wrote a new post</title>
				<link>https://gadgets.indirootsandroutes.com/?p=9741</link>
				<pubDate>Sat, 11 Apr 2026 00:22:40 +0000</pubDate>

									<content:encoded><![CDATA[<p><strong><a href="https://gadgets.indirootsandroutes.com/from-karpathys-llm-wiki-to-graphify-ai-memory-layers-are-here/" rel="nofollow ugc">From Karpathy’s LLM Wiki to Graphify: AI Memory Layers are Here </a></strong>Most AI workflows follow the same loop: you upload files, ask a question, get an <a href="https://gadgets.indirootsandroutes.com/from-karpathys-llm-wiki-to-graphify-ai-memory-layers-are-here/" rel="nofollow ugc"><span>[&hellip;]</span></a></p>
]]></content:encoded>
				
				
							</item>
					<item>
				<guid isPermaLink="false">e8cb36558bfa0b9f609862ec4179106b</guid>
				<title>admin wrote a new post</title>
				<link>https://gadgets.indirootsandroutes.com/?p=9740</link>
				<pubDate>Sat, 11 Apr 2026 00:18:44 +0000</pubDate>

									<content:encoded><![CDATA[<p><strong><a href="https://gadgets.indirootsandroutes.com/alibabas-tongyi-lab-releases-vimrag-a-multimodal-rag-framework-that-uses-a-memory-graph-to-navigate-massive-visual-contexts/" rel="nofollow ugc">Alibaba’s Tongyi Lab Releases VimRAG: a Multimodal RAG Framework that Uses a Memory Graph to Navigate Massive Visual Contexts</a></strong><a href="https://gadgets.indirootsandroutes.com/alibabas-tongyi-lab-releases-vimrag-a-multimodal-rag-framework-that-uses-a-memory-graph-to-navigate-massive-visual-contexts/" rel="nofollow ugc"><img loading="lazy" src="https://gadgets.indirootsandroutes.com/wp-content/uploads/2026/04/Screenshot-2026-04-10-at-40436-PM-1.png" /></a> Retrieval-Augmented <a href="https://gadgets.indirootsandroutes.com/alibabas-tongyi-lab-releases-vimrag-a-multimodal-rag-framework-that-uses-a-memory-graph-to-navigate-massive-visual-contexts/" rel="nofollow ugc"><span>[&hellip;]</span></a></p>
]]></content:encoded>
				
				
							</item>
					<item>
				<guid isPermaLink="false">8b5a26741645256600a289ebbd579d05</guid>
				<title>admin wrote a new post</title>
				<link>https://gadgets.indirootsandroutes.com/?p=9739</link>
				<pubDate>Sat, 11 Apr 2026 00:10:41 +0000</pubDate>

									<content:encoded><![CDATA[<p><strong><a href="https://gadgets.indirootsandroutes.com/a-coding-guide-to-markerless-3d-human-kinematics-with-pose2sim-rtmpose-and-opensim/" rel="nofollow ugc">A Coding Guide to Markerless 3D Human Kinematics with Pose2Sim, RTMPose, and OpenSim</a></strong><a href="https://gadgets.indirootsandroutes.com/a-coding-guide-to-markerless-3d-human-kinematics-with-pose2sim-rtmpose-and-opensim/" rel="nofollow ugc"><img loading="lazy" src="https://gadgets.indirootsandroutes.com/wp-content/uploads/2026/04/26a0-2.png" /></a> In this tutorial, we build and run a complete Pose2Sim pipeline <a href="https://gadgets.indirootsandroutes.com/a-coding-guide-to-markerless-3d-human-kinematics-with-pose2sim-rtmpose-and-opensim/" rel="nofollow ugc"><span>[&hellip;]</span></a></p>
]]></content:encoded>
				
				
							</item>
					<item>
				<guid isPermaLink="false">d09f28ff8a454a698d4e4a414cdbbc95</guid>
				<title>admin wrote a new post</title>
				<link>https://gadgets.indirootsandroutes.com/?p=9738</link>
				<pubDate>Sat, 11 Apr 2026 00:10:35 +0000</pubDate>

									<content:encoded><![CDATA[<p><strong><a href="https://gadgets.indirootsandroutes.com/new-manychat-features-youll-actually-want-to-use-pdfs-in-dms-inbox-updates-and-more/" rel="nofollow ugc">New Manychat Features You’ll Actually Want To Use: PDFs in DMs, Inbox Updates, and More</a></strong>There are product updates, and then there are product updates that make you want to sit up, crack your knuckles, and start doing slightly unwell things…</p>
]]></content:encoded>
				
				
							</item>
					<item>
				<guid isPermaLink="false">dffba2abac7683a18585a570480cdf0e</guid>
				<title>admin wrote a new post</title>
				<link>https://gadgets.indirootsandroutes.com/?p=9737</link>
				<pubDate>Sat, 11 Apr 2026 00:10:35 +0000</pubDate>

									<content:encoded><![CDATA[<p><strong><a href="https://gadgets.indirootsandroutes.com/the-influencer-marketing-trends-changing-how-brands-and-creators-collaborate/" rel="nofollow ugc">The Influencer Marketing Trends Changing How Brands and Creators Collaborate</a></strong>Instagram marketing is moving fast. What are the latest tactics and how can you apply them to your marketing efforts? Learn the top four trends in this guide.</p>
]]></content:encoded>
				
				
							</item>
					<item>
				<guid isPermaLink="false">f3660b652459e41fc9760623d43e2c2a</guid>
				<title>admin wrote a new post</title>
				<link>https://gadgets.indirootsandroutes.com/?p=9736</link>
				<pubDate>Sat, 11 Apr 2026 00:10:33 +0000</pubDate>

									<content:encoded><![CDATA[<p><strong><a href="https://gadgets.indirootsandroutes.com/best-ai-chatbot-platforms-for-e-commerce-in-2026-2/" rel="nofollow ugc">Best AI Chatbot Platforms for E-commerce in 2026</a></strong>In 2026, the competitive landscape of e-commerce demands more than just a compelling product; <a href="https://gadgets.indirootsandroutes.com/best-ai-chatbot-platforms-for-e-commerce-in-2026-2/" rel="nofollow ugc"><span>[&hellip;]</span></a></p>
]]></content:encoded>
				
				
							</item>
					<item>
				<guid isPermaLink="false">1404ab7bb2ff3798a55be7d4a3334ad0</guid>
				<title>admin wrote a new post</title>
				<link>https://gadgets.indirootsandroutes.com/?p=9735</link>
				<pubDate>Sat, 11 Apr 2026 00:10:33 +0000</pubDate>

									<content:encoded><![CDATA[<p><strong><a href="https://gadgets.indirootsandroutes.com/best-ai-chatbot-platforms-for-e-commerce-in-2026/" rel="nofollow ugc">Best AI Chatbot Platforms for E-commerce in 2026</a></strong>In 2026, the competitive landscape of e-commerce demands more than just a compelling product; it <a href="https://gadgets.indirootsandroutes.com/best-ai-chatbot-platforms-for-e-commerce-in-2026/" rel="nofollow ugc"><span>[&hellip;]</span></a></p>
]]></content:encoded>
				
				
							</item>
					<item>
				<guid isPermaLink="false">fb03f35561b725d59b0b600f6a743671</guid>
				<title>admin wrote a new post</title>
				<link>https://gadgets.indirootsandroutes.com/?p=9734</link>
				<pubDate>Sat, 11 Apr 2026 00:10:26 +0000</pubDate>

									<content:encoded><![CDATA[<p><strong><a href="https://gadgets.indirootsandroutes.com/whats-in-a-name-modernas-vaccine-vs-therapy-dilemma/" rel="nofollow ugc">What’s in a name? Moderna’s “vaccine” vs. “therapy” dilemma</a></strong>Is it the Department of Defense or the Department of War? The Gulf of Mexico or the Gulf of <a href="https://gadgets.indirootsandroutes.com/whats-in-a-name-modernas-vaccine-vs-therapy-dilemma/" rel="nofollow ugc"><span>[&hellip;]</span></a></p>
]]></content:encoded>
				
				
							</item>
					<item>
				<guid isPermaLink="false">6a36d2a6ba03527fa71bba1bce163a7e</guid>
				<title>admin wrote a new post</title>
				<link>https://gadgets.indirootsandroutes.com/?p=9733</link>
				<pubDate>Sat, 11 Apr 2026 00:00:38 +0000</pubDate>

									<content:encoded><![CDATA[<p><strong><a href="https://gadgets.indirootsandroutes.com/the-download-an-exclusive-jeff-vandermeer-story-and-ai-models-too-scary-to-release/" rel="nofollow ugc">The Download: an exclusive Jeff VanderMeer story and AI models too scary to release</a></strong><a href="https://gadgets.indirootsandroutes.com/the-download-an-exclusive-jeff-vandermeer-story-and-ai-models-too-scary-to-release/" rel="nofollow ugc"><img loading="lazy" src="https://gadgets.indirootsandroutes.com/wp-content/uploads/2026/04/Users_Glass-thumb.jpg" /></a> This is today’s edition of The Download, our weekday n <a href="https://gadgets.indirootsandroutes.com/the-download-an-exclusive-jeff-vandermeer-story-and-ai-models-too-scary-to-release/" rel="nofollow ugc"><span>[&hellip;]</span></a></p>
]]></content:encoded>
				
				
							</item>
					<item>
				<guid isPermaLink="false">a71c8436b07af8a0754b7eb1109bb0ca</guid>
				<title>admin wrote a new post</title>
				<link>https://gadgets.indirootsandroutes.com/?p=9732</link>
				<pubDate>Fri, 10 Apr 2026 21:03:14 +0000</pubDate>

									<content:encoded><![CDATA[<p><strong><a href="https://gadgets.indirootsandroutes.com/using-projects-in-chatgpt/" rel="nofollow ugc">Using projects in ChatGPT</a></strong>Learn how to use projects in ChatGPT to organize chats, files, and instructions, manage ongoing work, and collaborate more effectively.</p>
]]></content:encoded>
				
				
							</item>
					<item>
				<guid isPermaLink="false">ffd8c793b0273304f1c9fbd8b2bf5a3f</guid>
				<title>admin wrote a new post</title>
				<link>https://gadgets.indirootsandroutes.com/?p=9731</link>
				<pubDate>Fri, 10 Apr 2026 18:05:33 +0000</pubDate>

									<content:encoded><![CDATA[<p><strong><a href="https://gadgets.indirootsandroutes.com/getting-started-with-chatgpt/" rel="nofollow ugc">Getting started with ChatGPT</a></strong>Learn how to use ChatGPT, start your first conversation, and discover simple ways to write, brainstorm, and solve problems with AI.</p>
]]></content:encoded>
				
				
							</item>
					<item>
				<guid isPermaLink="false">eaa20de0e656dcc0a0f493cf6caa9999</guid>
				<title>admin wrote a new post</title>
				<link>https://gadgets.indirootsandroutes.com/?p=9730</link>
				<pubDate>Fri, 10 Apr 2026 12:09:42 +0000</pubDate>

									<content:encoded><![CDATA[<p><strong><a href="https://gadgets.indirootsandroutes.com/10-most-important-ai-concepts-explained-simply/" rel="nofollow ugc">10 Most Important AI Concepts Explained Simply</a></strong>AI can feel like a maze sometimes. Everywhere you look, people on social media and in meetings are <a href="https://gadgets.indirootsandroutes.com/10-most-important-ai-concepts-explained-simply/" rel="nofollow ugc"><span>[&hellip;]</span></a></p>
]]></content:encoded>
				
				
							</item>
					<item>
				<guid isPermaLink="false">b3c5990ebf596c0168bba594878f9da5</guid>
				<title>admin wrote a new post</title>
				<link>https://gadgets.indirootsandroutes.com/?p=9729</link>
				<pubDate>Fri, 10 Apr 2026 12:00:49 +0000</pubDate>

									<content:encoded><![CDATA[<p><strong><a href="https://gadgets.indirootsandroutes.com/five-ai-compute-architectures-every-engineer-should-know-cpus-gpus-tpus-npus-and-lpus-compared/" rel="nofollow ugc">Five AI Compute Architectures Every Engineer Should Know: CPUs, GPUs, TPUs, NPUs, and LPUs Compared</a></strong><a href="https://gadgets.indirootsandroutes.com/five-ai-compute-architectures-every-engineer-should-know-cpus-gpus-tpus-npus-and-lpus-compared/" rel="nofollow ugc"><img loading="lazy" src="https://gadgets.indirootsandroutes.com/wp-content/uploads/2026/04/image-16.png" /></a> Modern AI is no longer powered by a single type of <a href="https://gadgets.indirootsandroutes.com/five-ai-compute-architectures-every-engineer-should-know-cpus-gpus-tpus-npus-and-lpus-compared/" rel="nofollow ugc"><span>[&hellip;]</span></a></p>
]]></content:encoded>
				
				
							</item>
					<item>
				<guid isPermaLink="false">b8acb232cee084c2d56d4c0af37fad52</guid>
				<title>admin wrote a new post</title>
				<link>https://gadgets.indirootsandroutes.com/?p=9728</link>
				<pubDate>Fri, 10 Apr 2026 12:00:48 +0000</pubDate>

									<content:encoded><![CDATA[<p><strong><a href="https://gadgets.indirootsandroutes.com/an-end-to-end-coding-guide-to-nvidia-kvpress-for-long-context-llm-inference-kv-cache-compression-and-memory-efficient-generation/" rel="nofollow ugc">An End-to-End Coding Guide to NVIDIA KVPress for Long-Context LLM Inference, KV Cache Compression, and Memory-Efficient Generation</a></strong>In this tutorial, <a href="https://gadgets.indirootsandroutes.com/an-end-to-end-coding-guide-to-nvidia-kvpress-for-long-context-llm-inference-kv-cache-compression-and-memory-efficient-generation/" rel="nofollow ugc"><span>[&hellip;]</span></a></p>
]]></content:encoded>
				
				
							</item>
					<item>
				<guid isPermaLink="false">cd5e0e1cc111d3b9c192c3c81069351e</guid>
				<title>admin wrote a new post</title>
				<link>https://gadgets.indirootsandroutes.com/?p=9727</link>
				<pubDate>Fri, 10 Apr 2026 12:00:26 +0000</pubDate>

									<content:encoded><![CDATA[<p><strong><a href="https://gadgets.indirootsandroutes.com/constellations/" rel="nofollow ugc">Constellations</a></strong>I.     We had crash-landed on the planet. We were far from home. The spaceship could not be repaired, and the rescue beacon had <a href="https://gadgets.indirootsandroutes.com/constellations/" rel="nofollow ugc"><span>[&hellip;]</span></a></p>
]]></content:encoded>
				
				
							</item>
					<item>
				<guid isPermaLink="false">0677d95374ebc049bf3aea6d561f9f6f</guid>
				<title>admin wrote a new post</title>
				<link>https://gadgets.indirootsandroutes.com/?p=9726</link>
				<pubDate>Fri, 10 Apr 2026 00:21:21 +0000</pubDate>

									<content:encoded><![CDATA[<p><strong><a href="https://gadgets.indirootsandroutes.com/cyberagent-moves-faster-with-chatgpt-enterprise-and-codex/" rel="nofollow ugc">CyberAgent moves faster with ChatGPT Enterprise and Codex</a></strong>CyberAgent uses ChatGPT Enterprise and Codex to securely scale AI adoption, improve quality, and accelerate decisions across advertising, media, and gaming.</p>
]]></content:encoded>
				
				
							</item>
					<item>
				<guid isPermaLink="false">b86ab4703f517ecd4c7a072a5130501a</guid>
				<title>admin wrote a new post</title>
				<link>https://gadgets.indirootsandroutes.com/?p=9725</link>
				<pubDate>Fri, 10 Apr 2026 00:21:21 +0000</pubDate>

									<content:encoded><![CDATA[<p><strong><a href="https://gadgets.indirootsandroutes.com/project-glasswing-is-worlds-most-powerful-ai-in-action/" rel="nofollow ugc">Project Glasswing is World’s Most Powerful AI in Action</a></strong>We already had a hint. AI would surpass most human capabilities someday. In the field of <a href="https://gadgets.indirootsandroutes.com/project-glasswing-is-worlds-most-powerful-ai-in-action/" rel="nofollow ugc"><span>[&hellip;]</span></a></p>
]]></content:encoded>
				
				
							</item>
					<item>
				<guid isPermaLink="false">a6705241db13f745148865547adda5e6</guid>
				<title>admin wrote a new post</title>
				<link>https://gadgets.indirootsandroutes.com/?p=9724</link>
				<pubDate>Fri, 10 Apr 2026 00:21:16 +0000</pubDate>

									<content:encoded><![CDATA[<p><strong><a href="https://gadgets.indirootsandroutes.com/convapparel-measuring-and-bridging-the-realism-gap-in-user-simulators/" rel="nofollow ugc">ConvApparel: Measuring and bridging the realism gap in user simulators</a></strong>Generative AI</p>
]]></content:encoded>
				
				
							</item>
					<item>
				<guid isPermaLink="false">6bcfb3846fc4fa1770bb084896f01574</guid>
				<title>admin wrote a new post</title>
				<link>https://gadgets.indirootsandroutes.com/?p=9723</link>
				<pubDate>Fri, 10 Apr 2026 00:11:55 +0000</pubDate>

									<content:encoded><![CDATA[<p><strong><a href="https://gadgets.indirootsandroutes.com/meta-superintelligence-lab-releases-muse-spark-a-multimodal-reasoning-model-with-thought-compression-and-parallel-agents/" rel="nofollow ugc">Meta Superintelligence Lab Releases Muse Spark: A Multimodal Reasoning Model With Thought Compression and Parallel Agents</a></strong><a href="https://gadgets.indirootsandroutes.com/meta-superintelligence-lab-releases-muse-spark-a-multimodal-reasoning-model-with-thought-compression-and-parallel-agents/" rel="nofollow ugc"><img loading="lazy" src="https://gadgets.indirootsandroutes.com/wp-content/uploads/2026/04/Screenshot-2026-04-09-at-40345-PM-1.png" /></a> Meta Superintelligence <a href="https://gadgets.indirootsandroutes.com/meta-superintelligence-lab-releases-muse-spark-a-multimodal-reasoning-model-with-thought-compression-and-parallel-agents/" rel="nofollow ugc"><span>[&hellip;]</span></a></p>
]]></content:encoded>
				
				
							</item>
					<item>
				<guid isPermaLink="false">47363306c6535321f317458fdd6a96ae</guid>
				<title>admin wrote a new post</title>
				<link>https://gadgets.indirootsandroutes.com/?p=9722</link>
				<pubDate>Fri, 10 Apr 2026 00:11:37 +0000</pubDate>

									<content:encoded><![CDATA[<p><strong><a href="https://gadgets.indirootsandroutes.com/the-download-astroturf-wars-and-exponential-ai-growth/" rel="nofollow ugc">The Download: AstroTurf wars and exponential AI growth</a></strong>This is today’s edition of The Download, our weekday newsletter that provides a daily dose o <a href="https://gadgets.indirootsandroutes.com/the-download-astroturf-wars-and-exponential-ai-growth/" rel="nofollow ugc"><span>[&hellip;]</span></a></p>
]]></content:encoded>
				
				
							</item>
					<item>
				<guid isPermaLink="false">4533d78b32337826bdea5bf16712574c</guid>
				<title>admin wrote a new post</title>
				<link>https://gadgets.indirootsandroutes.com/?p=9721</link>
				<pubDate>Thu, 09 Apr 2026 15:05:22 +0000</pubDate>

									<content:encoded><![CDATA[<p><strong><a href="https://gadgets.indirootsandroutes.com/openai-full-fan-mode-contest-terms-conditions/" rel="nofollow ugc">OpenAI Full Fan Mode Contest: Terms &amp; Conditions</a></strong>Explore the official terms and conditions for the OpenAI Full Fan Mode Contest, including <a href="https://gadgets.indirootsandroutes.com/openai-full-fan-mode-contest-terms-conditions/" rel="nofollow ugc"><span>[&hellip;]</span></a></p>
]]></content:encoded>
				
				
							</item>
					<item>
				<guid isPermaLink="false">7688d9f1f11c00fe46f1e2afb89fe0c4</guid>
				<title>admin wrote a new post</title>
				<link>https://gadgets.indirootsandroutes.com/?p=9720</link>
				<pubDate>Thu, 09 Apr 2026 12:09:00 +0000</pubDate>

									<content:encoded><![CDATA[<p><strong><a href="https://gadgets.indirootsandroutes.com/sigmoid-vs-relu-activation-functions-the-inference-cost-of-losing-geometric-context/" rel="nofollow ugc">Sigmoid vs ReLU Activation Functions: The Inference Cost of Losing Geometric Context</a></strong><a href="https://gadgets.indirootsandroutes.com/sigmoid-vs-relu-activation-functions-the-inference-cost-of-losing-geometric-context/" rel="nofollow ugc"><img loading="lazy" src="https://gadgets.indirootsandroutes.com/wp-content/uploads/2026/04/image-11.png" /></a> A deep neural network can be understood as a geometric system, <a href="https://gadgets.indirootsandroutes.com/sigmoid-vs-relu-activation-functions-the-inference-cost-of-losing-geometric-context/" rel="nofollow ugc"><span>[&hellip;]</span></a></p>
]]></content:encoded>
				
				
							</item>
					<item>
				<guid isPermaLink="false">88533f0a56a057e46f186e723fd617fb</guid>
				<title>admin wrote a new post</title>
				<link>https://gadgets.indirootsandroutes.com/?p=9719</link>
				<pubDate>Thu, 09 Apr 2026 12:08:59 +0000</pubDate>

									<content:encoded><![CDATA[<p><strong><a href="https://gadgets.indirootsandroutes.com/a-coding-guide-to-build-advanced-document-intelligence-pipelines-with-google-langextract-openai-models-structured-extraction-and-interactive-visualization/" rel="nofollow ugc">A Coding Guide to Build Advanced Document Intelligence Pipelines with Google LangExtract, OpenAI Models, Structured Extraction, and Interactive Visualization</a></strong>In this tutorial, we explore how to use Google&#8217;s LangExtract library to transform unstructured text into structured, machine-readable information. We begin by installing the required dependencies and securely configuring our OpenAI API key to leverage powerful language models for extraction tasks. We then build a reusable extraction pipeline that lets us process a range of document types, including contracts, meeting notes, product announcements, and operational logs. Through carefully designed prompts and example annotations, we demonstrate how LangExtract can identify entities, actions, deadlines, risks, and other structured attributes while grounding them to their exact source spans. We also visualize the extracted information and organize it into tabular datasets, enabling downstream analytics, automation workflows, and decision-making systems.</p>
<pre><code># Install dependencies and configure the OpenAI API key
!pip -q install -U "langextract[openai]" pandas IPython

import os
import json
import textwrap
import getpass

import pandas as pd

OPENAI_API_KEY = getpass.getpass("Enter OPENAI_API_KEY: ")
os.environ["OPENAI_API_KEY"] = OPENAI_API_KEY

import langextract as lx
from IPython.display import display, HTML
</code></pre>
<p>We install the required libraries, including LangExtract, Pandas, and IPython, so that our Colab environment is ready for structured extraction tasks. We securely request the OpenAI API key from the user and store it as an environment variable for safe access during runtime. We then import the core libraries needed to run LangExtract, display results, and handle structured outputs.</p>
<pre><code>MODEL_ID = "gpt-4o-mini"


def run_extraction(
    text_or_documents,
    prompt_description,
    examples,
    output_stem,
    model_id=MODEL_ID,
    extraction_passes=1,
    max_workers=4,
    max_char_buffer=1800,
):
    # Run LangExtract and persist both JSONL and HTML visualizations
    result = lx.extract(
        text_or_documents=text_or_documents,
        prompt_description=prompt_description,
        examples=examples,
        model_id=model_id,
        api_key=os.environ["OPENAI_API_KEY"],
        fence_output=True,
        use_schema_constraints=False,
        extraction_passes=extraction_passes,
        max_workers=max_workers,
        max_char_buffer=max_char_buffer,
    )

    jsonl_name = f"{output_stem}.jsonl"
    html_name = f"{output_stem}.html"

    lx.io.save_annotated_documents([result], output_name=jsonl_name, output_dir=".")
    html_content = lx.visualize(jsonl_name)

    with open(html_name, "w", encoding="utf-8") as f:
        if hasattr(html_content, "data"):
            f.write(html_content.data)
        else:
            f.write(html_content)

    return result, jsonl_name, html_name


def extraction_rows(result):
    # Flatten extractions into rows, keeping source-span positions when present
    rows = []
    for ex in result.extractions:
        start_pos = None
        end_pos = None
        if getattr(ex, "char_interval", None):
            start_pos = ex.char_interval.start_pos
            end_pos = ex.char_interval.end_pos

        rows.append({
            "class": ex.extraction_class,
            "text": ex.extraction_text,
            "attributes": json.dumps(ex.attributes or {}, ensure_ascii=False),
            "start": start_pos,
            "end": end_pos,
        })
    return pd.DataFrame(rows)


def preview_result(title, result, html_name, max_rows=50):
    print("=" * 80)
    print(title)
    print("=" * 80)
    print(f"Total extractions: {len(result.extractions)}")
    df = extraction_rows(result)
    display(df.head(max_rows))
    display(HTML(f'Open interactive visualization: {html_name}'))
</code></pre>
<p>We define the core utility functions that power the entire extraction pipeline. We create a reusable <code>run_extraction</code> function that sends text to the LangExtract engine and generates both JSONL and HTML outputs. We also define helper functions to convert the extraction results into tabular rows and preview them interactively in the notebook.</p>
<pre><code>contract_prompt = textwrap.dedent("""
Extract contract-risk information in order of appearance.

Rules:
1. Use exact text spans from the source. Do not paraphrase extraction_text.
2. Extract the following classes when present:
   - party
   - obligation
   - deadline
   - payment_term
   - penalty
   - termination_clause
   - governing_law
3. Add useful attributes:
   - party_name for obligations or payment terms when relevant
   - risk_level as low, medium, or high
   - category for the business meaning
4. Keep output grounded to the exact wording in the source.
5. Do not merge non-contiguous spans into one extraction.
""")

contract_examples = [
    lx.data.ExampleData(
        text=(
            "Acme Corp shall deliver the equipment by March 15, 2026. "
            "The Client must pay within 10 days of invoice receipt. "
            "Late payment incurs a 2% monthly penalty. "
            "This agreement is governed by the laws of Ontario."
        ),
        extractions=[
            lx.data.Extraction(
                extraction_class="party",
                extraction_text="Acme Corp",
                attributes={"category": "supplier", "risk_level": "low"},
            ),
            lx.data.Extraction(
                extraction_class="obligation",
                extraction_text="shall deliver the equipment",
                attributes={"party_name": "Acme Corp", "category": "delivery", "risk_level": "medium"},
            ),
            lx.data.Extraction(
                extraction_class="deadline",
                extraction_text="by March 15, 2026",
                attributes={"category": "delivery_deadline", "risk_level": "medium"},
            ),
            lx.data.Extraction(
                extraction_class="party",
                extraction_text="The Client",
                attributes={"category": "customer", "risk_level": "low"},
            ),
            lx.data.Extraction(
                extraction_class="payment_term",
                extraction_text="must pay within 10 days of invoice receipt",
                attributes={"party_name": "The Client", "category": "payment", "risk_level": "medium"},
            ),
            lx.data.Extraction(
                extraction_class="penalty",
                extraction_text="2% monthly penalty",
                attributes={"category": "late_payment", "risk_level": "high"},
            ),
            lx.data.Extraction(
                extraction_class="governing_law",
                extraction_text="laws of Ontario",
                attributes={"category": "legal_jurisdiction", "risk_level": "low"},
            ),
        ],
    )
]

contract_text = """
BluePeak Analytics shall provide a production-ready dashboard and underlying
ETL pipeline no later than April 30, 2026. North Ridge Manufacturing will
remit payment within 7 calendar days after final acceptance. If payment is
delayed beyond 15 days, BluePeak Analytics may suspend support services and
charge interest at 1.5% per month. This Agreement shall be governed by the
laws of British Columbia.
"""

contract_result, contract_jsonl, contract_html = run_extraction(
    text_or_documents=contract_text,
    prompt_description=contract_prompt,
    examples=contract_examples,
    output_stem="contract_risk_extraction",
    extraction_passes=2,
    max_workers=4,
    max_char_buffer=1400,
)

preview_result("USE CASE 1 — Contract risk extraction", contract_result, contract_html)
</code></pre>
<p>We build a contract intelligence extraction workflow by defining a detailed prompt and structured examples. We provide LangExtract with annotated training-style examples so that it understands how to identify entities such as obligations, deadlines, penalties, and governing laws. We then run the extraction pipeline on a contract text and preview the structured risk-related outputs.</p>
<pre><code>meeting_prompt = textwrap.dedent("""
Extract action items from meeting notes in order of appearance.

Rules:
1. Use exact text spans from the source. No paraphrasing in extraction_text.
2. Extract these classes when present:
   - assignee
   - action_item
   - due_date
   - blocker
   - decision
3. Add attributes:
   - priority as low, medium, or high
   - workstream when inferable from local context
   - owner for action_item when tied to a named assignee
4. Keep all spans grounded to the source text.
5. Preserve order of appearance.
""")

meeting_examples = [
    lx.data.ExampleData(
        text=(
            "Sarah will finalize the launch email by Friday. "
            "The team decided to postpone the webinar. "
            "Blocked by missing legal approval."
        ),
        extractions=[
            lx.data.Extraction(
                extraction_class="assignee",
                extraction_text="Sarah",
                attributes={"priority": "medium", "workstream": "marketing"},
            ),
            lx.data.Extraction(
                extraction_class="action_item",
                extraction_text="will finalize the launch email",
                attributes={"owner": "Sarah", "priority": "high", "workstream": "marketing"},
            ),
            lx.data.Extraction(
                extraction_class="due_date",
                extraction_text="by Friday",
                attributes={"priority": "medium", "workstream": "marketing"},
            ),
            lx.data.Extraction(
                extraction_class="decision",
                extraction_text="decided to postpone the webinar",
                attributes={"priority": "medium", "workstream": "events"},
            ),
            lx.data.Extraction(
                extraction_class="blocker",
                extraction_text="missing legal approval",
                attributes={"priority": "high", "workstream": "compliance"},
            ),
        ],
    )
]

meeting_text = """
Arjun will prepare the revised pricing sheet by Tuesday evening.
Mina to confirm the enterprise customer's data residency requirements this week.
The group agreed to ship the pilot only for the Oman region first.
Blocked by pending security review from the client's IT team.
Ravi will draft the rollback plan before the production cutover.
"""

meeting_result, meeting_jsonl, meeting_html = run_extraction(
    text_or_documents=meeting_text,
    prompt_description=meeting_prompt,
    examples=meeting_examples,
    output_stem="meeting_action_extraction",
    extraction_passes=2,
    max_workers=4,
    max_char_buffer=1400,
)

preview_result("USE CASE 2 — Meeting notes to action tracker", meeting_result, meeting_html)
</code></pre>
<p>We design a meeting intelligence extractor that focuses on action items, decisions, assignees, and blockers. We again provide example annotations to help the model structure meeting information consistently. We execute the extraction on meeting notes and display the resulting structured task tracker.</p>
<pre><code>longdoc_prompt = textwrap.dedent("""
Extract product launch intelligence in order of appearance.

Rules:
1. Use exact text spans from the source.
2. Extract:
   - company
   - product
   - launch_date
   - region
   - metric
   - partnership
3. Add attributes:
   - category
   - significance as low, medium, or high
4. Keep the extraction grounded in the original text.
5. Do not paraphrase the extracted span.
""")

longdoc_examples = [
    lx.data.ExampleData(
        text=(
            "Nova Robotics launched Atlas Mini in Europe on 12 January 2026. "
            "The company reported 18% faster picking speed and partnered with Helix Warehousing."
        ),
        extractions=[
            lx.data.Extraction(
                extraction_class="company",
                extraction_text="Nova Robotics",
                attributes={"category": "vendor", "significance": "medium"},
            ),
            lx.data.Extraction(
                extraction_class="product",
                extraction_text="Atlas Mini",
                attributes={"category": "product_name", "significance": "high"},
            ),
            lx.data.Extraction(
                extraction_class="region",
                extraction_text="Europe",
                attributes={"category": "market", "significance": "medium"},
            ),
            lx.data.Extraction(
                extraction_class="launch_date",
                extraction_text="12 January 2026",
                attributes={"category": "timeline", "significance": "medium"},
            ),
            lx.data.Extraction(
                extraction_class="metric",
                extraction_text="18% faster picking speed",
                attributes={"category": "performance_claim", "significance": "high"},
            ),
            lx.data.Extraction(
                extraction_class="partnership",
                extraction_text="partnered with Helix Warehousing",
                attributes={"category": "go_to_market", "significance": "medium"},
            ),
        ],
    )
]

long_text = """
Vertex Dynamics introduced FleetSense 3.0 for industrial logistics teams
across the GCC on 5 February 2026. The company said the release improves the
accuracy of route deviation detection by 22% and reduces manual review time
by 31%. In the first rollout phase, the platform will support Oman and the
United Arab Emirates. Vertex Dynamics also partnered with Falcon Telematics
to integrate live driver behavior events into the dashboard.

A week later, FleetSense 3.0 added a risk-scoring module for safety managers.
The update gives supervisors a daily ranked list of high-risk trips and
exception events. The company described the module as especially valuable for
oilfield transport operations and contractor fleet audits.

By late February 2026, the team announced a pilot with Desert Haul Services.
The pilot covers 240 heavy vehicles and focuses on speeding up incident
triage, compliance review, and evidence retrieval. Internal testing showed
analysts could assemble review packets in under 8 minutes instead of the
previous 20 minutes.
"""

longdoc_result, longdoc_jsonl, longdoc_html = run_extraction(
    text_or_documents=long_text,
    prompt_description=longdoc_prompt,
    examples=longdoc_examples,
    output_stem="long_document_extraction",
    extraction_passes=3,
    max_workers=8,
    max_char_buffer=1000,
)

preview_result("USE CASE 3 — Long-document extraction", longdoc_result, longdoc_html)


batch_docs = [
    """
    The supplier must replace defective batteries within 14 days of written notice.
    Any unresolved safety issue may trigger immediate suspension of shipments.
    """,
    """
    Priya will circulate the revised onboarding checklist tomorrow morning.
    The team approved the API deprecation plan for the legacy endpoint.
    """,
    """
    Orbit Health launched a remote triage assistant in Singapore on 14 March 2026.
    The company claims the assistant reduces nurse intake time by 17%.
    """,
]

batch_prompt = textwrap.dedent("""
Extract operationally useful spans in order of appearance.

Allowed classes:
- obligation
- deadline
- penalty
- assignee
- action_item
- decision
- company
- product
- launch_date
- metric

Use exact text only and attach a simple attribute:
- source_type
""")

batch_examples = [
    lx.data.ExampleData(
        text="Jordan will submit the report by Monday. Late delivery incurs a service credit.",
        extractions=[
            lx.data.Extraction(
                extraction_class="assignee",
                extraction_text="Jordan",
                attributes={"source_type": "meeting"},
            ),
            lx.data.Extraction(
                extraction_class="action_item",
                extraction_text="will submit the report",
                attributes={"source_type": "meeting"},
            ),
            lx.data.Extraction(
                extraction_class="deadline",
                extraction_text="by Monday",
                attributes={"source_type": "meeting"},
            ),
            lx.data.Extraction(
                extraction_class="penalty",
                extraction_text="service credit",
                attributes={"source_type": "contract"},
            ),
        ],
    )
]

batch_results = []
for idx, doc in enumerate(batch_docs, start=1):
    res, jsonl_name, html_name = run_extraction(
        text_or_documents=doc,
        prompt_description=batch_prompt,
        examples=batch_examples,
        output_stem=f"batch_doc_{idx}",
        extraction_passes=2,
        max_workers=4,
        max_char_buffer=1200,
    )
    df
</code></pre>
= extraction_rows(res)    df.insert(0, &#8220;document_id&#8221;, idx)    batch_results.append(df)    print(f&#8221;Finished document {idx} -&gt; {html_name}&#8221;)   batch_df = pd.concat(batch_results, ignore_index=True) print(&#8220;nCombined batch output&#8221;) display(batch_df)   print(&#8220;nContract extraction counts by class&#8221;) display(    extraction_rows(contract_result)    .groupby(&#8220;class&#8221;, as_index=False)    .size()    .sort_values(&#8220;size&#8221;, ascending=False) )   print(&#8220;nMeeting action items only&#8221;) meeting_df = extraction_rows(meeting_result) display(meeting_df[meeting_df[&#8220;class&#8221;] == &#8220;action_item&#8221;])   print(&#8220;nLong-document metrics only&#8221;) longdoc_df = extraction_rows(longdoc_result) display(longdoc_df[longdoc_df[&#8220;class&#8221;] == &#8220;metric&#8221;])   final_df = pd.concat([    extraction_rows(contract_result).assign(use_case=&#8221;contract_risk&#8221;),    extraction_rows(meeting_result).assign(use_case=&#8221;meeting_actions&#8221;),    extraction_rows(longdoc_result).assign(use_case=&#8221;long_document&#8221;), ], ignore_index=True)   final_df.to_csv(&#8220;langextract_tutorial_outputs.csv&#8221;, index=False) print(&#8220;nSaved CSV: langextract_tutorial_outputs.csv&#8221;)   print(&#8220;nGenerated files:&#8221;) for name in [    contract_jsonl, contract_html,    meeting_jsonl, meeting_html,    longdoc_jsonl, longdoc_html,    &#8220;langextract_tutorial_outputs.csv&#8221; ]:    print(&#8221; -&#8220;, name)    We implement a long-document intelligence pipeline capable of extracting structured insights from large narrative text. We run the extraction across product launch reports and operational documents, and also demonstrate batch processing across multiple documents. We also analyze the extracted results, filter key classes, and export the structured dataset to a CSV file for downstream analysis.    
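As a quick sanity check on the exported dataset, the same roll-up we print above can be reproduced with a plain pandas groupby. This is a minimal sketch: the rows below are made-up examples shaped like the records written to langextract_tutorial_outputs.csv, not actual extraction output.

```python
import pandas as pd

# Illustrative rows shaped like the exported CSV; the real file is produced
# by final_df.to_csv above, and these values are invented for the sketch.
rows = [
    {"use_case": "contract_risk",   "class": "obligation",  "text": "must replace defective batteries"},
    {"use_case": "contract_risk",   "class": "deadline",    "text": "within 14 days"},
    {"use_case": "meeting_actions", "class": "action_item", "text": "prepare the revised pricing sheet"},
    {"use_case": "long_document",   "class": "metric",      "text": "22% accuracy improvement"},
    {"use_case": "long_document",   "class": "metric",      "text": "31% less review time"},
]
df = pd.DataFrame(rows)

# Count extractions per (use_case, class) pair, largest groups first.
summary = (
    df.groupby(["use_case", "class"], as_index=False)
      .size()
      .sort_values("size", ascending=False)
)
print(summary.to_string(index=False))
```

Loading the real CSV with pd.read_csv in place of the literal rows gives the same per-class counts for the full pipeline output.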
In conclusion, we constructed an advanced LangExtract workflow that converts complex text documents into structured datasets with traceable source grounding. We ran multiple extraction scenarios, including contract risk analysis, meeting action tracking, long-document intelligence extraction, and batch processing across multiple documents. We also visualized the extractions and exported the final structured results to a CSV file for further analysis. Through this process, we saw how prompt design, example-based extraction, and scalable processing techniques allow us to build robust information extraction systems with minimal code.    Check out the Full Codes here. The post A Coding Guide to Build Advanced Document Intelligence Pipelines with Google LangExtract, OpenAI Models, Structured Extraction, and Interactive Visualization appeared first on Ma <a href="https://gadgets.indirootsandroutes.com/a-coding-guide-to-build-advanced-document-intelligence-pipelines-with-google-langextract-openai-models-structured-extraction-and-interactive-visualization/" rel="nofollow ugc"><span>[&hellip;]</span></a></p>
]]></content:encoded>
				
				
							</item>
					<item>
				<guid isPermaLink="false">77cde4d59b6bb58854b864e0921376f2</guid>
				<title>admin wrote a new post</title>
				<link>https://gadgets.indirootsandroutes.com/?p=9718</link>
				<pubDate>Thu, 09 Apr 2026 12:01:59 +0000</pubDate>

									<content:encoded><![CDATA[<p><strong><a href="https://gadgets.indirootsandroutes.com/conversational-ai-for-customer-service/" rel="nofollow ugc">Conversational AI for Customer Service</a></strong><a href="https://gadgets.indirootsandroutes.com/conversational-ai-for-customer-service/" rel="nofollow ugc"><img loading="lazy" src="https://gadgets.indirootsandroutes.com/wp-content/uploads/2026/04/Blog-Post-Hero-Image-Template-1.png" /></a> Customer expectations have never been higher than they are today. People want fast answers, <a href="https://gadgets.indirootsandroutes.com/conversational-ai-for-customer-service/" rel="nofollow ugc"><span>[&hellip;]</span></a></p>
]]></content:encoded>
				
				
							</item>
					<item>
				<guid isPermaLink="false">0cc6d2cee924974533183c8e15cfe8cc</guid>
				<title>admin wrote a new post</title>
				<link>https://gadgets.indirootsandroutes.com/?p=9717</link>
				<pubDate>Thu, 09 Apr 2026 12:01:44 +0000</pubDate>

									<content:encoded><![CDATA[<p><strong><a href="https://gadgets.indirootsandroutes.com/is-fake-grass-a-bad-idea-the-astroturf-wars-are-far-from-over/" rel="nofollow ugc">Is fake grass a bad idea? The AstroTurf wars are far from over.</a></strong> A rare warm spell in January melted enough snow to uncover Cornell University’s n <a href="https://gadgets.indirootsandroutes.com/is-fake-grass-a-bad-idea-the-astroturf-wars-are-far-from-over/" rel="nofollow ugc"><span>[&hellip;]</span></a></p>
]]></content:encoded>
				
				
							</item>
					<item>
				<guid isPermaLink="false">3a862d091d66e9bd4b1603dcb377b811</guid>
				<title>admin wrote a new post</title>
				<link>https://gadgets.indirootsandroutes.com/?p=9716</link>
				<pubDate>Thu, 09 Apr 2026 12:01:43 +0000</pubDate>

									<content:encoded><![CDATA[<p><strong><a href="https://gadgets.indirootsandroutes.com/desalination-technology-by-the-numbers/" rel="nofollow ugc">Desalination technology, by the numbers</a></strong> When I started digging into desalination technology for a new story, I couldn’t help but obsess over the n <a href="https://gadgets.indirootsandroutes.com/desalination-technology-by-the-numbers/" rel="nofollow ugc"><span>[&hellip;]</span></a></p>
]]></content:encoded>
				
				
							</item>
					<item>
				<guid isPermaLink="false">f5f5413cdb75c5dc0f7e20ac41d65d4e</guid>
				<title>admin wrote a new post</title>
				<link>https://gadgets.indirootsandroutes.com/?p=9715</link>
				<pubDate>Thu, 09 Apr 2026 00:18:06 +0000</pubDate>

									<content:encoded><![CDATA[<p><strong><a href="https://gadgets.indirootsandroutes.com/the-next-phase-of-enterprise-ai/" rel="nofollow ugc">The next phase of enterprise AI</a></strong>OpenAI outlines the next phase of enterprise AI, as adoption accelerates across industries with Frontier, ChatGPT Enterprise, Codex, and company-wide AI agents.</p>
]]></content:encoded>
				
				
							</item>
					<item>
				<guid isPermaLink="false">33b4460f0768046acadb814a3a87422f</guid>
				<title>admin wrote a new post</title>
				<link>https://gadgets.indirootsandroutes.com/?p=9714</link>
				<pubDate>Thu, 09 Apr 2026 00:18:05 +0000</pubDate>

									<content:encoded><![CDATA[<p><strong><a href="https://gadgets.indirootsandroutes.com/how-to-run-gemma-4-on-your-phone-without-internet-a-hands-on-guide/" rel="nofollow ugc">How to Run Gemma 4 on Your Phone Without Internet: A Hands-On Guide </a></strong>Most AI tools rely on the internet, sending your prompts to remote servers for <a href="https://gadgets.indirootsandroutes.com/how-to-run-gemma-4-on-your-phone-without-internet-a-hands-on-guide/" rel="nofollow ugc"><span>[&hellip;]</span></a></p>
]]></content:encoded>
				
				
							</item>
					<item>
				<guid isPermaLink="false">d631872a8a73a4cda186d084f5764a4a</guid>
				<title>admin wrote a new post</title>
				<link>https://gadgets.indirootsandroutes.com/?p=9713</link>
				<pubDate>Thu, 09 Apr 2026 00:18:04 +0000</pubDate>

									<content:encoded><![CDATA[<p><strong><a href="https://gadgets.indirootsandroutes.com/running-gemma-4-locally-with-ollama-on-your-pc/" rel="nofollow ugc">Running Gemma 4 Locally with Ollama on Your PC</a></strong>Open-weight models are driving the latest excitement in the AI landscape. Running powerful models <a href="https://gadgets.indirootsandroutes.com/running-gemma-4-locally-with-ollama-on-your-pc/" rel="nofollow ugc"><span>[&hellip;]</span></a></p>
]]></content:encoded>
				
				
							</item>
					<item>
				<guid isPermaLink="false">d8682ce889c7e73a1290ab0018589d48</guid>
				<title>admin wrote a new post</title>
				<link>https://gadgets.indirootsandroutes.com/?p=9712</link>
				<pubDate>Thu, 09 Apr 2026 00:17:59 +0000</pubDate>

									<content:encoded><![CDATA[<p><strong><a href="https://gadgets.indirootsandroutes.com/improving-the-academic-workflow-introducing-two-ai-agents-for-better-figures-and-peer-review/" rel="nofollow ugc">Improving the academic workflow: Introducing two AI agents for better figures and peer review</a></strong>Generative AI</p>
]]></content:encoded>
				
				
							</item>
					<item>
				<guid isPermaLink="false">69713f5fa331d2768d3aa82d325bf6b8</guid>
				<title>admin wrote a new post</title>
				<link>https://gadgets.indirootsandroutes.com/?p=9711</link>
				<pubDate>Thu, 09 Apr 2026 00:13:43 +0000</pubDate>

									<content:encoded><![CDATA[<p><strong><a href="https://gadgets.indirootsandroutes.com/a-comprehensive-implementation-guide-to-modelscope-for-model-search-inference-fine-tuning-evaluation-and-export/" rel="nofollow ugc">A Comprehensive Implementation Guide to ModelScope for Model Search, Inference, Fine-Tuning, Evaluation, and Export</a></strong><a href="https://gadgets.indirootsandroutes.com/a-comprehensive-implementation-guide-to-modelscope-for-model-search-inference-fine-tuning-evaluation-and-export/" rel="nofollow ugc"><img loading="lazy" src="https://gadgets.indirootsandroutes.com/wp-content/uploads/2026/04/2705-3.png" /></a> In this tutorial, we explore <a href="https://gadgets.indirootsandroutes.com/a-comprehensive-implementation-guide-to-modelscope-for-model-search-inference-fine-tuning-evaluation-and-export/" rel="nofollow ugc"><span>[&hellip;]</span></a></p>
]]></content:encoded>
				
				
							</item>
					<item>
				<guid isPermaLink="false">c11adec2ac6d30f202cb404168b68834</guid>
				<title>admin wrote a new post</title>
				<link>https://gadgets.indirootsandroutes.com/?p=9710</link>
				<pubDate>Thu, 09 Apr 2026 00:06:58 +0000</pubDate>

									<content:encoded><![CDATA[<p><strong><a href="https://gadgets.indirootsandroutes.com/meet-osgym-a-new-os-infrastructure-framework-that-manages-1000-replicas-at-0-23-day-for-computer-use-agent-research/" rel="nofollow ugc">Meet OSGym: A New OS Infrastructure Framework That Manages 1,000+ Replicas at $0.23/Day for Computer Use Agent Research</a></strong><a href="https://gadgets.indirootsandroutes.com/meet-osgym-a-new-os-infrastructure-framework-that-manages-1000-replicas-at-0-23-day-for-computer-use-agent-research/" rel="nofollow ugc"><img loading="lazy" src="https://gadgets.indirootsandroutes.com/wp-content/uploads/2026/04/Screenshot-2026-04-08-at-25344-PM-1.png" /></a> Training AI agents that can <a href="https://gadgets.indirootsandroutes.com/meet-osgym-a-new-os-infrastructure-framework-that-manages-1000-replicas-at-0-23-day-for-computer-use-agent-research/" rel="nofollow ugc"><span>[&hellip;]</span></a></p>
]]></content:encoded>
				
				
							</item>
					<item>
				<guid isPermaLink="false">904e74c73e7b47c4906e1c3ec8cd04c3</guid>
				<title>admin wrote a new post</title>
				<link>https://gadgets.indirootsandroutes.com/?p=9709</link>
				<pubDate>Thu, 09 Apr 2026 00:06:40 +0000</pubDate>

									<content:encoded><![CDATA[<p><strong><a href="https://gadgets.indirootsandroutes.com/mustafa-suleyman-ai-development-wont-hit-a-wall-anytime-soon-heres-why/" rel="nofollow ugc">Mustafa Suleyman: AI development won’t hit a wall anytime soon—here’s why</a></strong>We evolved for a linear world. If you walk for an hour, you cover a certain <a href="https://gadgets.indirootsandroutes.com/mustafa-suleyman-ai-development-wont-hit-a-wall-anytime-soon-heres-why/" rel="nofollow ugc"><span>[&hellip;]</span></a></p>
]]></content:encoded>
				
				
							</item>
					<item>
				<guid isPermaLink="false">f96b802897581f1790d06e3df3d6aa8b</guid>
				<title>admin wrote a new post</title>
				<link>https://gadgets.indirootsandroutes.com/?p=9708</link>
				<pubDate>Thu, 09 Apr 2026 00:06:40 +0000</pubDate>

									<content:encoded><![CDATA[<p><strong><a href="https://gadgets.indirootsandroutes.com/the-download-water-threats-in-iran-and-ais-impact-on-what-entrepreneurs-make/" rel="nofollow ugc">The Download: water threats in Iran and AI’s impact on what entrepreneurs make</a></strong>This is today’s edition of The Download, our weekday newsletter t <a href="https://gadgets.indirootsandroutes.com/the-download-water-threats-in-iran-and-ais-impact-on-what-entrepreneurs-make/" rel="nofollow ugc"><span>[&hellip;]</span></a></p>
]]></content:encoded>
				
				
							</item>
					<item>
				<guid isPermaLink="false">dbcbbb7a100f043c5ea97b84bab7a0a3</guid>
				<title>admin wrote a new post</title>
				<link>https://gadgets.indirootsandroutes.com/?p=9707</link>
				<pubDate>Wed, 08 Apr 2026 12:08:58 +0000</pubDate>

									<content:encoded><![CDATA[<p><strong><a href="https://gadgets.indirootsandroutes.com/introducing-the-child-safety-blueprint/" rel="nofollow ugc">Introducing the Child Safety Blueprint</a></strong>Discover OpenAI’s Child Safety Blueprint—a roadmap for building AI responsibly with safeguards, age-appropriate design, and collaboration to protect and empower young people online.</p>
]]></content:encoded>
				
				
							</item>
					<item>
				<guid isPermaLink="false">e7a44b70af1c15e7c541dcd9645137b2</guid>
				<title>admin wrote a new post</title>
				<link>https://gadgets.indirootsandroutes.com/?p=9706</link>
				<pubDate>Wed, 08 Apr 2026 12:08:54 +0000</pubDate>

									<content:encoded><![CDATA[<p><strong><a href="https://gadgets.indirootsandroutes.com/z-ai-introduces-glm-5-1-an-open-weight-754b-agentic-model-that-achieves-sota-on-swe-bench-pro-and-sustains-8-hour-autonomous-execution/" rel="nofollow ugc">Z.AI Introduces GLM-5.1: An Open-Weight 754B Agentic Model That Achieves SOTA on SWE-Bench Pro and Sustains 8-Hour Autonomous Execution</a></strong>Z.AI, the <a href="https://gadgets.indirootsandroutes.com/z-ai-introduces-glm-5-1-an-open-weight-754b-agentic-model-that-achieves-sota-on-swe-bench-pro-and-sustains-8-hour-autonomous-execution/" rel="nofollow ugc"><span>[&hellip;]</span></a></p>
]]></content:encoded>
				
				
							</item>
					<item>
				<guid isPermaLink="false">a2b08e405fdbbf080dffb71b7c4bc618</guid>
				<title>admin wrote a new post</title>
				<link>https://gadgets.indirootsandroutes.com/?p=9705</link>
				<pubDate>Wed, 08 Apr 2026 12:01:30 +0000</pubDate>

									<content:encoded><![CDATA[<p><strong><a href="https://gadgets.indirootsandroutes.com/how-to-combine-google-search-google-maps-and-custom-functions-in-a-single-gemini-api-call-with-context-circulation-parallel-tool-ids-and-multi-step-agentic-chains/" rel="nofollow ugc">How to Combine Google Search, Google Maps, and Custom Functions in a Single Gemini API Call With Context Circulation, Parallel Tool IDs, and Multi-Step Agentic Chains</a></strong><a href="https://gadgets.indirootsandroutes.com/how-to-combine-google-search-google-maps-and-custom-functions-in-a-single-gemini-api-call-with-context-circulation-parallel-tool-ids-and-multi-step-agentic-chains/" rel="nofollow ugc"><img loading="lazy" src="https://gadgets.indirootsandroutes.com/wp-content/uploads/2026/04/25b6.png" /></a> In this tutorial, we explore the latest Gemini API tooling updates Google announced in March 2026, specifically the ability to combine built-in tools like Google Search and Google Maps with custom function calls in a single API request. We walk through five hands-on demos that progressively build on each other, starting with the core tool combination feature and ending with a full multi-tool agentic chain. Along the way, we demonstrate how context circulation preserves every tool call and response across turns, enabling the model to reason over prior outputs; how unique tool response IDs let us map parallel function calls to their exact results; and how Grounding with Google Maps brings real-time location data into our applications. We use gemini-3-flash-preview for tool combination features and gemini-2.5-flash for Maps grounding, so everything we build here runs without any billing setup.    
import subprocess, sys

subprocess.check_call(
    [sys.executable, "-m", "pip", "install", "-qU", "google-genai"],
    stdout=subprocess.DEVNULL,
    stderr=subprocess.DEVNULL,
)

import getpass, json, textwrap, os, time
from google import genai
from google.genai import types

if "GOOGLE_API_KEY" not in os.environ:
    os.environ["GOOGLE_API_KEY"] = getpass.getpass("Enter your Gemini API key: ")

client = genai.Client(api_key=os.environ["GOOGLE_API_KEY"])

TOOL_COMBO_MODEL = "gemini-3-flash-preview"
MAPS_MODEL       = "gemini-2.5-flash"

DIVIDER = "=" * 72

def heading(title: str):
    print(f"\n{DIVIDER}")
    print(f"  {title}")
    print(DIVIDER)

def wrap(text: str, width: int = 80):
    for line in text.splitlines():
        print(textwrap.fill(line, width=width) if line.strip() else "")

def describe_parts(response):
    parts = response.candidates[0].content.parts
    fc_ids = {}
    for i, part in enumerate(parts):
        prefix = f"   Part {i:2d}:"
        if hasattr(part, "tool_call") and part.tool_call:
            tc = part.tool_call
            print(f"{prefix} [toolCall]        type={tc.tool_type}  id={tc.id}")
        if hasattr(part, "tool_response") and part.tool_response:
            tr = part.tool_response
            print(f"{prefix} [toolResponse]    type={tr.tool_type}  id={tr.id}")
        if hasattr(part, "executable_code") and part.executable_code:
            code = part.executable_code.code[:90].replace("\n", " ↵ ")
            print(f"{prefix} [executableCode]  {code}...")
        if hasattr(part, "code_execution_result") and part.code_execution_result:
            out = (part.code_execution_result.output or "")[:90]
            print(f"{prefix} [codeExecResult]  {out}")
        if hasattr(part, "function_call") and part.function_call:
            fc = part.function_call
            fc_ids[fc.name] = fc.id
            print(f"{prefix} [functionCall]    name={fc.name}  id={fc.id}")
            print(f"              └─ args: {dict(fc.args)}")
        if hasattr(part, "text") and part.text:
            snippet = part.text[:110].replace("\n", " ")
            print(f"{prefix} [text]            {snippet}...")
        if hasattr(part, "thought_signature") and part.thought_signature:
            print(f"              └─ thought_signature present ✓")
    return fc_ids


heading("DEMO 1: Combine Google Search + Custom Function in One Request")

print("""
This demo shows the flagship new feature: passing BOTH a built-in tool
(Google Search) and a custom function declaration in a single API call.

Gemini will:
 Turn 1 → Search the web for real-time info, then request our custom
          function to get weather data.
 Turn 2 → We supply the function response; Gemini synthesizes everything.

Key points:
 • google_search and function_declarations go in the SAME Tool object
 • include_server_side_tool_invocations must be True (on ToolConfig)
 • Return ALL parts (incl. thought_signatures) in subsequent turns
""")

get_weather_func = types.FunctionDeclaration(
    name="getWeather",
    description="Gets the current weather for a requested city.",
    parameters=types.Schema(
        type="OBJECT",
        properties={
            "city": types.Schema(
                type="STRING",
                description="The city and state, e.g. Utqiagvik, Alaska",
            ),
        },
        required=["city"],
    ),
)

print("  Turn 1: Sending prompt with Google Search + getWeather tools...\n")

response_1 = client.models.generate_content(
    model=TOOL_COMBO_MODEL,
    contents=(
        "What is the northernmost city in the United States? "
        "What's the weather like there today?"
    ),
    config=types.GenerateContentConfig(
        tools=[
            types.Tool(
                google_search=types.GoogleSearch(),
                function_declarations=[get_weather_func],
            ),
        ],
        tool_config=types.ToolConfig(
            include_server_side_tool_invocations=True,
        ),
    ),
)

print("   Parts returned by the model:\n")
fc_ids = describe_parts(response_1)

function_call_id = fc_ids.get("getWeather")
print(f"\n    Captured function_call id for getWeather: {function_call_id}")

print("\n  Turn 2: Returning function result & requesting final synthesis...\n")

history = [
    types.Content(
        role="user",
        parts=[
            types.Part(
                text=(
                    "What is the northernmost city in the United States? "
                    "What's the weather like there today?"
                )
            )
        ],
    ),
    response_1.candidates[0].content,
    types.Content(
        role="user",
        parts=[
            types.Part(
                function_response=types.FunctionResponse(
                    name="getWeather",
                    response={"response": "Very cold. 22°F / -5.5°C with strong Arctic winds."},
                    id=function_call_id,
                )
            )
        ],
    ),
]

response_2 = client.models.generate_content(
    model=TOOL_COMBO_MODEL,
    contents=history,
    config=types.GenerateContentConfig(
        tools=[
            types.Tool(
                google_search=types.GoogleSearch(),
                function_declarations=[get_weather_func],
            ),
        ],
        tool_config=types.ToolConfig(
            include_server_side_tool_invocations=True,
        ),
    ),
)

print("    Final synthesized response:\n")
for part in response_2.candidates[0].content.parts:
    if hasattr(part, "text") and part.text:
        wrap(part.text)

We install the Google GenAI SDK, securely capture our API key, and define the helper functions that power the rest of the tutorial. We then demonstrate the flagship tool combination feature by sending a single request that pairs Google Search with a custom getWeather function, letting Gemini search the web for real-time geographic data and simultaneously request weather information from our custom tool. We complete the two-turn flow by returning our simulated weather response with the matching function call ID and watching Gemini synthesize both data sources into one coherent answer.

heading("DEMO 2: Tool Response IDs for Parallel Function Calls")

print("""
When Gemini makes multiple function calls in one turn, each gets a unique
`id` field. You MUST return each function_response with its matching id so
the model maps results correctly. This is critical for parallel calls.
""")

time.sleep(2)

lookup_inventory = types.FunctionDeclaration(
    name="lookupInventory",
    description="Check product inventory by SKU.",
    parameters=types.Schema(
        type="OBJECT",
        properties={
            "sku": types.Schema(type="STRING", description="Product SKU code"),
        },
        required=["sku"],
    ),
)

get_shipping_estimate = types.FunctionDeclaration(
    name="getShippingEstimate",
    description="Get shipping time estimate for a destination zip code.",
    parameters=types.Schema(
        type="OBJECT",
        properties={
            "zip_code": types.Schema(type="STRING", description="Destination ZIP code"),
            "sku": types.Schema(type="STRING", description="Product SKU"),
        },
        required=["zip_code", "sku"],
    ),
)

print("  Turn 1: Asking about product availability + shipping...\n")

resp_parallel = client.models.generate_content(
    model=TOOL_COMBO_MODEL,
    contents=(
        "I want to buy SKU-A100 (wireless headphones). "
        "Is it in stock, and how fast can it ship to ZIP 90210?"
    ),
    config=types.GenerateContentConfig(
        tools=[
            types.Tool(
                function_declarations=[lookup_inventory, get_shipping_estimate],
            ),
        ],
    ),
)

fc_parts = []
for part in resp_parallel.candidates[0].content.parts:
    if hasattr(part, "function_call") and part.function_call:
        fc = part.function_call
        fc_parts.append(fc)
        print(f"   [functionCall] name={fc.name}  id={fc.id}  args={dict(fc.args)}")

print("\n  Turn 2: Returning results with matching IDs...\n")

simulated_results = {
    "lookupInventory": {"in_stock": True, "quantity": 342, "warehouse": "Los Angeles"},
    "getShippingEstimate": {"days": 2, "carrier": "FedEx", "cost": "$5.99"},
}

fn_response_parts = []
for fc in fc_parts:
    result = simulated_results.get(fc.name, {"error": "unknown function"})
    fn_response_parts.append(
        types.Part(
            function_response=types.FunctionResponse(
                name=fc.name,
                response=result,
                id=fc.id,
            )
        )
    )
    print(f"   Responding to {fc.name} (id={fc.id}) → {result}")

history_parallel = [
    types.Content(
        role="user",
        parts=[
            types.Part(
                text=(
                    "I want to buy SKU-A100 (wireless headphones). "
                    "Is it in stock, and how fast can it ship to ZIP 90210?"
                )
            )
        ],
    ),
    resp_parallel.candidates[0].content,
    types.Content(role="user", parts=fn_response_parts),
]

resp_parallel_2 = client.models.generate_content(
    model=TOOL_COMBO_MODEL,
    contents=history_parallel,
    config=types.GenerateContentConfig(
        tools=[
            types.Tool(
                function_declarations=[lookup_inventory, get_shipping_estimate],
            ),
        ],
    ),
)

print("\n    Final answer:\n")
for part in resp_parallel_2.candidates[0].content.parts:
    if hasattr(part, "text") and part.text:
        wrap(part.text)

We declare two custom functions, lookupInventory and getShippingEstimate, and send a prompt that naturally triggers both in a single turn. We observe that Gemini assigns each function call a unique ID, which we carefully match when constructing our simulated responses for inventory availability and shipping speed. We then pass the complete history back to the model and receive a final answer that seamlessly combines both results into a customer-ready response.

heading("DEMO 3: Grounding with Google Maps — Location-Aware Responses")

print("""
Grounding with Google Maps connects Gemini to real-time Maps data:
places, ratings, hours, reviews, and directions. Pass lat/lng for
hyper-local results. Available on Gemini 2.5 Flash / 2.0 Flash (free).
""")

time.sleep(2)

print("  3a: Finding restaurants near a specific location...\n")

maps_response = client.models.generate_content(
    model=MAPS_MODEL,
    contents="What are the best Italian restaurants within a 15-minute walk from here?",
    config=types.GenerateContentConfig(
        tools=[types.Tool(google_maps=types.GoogleMaps())],
        tool_config=types.ToolConfig(
            retrieval_config=types.RetrievalConfig(
                lat_lng=types.LatLng(latitude=34.050481, longitude=-118.248526),
            )
        ),
    ),
)

print("   Generated Response:\n")
wrap(maps_response.text)

if grounding := maps_response.candidates[0].grounding_metadata:
    if grounding.grounding_chunks:
        print(f"\n   {'─' * 50}")
        print("    Google Maps Sources:\n")
        for chunk in grounding.grounding_chunks:
            if hasattr(chunk, "maps") and chunk.maps:
                print(f"   • {chunk.maps.title}")
                print(f"     {chunk.maps.uri}\n")

time.sleep(2)
print(f"\n{'─' * 72}")
print("  3b: Asking detailed questions about a specific place...\n")

place_response = client.models.generate_content(
    model=MAPS_MODEL,
    contents="Is there a cafe near the corner of 1st and Main that has outdoor seating?",
    config=types.GenerateContentConfig(
        tools=[types.Tool(google_maps=types.GoogleMaps())],
        tool_config=types.ToolConfig(
            retrieval_config=types.RetrievalConfig(
                lat_lng=types.LatLng(latitude=34.050481, longitude=-118.248526),
            )
        ),
    ),
)

print("   Generated Response:\n")
wrap(place_response.text)

if grounding := place_response.candidates[0].grounding_metadata:
    if grounding.grounding_chunks:
        print(f"\n    Sources:")
        for chunk in grounding.grounding_chunks:
            if
hasattr(chunk, &#8220;maps&#8221;) and chunk.maps:                print(f&#8221;   • {chunk.maps.title} → {chunk.maps.uri}&#8221;)   time.sleep(2) print(f&#8221;n{&#8216;─&#8217; * 72}&#8221;) print(&#8221;  3c: Trip planning with the Maps widget token&#8230;n&#8221;)   trip_response = client.models.generate_content(    model=MAPS_MODEL,    contents=(        &#8220;Plan a day in San Francisco for me. I want to see the &#8221;        &#8220;Golden Gate Bridge, visit a museum, and have a nice dinner.&#8221;    ),    config=types.GenerateContentConfig(        tools=[types.Tool(google_maps=types.GoogleMaps(enable_widget=True))],        tool_config=types.ToolConfig(            retrieval_config=types.RetrievalConfig(                lat_lng=types.LatLng(latitude=37.78193, longitude=-122.40476),            )        ),    ), )   print(&#8221;   Generated Itinerary:n&#8221;) wrap(trip_response.text)   if grounding := trip_response.candidates[0].grounding_metadata:    if grounding.grounding_chunks:        print(f&#8221;n    Sources:&#8221;)        for chunk in grounding.grounding_chunks:            if hasattr(chunk, &#8220;maps&#8221;) and chunk.maps:                print(f&#8221;   • {chunk.maps.title} → {chunk.maps.uri}&#8221;)      widget_token = getattr(grounding, &#8220;google_maps_widget_context_token&#8221;, None)    if widget_token:        print(f&#8221;n     Widget context token received ({len(widget_token)} chars)&#8221;)        print(f&#8221;   Embed in your frontend with:&#8221;)        print(f&#8217;   &#8216;)        print(f&#8217;   &#8216;)    We switch to gemini-2.5-flash and enable Grounding with Google Maps to run three location-aware sub-demos back-to-back. We query for nearby Italian restaurants using downtown Los Angeles coordinates, ask a detailed question about outdoor seating at a specific intersection, and generate a full-day San Francisco itinerary complete with grounding sources and a widget context token. 
We print every Maps source URI and title returned in the grounding metadata, showing how easy it is to build citation-rich, location-aware applications.
<pre><code>heading("DEMO 4: Full Agentic Workflow — Search + Custom Function")

print("""
This combines Google Search grounding with a custom booking function,
all in one request. Context circulation lets the model use Search
results to inform which function to call and with what arguments.

Scenario: "Find a trending restaurant in Austin and book a table."
""")

time.sleep(2)

book_restaurant = types.FunctionDeclaration(
    name="bookRestaurant",
    description="Book a table at a restaurant.",
    parameters=types.Schema(
        type="OBJECT",
        properties={
            "restaurant_name": types.Schema(
                type="STRING", description="Name of the restaurant"
            ),
            "party_size": types.Schema(
                type="INTEGER", description="Number of guests"
            ),
            "date": types.Schema(
                type="STRING", description="Reservation date (YYYY-MM-DD)"
            ),
            "time": types.Schema(
                type="STRING", description="Reservation time (HH:MM)"
            ),
        },
        required=["restaurant_name", "party_size", "date", "time"],
    ),
)

print("  Turn 1: Complex multi-tool prompt...\n")

agent_response_1 = client.models.generate_content(
    model=TOOL_COMBO_MODEL,
    contents=(
        "I'm staying at the Driskill Hotel in Austin, TX. "
        "Find me a highly-rated BBQ restaurant nearby that's open tonight, "
        "and book a table for 4 people at 7:30 PM today."
    ),
    config=types.GenerateContentConfig(
        tools=[
            types.Tool(
                google_search=types.GoogleSearch(),
                function_declarations=[book_restaurant],
            ),
        ],
        tool_config=types.ToolConfig(
            include_server_side_tool_invocations=True,
        ),
    ),
)

print("   Returned parts:\n")
fc_ids = describe_parts(agent_response_1)
booking_call_id = fc_ids.get("bookRestaurant")

if booking_call_id:
    print(f"\n  Turn 2: Simulating booking confirmation...\n")

    history_agent = [
        types.Content(
            role="user",
            parts=[
                types.Part(
                    text=(
                        "I'm staying at the Driskill Hotel in Austin, TX. "
                        "Find me a highly-rated BBQ restaurant nearby that's "
                        "open tonight, and book a table for 4 people at 7:30 PM today."
                    )
                )
            ],
        ),
        agent_response_1.candidates[0].content,
        types.Content(
            role="user",
            parts=[
                types.Part(
                    function_response=types.FunctionResponse(
                        name="bookRestaurant",
                        response={
                            "status": "confirmed",
                            "confirmation_number": "BBQ-2026-4821",
                            "message": "Table for 4 confirmed at 7:30 PM tonight.",
                        },
                        id=booking_call_id,
                    )
                )
            ],
        ),
    ]

    agent_response_2 = client.models.generate_content(
        model=TOOL_COMBO_MODEL,
        contents=history_agent,
        config=types.GenerateContentConfig(
            tools=[
                types.Tool(
                    google_search=types.GoogleSearch(),
                    function_declarations=[book_restaurant],
                ),
            ],
            tool_config=types.ToolConfig(
                include_server_side_tool_invocations=True,
            ),
        ),
    )

    print("    Final agent response:\n")
    for part in agent_response_2.candidates[0].content.parts:
        if hasattr(part, "text") and part.text:
            wrap(part.text)
else:
    print("\n     Model did not request bookRestaurant — showing text response:\n")
    for part in agent_response_1.candidates[0].content.parts:
        if hasattr(part, "text") and part.text:
            wrap(part.text)
</code></pre>
We combine Google Search with a custom bookRestaurant function to simulate a realistic end-to-end agent scenario set in Austin, Texas. We send a single prompt to Gemini, asking it to find a highly rated BBQ restaurant near the Driskill Hotel and book a table for four. We inspect the returned parts to see how the model first searches the web and then calls our booking function with the details it discovers. We close the loop by supplying a simulated confirmation response and letting Gemini deliver the final reservation summary to the user.
<pre><code>heading("DEMO 5: Context Circulation — Code Execution + Search + Function")

print("""
Context circulation preserves EVERY tool call and response in the
model's context, so later steps can reference earlier results.

Here we combine:
 • Google Search (look up data)
 • Code Execution (compute something with it)
 • Custom function (save the result)

The model chains these tools autonomously using context from each step.
""")

time.sleep(2)

save_result = types.FunctionDeclaration(
    name="saveAnalysisResult",
    description="Save a computed analysis result to the database.",
    parameters=types.Schema(
        type="OBJECT",
        properties={
            "title": types.Schema(type="STRING", description="Title of the analysis"),
            "summary": types.Schema(type="STRING", description="Summary of findings"),
            "value": types.Schema(type="NUMBER", description="Key numeric result"),
        },
        required=["title", "summary", "value"],
    ),
)

print("  Turn 1: Research + compute + save (3-tool chain)...\n")

circ_response = client.models.generate_content(
    model=TOOL_COMBO_MODEL,
    contents=(
        "Search for the current US national debt figure, then use code execution "
        "to calculate the per-capita debt assuming a population of 335 million. "
        "Finally, save the result using the saveAnalysisResult function."
    ),
    config=types.GenerateContentConfig(
        tools=[
            types.Tool(
                google_search=types.GoogleSearch(),
                code_execution=types.ToolCodeExecution(),
                function_declarations=[save_result],
            ),
        ],
        tool_config=types.ToolConfig(
            include_server_side_tool_invocations=True,
        ),
    ),
)

print("   Parts returned (full context circulation chain):\n")
fc_ids = describe_parts(circ_response)
save_call_id = fc_ids.get("saveAnalysisResult")

if save_call_id:
    print(f"\n  Turn 2: Confirming the save operation...\n")

    history_circ = [
        types.Content(
            role="user",
            parts=[
                types.Part(
                    text=(
                        "Search for the current US national debt figure, then use code "
                        "execution to calculate the per-capita debt assuming a population "
                        "of 335 million. Finally, save the result using the "
                        "saveAnalysisResult function."
                    )
                )
            ],
        ),
        circ_response.candidates[0].content,
        types.Content(
            role="user",
            parts=[
                types.Part(
                    function_response=types.FunctionResponse(
                        name="saveAnalysisResult",
                        response={"status": "saved", "record_id": "analysis-001"},
                        id=save_call_id,
                    )
                )
            ],
        ),
    ]

    circ_response_2 = client.models.generate_content(
        model=TOOL_COMBO_MODEL,
        contents=history_circ,
        config=types.GenerateContentConfig(
            tools=[
                types.Tool(
                    google_search=types.GoogleSearch(),
                    code_execution=types.ToolCodeExecution(),
                    function_declarations=[save_result],
                ),
            ],
            tool_config=types.ToolConfig(
                include_server_side_tool_invocations=True,
            ),
        ),
    )

    print("    Final response:\n")
    for part in circ_response_2.candidates[0].content.parts:
        if hasattr(part, "text") and part.text:
            wrap(part.text)
else:
    print("\n     Model completed without requesting saveAnalysisResult.")
    for part in circ_response.candidates[0].content.parts:
        if hasattr(part, "text") and part.text:
            wrap(part.text)

heading(" ALL DEMOS COMPLETE")
print("""
Summary of what you've seen:

  1. Tool Combination    — Google Search + custom functions in one call
  2. Tool Response IDs   — Unique IDs for parallel function call mapping
  3. Maps Grounding      — Location-aware queries with real Maps data
  4. Agentic Workflow    — Search + booking function with context circulation
  5. Context Circulation — Search + Code Execution + custom function chain

Key API patterns:
  ┌──────────────────────────────────────────────────────────────────┐
  │  tools=[types.Tool(                                              │
  │      google_search=types.GoogleSearch(),                         │
  │      code_execution=types.ToolCodeExecution(),                   │
  │      function_declarations=[my_func],                            │
  │  )]                                                              │
  │                                                                  │
  │  tool_config=types.ToolConfig(                                   │
  │      include_server_side_tool_invocations=True,                  │
  │  )                                                               │
  └──────────────────────────────────────────────────────────────────┘

Models:
  • Tool combination:  gemini-3-flash-preview (Gemini 3 only)
  • Maps grounding:    gemini-2.5-flash / gemini-2.5-pro / gemini-2.0-flash
  • Both features use the FREE tier with rate limits.

Docs:
""")
</code></pre>
We push context circulation to its fullest by chaining three tools (Google Search, Code Execution, and a custom saveAnalysisResult function) in a single request that researches the US national debt, computes the per-capita figure, and saves the output. We inspect the full chain of returned parts (toolCall, toolResponse, executableCode, codeExecutionResult, and functionCall) to see exactly how context flows from one tool to the next across a single generation. We wrap up by confirming the save operation and printing a summary of every key API pattern we have covered across all five demos.

In conclusion, we now have a practical understanding of the key patterns that power agentic workflows in the Gemini API.
We see that the include_server_side_tool_invocations flag on ToolConfig is the single switch that unlocks tool combination and context circulation, that returning all parts (including thought_signature fields) verbatim in our conversation history is non-negotiable for multi-turn flows, and that matching every function_response.id to its corresponding function_call.id is what keeps parallel execution reliable. We also see how Maps grounding opens up an entire class of location-aware applications with just a few lines of configuration. From here, we encourage extending these patterns by combining URL Context or File Search with custom functions, wiring real backend APIs in place of our simulated responses, or building conversational agents that chain dozens of tools across many turns. Check out the Full Codes here. The post How to Combine Google Search, Google Maps, and Custom Functions in a Single Gemini API Call With Context Circulation, Parallel Tool IDs, and Multi-Step Agentic Chains appeared <a href="https://gadgets.indirootsandroutes.com/how-to-combine-google-search-google-maps-and-custom-functions-in-a-single-gemini-api-call-with-context-circulation-parallel-tool-ids-and-multi-step-agentic-chains/" rel="nofollow ugc"><span>[&hellip;]</span></a></p>
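<p>The ID-matching rule above can be sketched without the SDK at all. The snippet below is a minimal, illustrative sketch (plain dicts stand in for types.FunctionCall and types.FunctionResponse, and match_responses is a hypothetical helper name, not part of the Gemini API): each response simply echoes the id of the call it answers, which is what lets the model pair results with parallel calls.</p>

```python
# Illustrative, SDK-free sketch of the parallel tool-ID matching pattern.
# Plain dicts stand in for types.FunctionCall / types.FunctionResponse.

def match_responses(function_calls, results_by_name):
    """Pair each function call with its result, echoing the call's id."""
    responses = []
    for call in function_calls:
        result = results_by_name.get(call["name"], {"error": "unknown function"})
        responses.append({
            "name": call["name"],
            "response": result,
            "id": call["id"],  # must match the originating function_call.id
        })
    return responses

calls = [
    {"name": "lookupInventory", "id": "call-1"},
    {"name": "getShippingEstimate", "id": "call-2"},
]
results = {
    "lookupInventory": {"in_stock": True},
    "getShippingEstimate": {"days": 2},
}
paired = match_responses(calls, results)
print([r["id"] for r in paired])  # ['call-1', 'call-2']
```

<p>In the real demos, each dict in paired would become a types.Part(function_response=...) appended to the conversation history, exactly as in the fn_response_parts loop earlier.</p>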
]]></content:encoded>
				
				
							</item>
					<item>
				<guid isPermaLink="false">78642ec6eb07c5b53bb7d3b8569ec272</guid>
				<title>admin wrote a new post</title>
				<link>https://gadgets.indirootsandroutes.com/?p=9704</link>
				<pubDate>Wed, 08 Apr 2026 00:15:31 +0000</pubDate>

									<content:encoded><![CDATA[<p><strong><a href="https://gadgets.indirootsandroutes.com/desalination-plants-in-the-middle-east-are-increasingly-vulnerable/" rel="nofollow ugc">Desalination plants in the Middle East are increasingly vulnerable</a></strong>MIT Technology Review Explains: Let our writers untangle the complex, messy <a href="https://gadgets.indirootsandroutes.com/desalination-plants-in-the-middle-east-are-increasingly-vulnerable/" rel="nofollow ugc"><span>[&hellip;]</span></a></p>
]]></content:encoded>
				
				
							</item>
					<item>
				<guid isPermaLink="false">859ba159c9478221a9f4d0b2e1871248</guid>
				<title>admin wrote a new post</title>
				<link>https://gadgets.indirootsandroutes.com/?p=9703</link>
				<pubDate>Wed, 08 Apr 2026 00:07:01 +0000</pubDate>

									<content:encoded><![CDATA[<p><strong><a href="https://gadgets.indirootsandroutes.com/enabling-agent-first-process-redesign/" rel="nofollow ugc">Enabling agent-first process redesign</a></strong><a href="https://gadgets.indirootsandroutes.com/enabling-agent-first-process-redesign/" rel="nofollow ugc"><img loading="lazy" src="https://gadgets.indirootsandroutes.com/wp-content/uploads/2026/04/Deloitte-Graphic.png" /></a> Unlike static, rules-based systems, AI agents can learn, adapt, and optimize processes dynamically. As they <a href="https://gadgets.indirootsandroutes.com/enabling-agent-first-process-redesign/" rel="nofollow ugc"><span>[&hellip;]</span></a></p>
]]></content:encoded>
				
				
							</item>
		
	</channel>
</rss>