WordPress.org blog: WordPress 6.9.3 and 7.0 beta 4

WordPress 6.9.2 was released earlier today and addressed 10 security issues.

A few users have subsequently reported an issue where the front end of their site was appearing blank after updating to 6.9.2. The issue has been narrowed down to some themes using an unusual approach to loading template files via “stringable objects” instead of primitive strings for file paths.

Although this is not an officially supported approach to loading template files in WordPress (the template_include filter only accepts a string), it nevertheless caused some sites to break. As a result, the Security Team has decided to address this in a fast-follow 6.9.3 release.

As always, it is recommended that you update your sites to the latest version of WordPress immediately. This ensures your site is protected by all available security fixes in 6.9.2 and that you will not be affected by the bug fixed in 6.9.3.

Many thanks to those who reported the issue, assisted in narrowing down the problem, and helped with the fix.

You can download WordPress 6.9.3 from WordPress.org, or visit your WordPress Dashboard, click “Updates”, and then click “Update Now”. If you have sites that support automatic background updates, the update process will begin shortly. You don’t have to do a thing!

For more information on WordPress 6.9.3, please visit the version page on the HelpHub site.

WordPress 7.0 beta 4

The next major release of WordPress will be version 7.0, which is planned for April 9, 2026. The Security Team has decided to package a new beta release (7.0 beta 4) to keep everyone protected from the patched vulnerabilities, including the dedicated members of the community focusing their time and effort on testing the upcoming release.

This will be an additional beta release in the 7.0 release cycle. The schedule will remain the same going forward, but with five total beta releases instead of the previously planned four. The next 7.0 beta release is still scheduled for Thursday, March 12th.

This beta version of the WordPress software is still under development. Please do not install, run, or test WordPress 7.0 beta versions on production or mission-critical websites. Instead, you should evaluate Beta 4 on a test server and site.

  • Plugin: Install and activate the WordPress Beta Tester plugin on a WordPress install. (Select the “Bleeding edge” channel and “Beta/RC Only” stream.)
  • Direct Download: Download the Beta 4 version (zip) and install it on a WordPress website.
  • Command Line: Use this WP-CLI command: wp core update --version=7.0-beta4
  • WordPress Playground: Use the WordPress Playground instance to test the software directly in your browser. No setup is required – just click and go!

Beta 4 updates and highlights

WordPress 7.0 Beta 4 contains the ten security patches shipped in WordPress 6.9.2, plus 49 updates and fixes since the Beta 3 release: 14 in the Editor and 35 in Core.

Each beta cycle focuses on bug fixes, and more are on the way thanks to your help with testing. You can browse the technical details for all issues addressed since Beta 3.

As always, a successful release depends on your confirmation during testing. So please download and test!

Props @peterwilson, @desrosj, @marybaum, @amykamala for peer reviewing.

As Open Models Spark AI Boom, NVIDIA Jetson Brings It to Life at the Edge

The Cat 306 CR mini-excavator weighs just under eight tons and fits inside a standard shipping container. It’s the machine a contractor rents when the job site is tight: a utility trench near a foundation, a basement dig in a dense neighborhood.

The cab is roughly the size of a phone booth. The operator sits close to the controls: two joysticks, multiple functions per hand. It takes time to learn. It takes longer to speed up.

At CES earlier this year, that machine answered questions.

In the demo, the Cat AI Assistant ran on NVIDIA Jetson Thor, an edge AI platform built for real‑time inference in industrial and robotic systems. NVIDIA Nemotron speech models handle fast, accurate natural voice interactions, while Qwen3 4B, served locally via vLLM, interprets requests and generates responses with low latency. No cloud link is required.
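For developers curious how such a local setup is wired together: a vLLM server exposes an OpenAI-compatible HTTP endpoint that can be queried with the standard library alone. This is a minimal sketch; the endpoint URL and model tag are assumptions for a typical local deployment, not details from Caterpillar’s demo.

```python
import json
import urllib.request

# Assumed local vLLM endpoint and model tag -- adjust for your own setup.
VLLM_URL = "http://localhost:8000/v1/chat/completions"
MODEL = "Qwen/Qwen3-4B"

def build_chat_request(prompt: str, max_tokens: int = 256) -> dict:
    """Build an OpenAI-compatible chat-completion payload for a local vLLM server."""
    return {
        "model": MODEL,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": max_tokens,
    }

def ask_local(prompt: str) -> str:
    """POST the request to the local server -- no cloud link involved."""
    data = json.dumps(build_chat_request(prompt)).encode()
    req = urllib.request.Request(
        VLLM_URL, data=data, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    return body["choices"][0]["message"]["content"]
```

Because the model is served on-device, every request stays on the local network, which is what keeps latency low and data private.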

Beyond enterprise innovation, open models unlock new possibilities for developers to build and experiment freely. Running OpenClaw on NVIDIA Jetson enables developers to create private, always-on AI assistants at the edge — with no API costs and full data privacy.

All Jetson developer kits support OpenClaw, offering the flexibility to switch across open models from 2 billion parameters to 30 billion. With a frontier-class AI assistant running locally, users can power morning briefings, automate daily tasks, perform code reviews and control smart home systems — all in real time.
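How might an assistant choose among open models from 2 billion to 30 billion parameters? One simple, hypothetical heuristic keys the choice to available device memory; the tier thresholds and model tags below are illustrative assumptions, not official Jetson guidance.

```python
def pick_model(vram_gb: float) -> str:
    """Hypothetical sizing heuristic: roughly 1 GB per billion parameters
    at 8-bit weights, plus headroom for the KV cache and runtime."""
    tiers = [  # (minimum memory in GB, illustrative model tier)
        (40.0, "open-30b"),
        (12.0, "open-9b"),
        (6.0, "open-4b"),
        (0.0, "open-2b"),
    ]
    for min_gb, tag in tiers:
        if vram_gb >= min_gb:
            return tag
    return tiers[-1][1]
```

A picker like this is what lets the same assistant code run unchanged across the Jetson lineup, falling back to a smaller model where memory is tighter.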

From the Cloud to the Edge

For most of their recent history, open models lived where it was easiest to support them. 

They ran in data centers, backed by elastic compute and persistent networks. Cloud deployments carry costs in latency and ongoing compute spend that scale with every query.

Physical systems optimize for something else: low latency, because machines interact with people and environments; limited power, because devices have hard limits; and consistent behavior, because variability introduces risk.

There’s also a supply question. Memory shortages have driven up costs across the industry. Jetson brings compute and memory together in a system-on-module, accelerating customer hardware design and making sourcing and validation easier than with discrete component approaches.

And as models have grown more efficient, developers have also started asking a different question. Not which model performs best in isolation, but where it makes sense to run. 

More often, the answer is on the device, starting from Jetson Orin Nano 8GB for entry-level generative AI models.  

Building Autonomous Physical AI Systems at Scale

For physical AI systems, generative AI models are expanding what’s possible. 

Caterpillar’s in-cab Cat AI Assistant, which is in development, runs speech and language models locally alongside trusted machine context, supporting operator guidance and safety features.

At CES, Franka Robotics showed what that looks like in robotics. The company’s FR3 Duo dual-arm system ran the NVIDIA GR00T N1.6 model end-to-end onboard, from perception to motion, with no task scripting. The policy executes locally.

In robotics research, the SONIC project from NVIDIA’s GEAR Lab trains a humanoid controller on over 100 million frames of motion-capture data, then deploys the resulting policy on a physical robot where the kinematic planner runs on Jetson Orin at around 12 milliseconds per pass. The policy loop runs at 50 Hz. Everything executes onboard.
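Those numbers imply a comfortable timing margin, which a couple of lines of arithmetic make concrete:

```python
def control_budget_ms(loop_hz: float, planner_ms: float) -> float:
    """Time remaining in each control cycle after the planner pass."""
    period_ms = 1000.0 / loop_hz  # 50 Hz -> 20 ms per cycle
    return period_ms - planner_ms

# A ~12 ms planner pass inside a 20 ms cycle leaves ~8 ms of headroom
# for perception, state estimation and actuation each cycle.
headroom_ms = control_budget_ms(loop_hz=50, planner_ms=12)
```

That headroom is why everything can execute onboard: the planner fits inside the 50 Hz policy loop with time to spare.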

The pattern reaches into the developer community. A team from UIUC’s SIGRobotics club built a dual-arm matcha-making robot on Jetson Thor running the GR00T N1.5 model. It took first place at an NVIDIA embodied AI hackathon.

This research momentum continues at the NYU Center for Robotics and Embodied Intelligence. The group recently ran its YOR robot on Jetson Thor, using NVIDIA Blackwell compute to handle the heavy processing required for AI-driven movement. Early results show YOR performing intricate pick-and-place tasks with better generalization to new objects and robustness to scene variation, accelerating readiness for a wide range of household tasks like cooking and laundry.

Independent researchers are finding the same. Andrés Marafioti, a multimodal research lead at Hugging Face, built an agentic AI system on Jetson AGX Orin that routes tasks across models and schedules its own work. Late one night, the agent sent him a message: Go to sleep. Everything will be ready by morning.

Developer Ajeet Singh Raina from the Collabnix community has shown how to run OpenClaw on NVIDIA Jetson Thor for a personal AI assistant that runs 24/7. This setup allows for private large language model inference for the user’s own data while the system manages emails and calendars through a local gateway.

Jetson Is the New Standard

NVIDIA Jetson has become a common platform for running open models at the edge.

It supports a wide range of open models and AI frameworks, giving developers flexibility for almost any generative AI workload at the edge. 

Model benchmarks are available at Jetson AI Lab, along with tutorials from the open model community. Jetson Thor delivers leadership inference performance across all major generative AI models.

Gemma: Built on Google’s Gemini research, Gemma 3 is a versatile workhorse for Jetson. It is multimodal out of the box, which means it can see and talk in over 140 languages. On Jetson Thor, it handles a massive 128K context window. This makes it perfect for robots that need to remember a long list of complex or multistep instructions.

gpt-oss-20B: This model from OpenAI lowers the barrier to deploying advanced AI by delivering near state-of-the-art reasoning performance in a model that can run locally on Jetson Thor and Orin for cost-efficient inference. 

Mistral AI: The new Mistral 3 open model family delivers industry-leading accuracy, efficiency and customization capabilities for developers and enterprises. This family includes small, dense models ranging from 3B to 14B, fast and remarkably smart for their size. Jetson developers can use the vLLM container on NVIDIA Jetson Thor to achieve 52 tokens per second for single concurrency, with scaling up to 273 tokens per second with concurrency of eight.
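Those two figures also tell you what each concurrent request sees. A quick calculation using only the numbers quoted above:

```python
# Throughput figures quoted for Mistral 3 small models on Jetson Thor
# via the vLLM container: single stream vs. eight concurrent streams.
SINGLE_TPS = 52.0
AGGREGATE_TPS_AT_8 = 273.0

def per_stream_tps(aggregate_tps: float, concurrency: int) -> float:
    """Average tokens/second each concurrent request sees."""
    return aggregate_tps / concurrency

each = per_stream_tps(AGGREGATE_TPS_AT_8, 8)      # ~34 tokens/s per request
total_speedup = AGGREGATE_TPS_AT_8 / SINGLE_TPS   # 5.25x aggregate throughput
```

In other words, batching eight requests trades some per-stream speed for more than a 5x gain in total throughput, a common pattern for serving multiple users from one device.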

NVIDIA Cosmos: This leading, open, reasoning vision language model enables robots and AI agents to see, understand and act in the physical world like humans. Both the 8B and 2B models run on Jetson to deliver advanced spatial-temporal perception and reasoning capabilities. 

NVIDIA Isaac GR00T N1.6 is an open vision language action model (VLA) for generalist robot skills. Developers can use it to build robots that perceive their environment, reason about instructions and act across a wide range of tasks, environments and embodiments. On Jetson Thor, the full GR00T N1.6 pipeline executes onboard, delivering real-time perception, spatial awareness and responsive action.

NVIDIA Nemotron: A family of open models, datasets and technologies that empower users to build efficient, accurate and specialized agentic AI systems. It’s designed for advanced reasoning, coding, visual understanding, agentic tasks, safety, speech and information. The Nemotron 3 Nano 9B model effectively runs on Jetson Orin Nano Super with llama.cpp with 9 tokens per second performance. 

PI 0.5: A VLA model from Physical Intelligence that enables robots to understand instructions and autonomously execute complex real-world tasks with strong generalization and real-time adaptability, while NVIDIA Jetson Thor delivers 120 action tokens per second to power responsive, low-latency physical AI deployment.

Qwen 3.5: This family of models from Alibaba, including the latest Qwen 3.5 releases, offers a mix of dense and mixture‑of‑experts models that deliver strong reasoning, coding, multimodal understanding and long‑context performance. Jetson Thor delivers optimized performance across Qwen models like the Qwen 3.5-35B-A3B model, which reasons at 35 tokens per second, making real-time interactivity possible.

Any developer can fine-tune these models to create specialized physical AI agents and seamlessly deploy them into physical AI systems. The NVIDIA Jetson platform supports popular AI frameworks, including NVIDIA TensorRT, llama.cpp, Ollama, vLLM, SGLang and more.

Take On Open Models on Jetson

Developers can dive into Hugging Face tutorials — including Deploying Open Source Vision Language Models on Jetson — and catch the latest livestream. Follow the tutorial to run OpenClaw on NVIDIA Jetson.

Join GTC 2026 next month to see it all in action. NVIDIA will show how open models are moving from data centers into machines operating in the physical world, including in a panel on the Future of Industrial Autonomy.

Watch the GTC keynote from NVIDIA founder and CEO Jensen Huang and explore physical AI, robotics and vision AI sessions.

WordPress.org blog: WordPress 6.9.2 Release

WordPress 6.9.2 is now available!

This is a security release that features several fixes.

Because this is a security release, it is recommended that you update your sites immediately.

You can download WordPress 6.9.2 from WordPress.org, or visit your WordPress Dashboard, click “Updates”, and then click “Update Now”. If you have sites that support automatic background updates, the update process will begin automatically.

The next major release will be version 7.0, which is planned for April 9th, 2026.

For more information on WordPress 6.9.2, please visit the version page on the HelpHub site.

Security updates included in this release

The security team would like to thank the following people for responsibly reporting vulnerabilities, and allowing them to be fixed in this release:

  • A Blind SSRF issue reported by sibwtf, and subsequently by several other researchers while the fix was being worked on
  • A POP chain weakness in the HTML API and Block Registry reported by Phat RiO
  • A regex DoS weakness in numeric character references reported by Dennis Snell of the WordPress Security Team
  • A stored XSS in nav menus reported by Phill Savage
  • An AJAX query-attachments authorization bypass reported by Vitaly Simonovich
  • A stored XSS via the data-wp-bind directive reported by kaminuma
  • An XSS that allows overriding client-side templates in the admin area reported by Asaf Mozes
  • A PclZip path traversal issue reported independently by Francesco Carlucci and kaminuma
  • An authorization bypass on the Notes feature reported by kaminuma
  • An XXE in the external getID3 library reported by Youssef Achtatal

The WordPress security team has worked with the maintainer of the external getID3 library, James Heinrich, to coordinate a fix to getID3. A new version of getID3 is available here.

As a courtesy, these fixes are being backported, where necessary, to all branches eligible to receive security fixes (currently through 4.7). As a reminder, only the most recent version of WordPress is actively supported. The backports are in progress and will ship as they become ready.

Thank you to these WordPress contributors

This release was led by John Blackbourn. In addition to the security researchers mentioned above, WordPress 6.9.2 would not have been possible without the contributions of the following people: Dennis Snell, Alex Concha, Jon Surrell, Isabel Brison, Peter Wilson, Jonathan Desrosiers, Jb Audras, Luis Herranz, Aaron Jorbin, Weston Ruter, and Dominik Schilling.

NVIDIA Virtualizes Game Development With RTX PRO Server

Game development teams are working across larger worlds, more complex pipelines and more distributed teams than ever. At the same time, many studios still rely on fixed, desk-bound GPU hardware for critical production work.

At the Game Developers Conference (GDC) this week in San Francisco, NVIDIA is showcasing a new approach to bring together disparate workflows using virtualized game development on NVIDIA RTX PRO Servers, powered by NVIDIA RTX PRO 6000 Blackwell Server Edition GPUs and NVIDIA vGPU software.

With the RTX PRO Server, studios can centralize and virtualize core workflows across creative, engineering, AI research and quality assurance (QA) — all on shared GPU infrastructure in the data center. 

This enables teams to maintain the responsiveness and visual fidelity they expect from workstation-class systems while improving infrastructure utilization, scalability, data security and operational consistency across teams and locations.

Simplifying Complex Workflows

As game development studios scale, hardware can often sit underutilized in one location while other teams wait to access it for production work. QA capacity is hard to expand quickly. Over time, workstation hardware, drivers and tools diverge, making bugs harder to reproduce. AI workloads are often isolated on separate infrastructure, creating more operational overhead. 

The NVIDIA RTX PRO Server helps studios move from workstation-by-workstation scaling to centralized GPU infrastructure. Studios can pool resources, allocate performance by workload and support parallel development, testing and AI workflows without expanding physical workstation sprawl.

Centralized GPU infrastructure enables studios to run AI training, simulation and game automation workloads overnight, then dynamically reallocate the same resources to interactive development during the day, improving overall utilization and reducing idle capacity.
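A minimal sketch of such a day/night policy, assuming fixed studio hours; the times and workload names here are hypothetical, not a product feature.

```python
from datetime import time

def workload_for(now: time) -> str:
    """Hypothetical day/night allocation policy: interactive virtual
    workstations during studio hours, batch training/QA jobs overnight."""
    day_start, day_end = time(8, 0), time(20, 0)
    if day_start <= now < day_end:
        return "interactive-workstations"
    return "batch-training-and-qa"
```

In practice an orchestrator would apply a policy like this when reassigning vGPU profiles, so the same silicon earns its keep around the clock.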

The NVIDIA RTX PRO Server supports virtualized workflows for 3D graphics and AI across the game development lifecycle for:

  • Artists: Providing virtual RTX workstations for traditional 3D and generative AI content-creation workflows.
  • Developers: Powering consistent, high-performance engineering environments for coding and 3D development.
  • AI researchers: Offering large-memory GPU profiles for fine-tuning, inference and AI agents.
  • QA teams: Enabling scalable game validation and performance testing using the same NVIDIA Blackwell architecture used by GeForce RTX 50 Series GPUs.

This allows studios to support multiple teams — including across sites and contractors — on one common GPU platform, improving collaboration and reducing debugging issues that can arise from disparate hardware.

Supporting AI and Engineering on Shared Infrastructure

AI is becoming a core part of everyday game development, spanning coding, content creation, testing and live operations. As these workflows expand, studios need infrastructure that can support AI alongside traditional graphics workloads without introducing separate, siloed systems.

With the RTX PRO Server, studios can support coding agents, internal model experimentation and AI-assisted production workflows without spinning up a separate AI stack for every team.

The NVIDIA RTX PRO 6000 Blackwell Server Edition GPU features a massive 96GB memory buffer, enabling developers to run multiple demanding applications simultaneously while supporting AI inference on larger models directly alongside real-time graphics workflows.

NVIDIA Multi-Instance GPU (MIG) technology partitions a single GPU into isolated instances with dedicated memory, compute and cache resources. Combined with NVIDIA vGPU software, MIG can help studios securely allocate GPU capacity across users and workloads. In combined MIG and vGPU configurations, a single RTX PRO 6000 Blackwell Server Edition GPU can support up to 48 concurrent users, maximizing utilization while maintaining performance isolation.
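The 48-user figure is easy to sanity-check against the card’s frame buffer, assuming an even split into vGPU profiles (real deployments can mix profile sizes):

```python
def frame_buffer_per_user_gb(total_gb: float, max_users: int) -> float:
    """Even split of the card's frame buffer across vGPU profiles."""
    return total_gb / max_users

# A 96GB RTX PRO 6000 Blackwell Server Edition divided among 48 users
# corresponds to 2GB per profile -- an illustrative even split.
per_user_gb = frame_buffer_per_user_gb(96.0, 48)
```

Heavier workloads, such as AI fine-tuning, would instead get larger MIG instances with fewer users per card.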

Enterprise-Ready Deployment for Game Studios

NVIDIA RTX PRO Servers are designed for enterprise-grade data-center operations. Studios can deploy virtual workstations on RTX PRO Servers via NVIDIA vGPU on supported hypervisor and remote workstation platforms.

That means RTX PRO Servers can fit into studios’ existing infrastructure and IT practices, rather than requiring one-off deployments.

Major game publishers already use NVIDIA vGPU technology to scale centralized development infrastructure and improve efficiency at studio scale.

Learn more about the NVIDIA RTX PRO Server.

See these workflows live by joining NVIDIA’s booth 1426 at GDC or attending NVIDIA GTC, running March 16-19 in San Jose, California. 

See notice regarding software product information.

NVIDIA and ComfyUI Streamline Local AI Video Generation for Game Developers and Creators at GDC

Game developers and artists are building cinematic worlds and iconic characters — raising the bar for immersive experiences on NVIDIA RTX AI PCs

At the Game Developers Conference (GDC) in San Francisco this week, NVIDIA announced a suite of updates that streamline AI video generation for concept development and storyboarding on RTX GPUs and the NVIDIA DGX Spark desktop supercomputer.

These announcements include:

  • ComfyUI’s new App View with a simplified interface, lowering the barrier for entry for the popular generative AI tool.
  • RTX Video Super Resolution available for ComfyUI, a real-time 4K upscaler ideal for video generation — also available for developers as a Python Wheel.
  • NVFP4 and FP8 model variants are available today for FLUX.2 Klein, with NVFP4 support for LTX-2.3 coming soon, delivering up to 2.5x performance gains and 60% lower memory usage for both models.

Frictionless Local AI: Collaborate, Optimize, Customize

Many of today’s popular AI applications are making it easier for beginners to try state-of-the-art models directly on their laptop or desktop.

For artists unfamiliar with node graphs, ComfyUI’s new App View presents workflows in a simplified interface. Users only need to enter a prompt, adjust simple parameters and hit generate. The full node-based experience remains available as Node View, and users can seamlessly switch between the two modes.

App View is compatible with the RTX optimizations in ComfyUI. Performance on RTX GPUs has improved 40% since September, and ComfyUI now supports the NVFP4 and FP8 data formats natively. All combined, performance is 2.5x faster with 60% less VRAM using the NVFP4 format on NVIDIA GeForce RTX 50 Series GPUs, and 1.7x faster with 40% less VRAM using FP8.
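To see what those reductions mean in practice, here is the arithmetic applied to an assumed, not measured, 20GB full-precision footprint:

```python
def reduced_vram_gb(baseline_gb: float, reduction: float) -> float:
    """Apply a fractional VRAM reduction to a baseline footprint."""
    return baseline_gb * (1.0 - reduction)

# Illustrative baseline only -- 20 GB is an assumed full-precision footprint.
nvfp4_gb = reduced_vram_gb(20.0, 0.60)  # NVFP4: 60% lower -> 8 GB
fp8_gb = reduced_vram_gb(20.0, 0.40)    # FP8: 40% lower -> 12 GB
```

Savings at that scale are what bring larger video models within reach of consumer GPU memory budgets.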

At CES in January, NVIDIA announced several models released with NVFP4 and FP8 support. Now more NVFP4 and FP8 models are available directly in ComfyUI: FLUX.2 Klein 4B, FLUX.2 Klein 9B and LTX-2.3 (NVFP4 support coming soon). To get started, download the NVFP4 and FP8 checkpoints directly from Hugging Face, load the default workflows in ComfyUI via the Template Browser and replace the default model checkpoint with the newly downloaded one.

App View mode is available today. Learn more on ComfyUI.

Faster 4K Video Generation 

Getting high-quality video outputs often means juggling three constraints: speed, VRAM and control. While many artists ultimately want 4K quality, most prefer to generate smaller, faster previews first, and then upscale them. Today’s upscalers take minutes to upscale a 10‑second clip into 4K resolution.

Now, users can quickly upscale generated video to 4K with NVIDIA RTX Video Super Resolution, available as a node for ComfyUI. RTX Video can be accessed as a standalone node for building video workflows from scratch.

For AI developers, NVIDIA released a free Python package available via the PyPI repository, along with sample code on GitHub and a VFX Python bindings guide, to get started quickly. The package provides programmatic access to the same AI upscaling technology that powers RTX Video, running directly on RTX GPU Tensor Cores to deliver 4K upscaling 30x faster than alternative popular local upscalers, and at a fraction of the VRAM cost. The package is powered by the NVIDIA Video Effects software development kit.

Generative AI model performance for LTX-2 and FLUX.2 Klein 9B, tested on an NVIDIA RTX 5090 GPU. LTX-2: 512×768 resolution, 100 frames, 20 steps. FLUX.2 Klein 9B (base): 1024×1024 resolution, 20 steps.

Ready to get started with ComfyUI? Check out the latest NVIDIA Studio Sessions tutorial hosted by visual effects artist Max Novak for a guided walkthrough:

#ICYMI: The Latest Updates for RTX AI PCs at GDC

🎉Join NVIDIA at GTC, March 16-19 in San Jose! Check out “Create Generative AI Workflow for Design and Visualization in ComfyUI” on March 17, for a training session led by NVIDIA 3D workflow specialists focused on building RTX-accelerated generative workflows for images, video, 3D, and PBR materials. Register today and explore the session catalog.

💡LTX Desktop is a fully local, open-source video editor running directly on the LTX engine, optimized for NVIDIA GPUs and compatible hardware.

🦥 LM Link connects separate devices running LM Studio, allowing models to run on remote machines as if they were local. It’s ideal for users wanting to run an agent on their laptop while still accessing free and private AI, powered by their DGX Spark or RTX desktop. Learn how to run LM Studio on DGX Spark.

🎮On Tuesday, March 31, as part of the next opt-in NVIDIA App beta, overrides for NVIDIA DLSS 4.5 Dynamic Multi Frame Generation and DLSS 4.5 Multi Frame Generation 6x Mode will be released for GeForce RTX 50 Series owners. Learn about NVIDIA news at GDC.

🤖Next month, a new NVIDIA RTX Remix update will introduce Advanced Particle VFX, enabling modders to create a wide array of particle effects that further improve image quality, detail and immersion.

🦄Topaz Labs has collaborated with NVIDIA to optimize NeuroStream for NVIDIA GPUs — a proprietary VRAM optimization that allows complex AI models to run on consumer hardware.

📃Microsoft has introduced support for VoiceMod, one of the first apps to enable Windows ML for GPU inference, significantly improving performance and voice quality compared with running on CPUs.

Plug in to NVIDIA AI PC on Facebook, Instagram, TikTok and X — and stay informed by subscribing to the RTX AI PC newsletter. Follow NVIDIA Workstation on LinkedIn and X.

See notice regarding software product information.
