Featured
Reports
Scott Gutterman from the PGA TOUR discusses the new Studios and the impact on fan experience
Zeus Kerravala and Scott Gutterman, SVP of Digital and Broadcast Technologies, discuss the expansion of the PGA TOUR Studios from […]
Philippe Dore, CMO of BNP Paribas Tennis Tournament talks innovation
April 2025 // Zeus Kerravala from ZK Research interviews Philippe Dore, CMO of the BNP Paribas tennis tournament. Philippe discusses […]
Nathan Howe, VP of Global Innovation at Zscaler talks mobile security
March 2025 // Zeus Kerravala from ZK Research interviews Nathan Howe, VP of Global Innovation at Zscaler, about their new […]
Check out
OUR NEWEST VIDEOS
2026 ZKast #10 - How AI & Wi-Fi 7 are Revolutionizing the Retail Experience | NRF 2026
2026 ZKast #9 - Why Zoom is Embedding Everywhere: From Drones to Robotics | Brendan Ittelson CES26
2026 ZKast #8 - Four Cyber Security Predictions From Will and Zeus
Recent
ZK Research Blog
News
I’m a big fan of any technology that makes our lives easier. One example of this is Amazon’s Just Walk Out technology, which I consider the easiest checkout experience available today. Customers tap their credit card on a reader, walk into a store, pick up whatever they want and then, as the name suggests, just walk out of the store, with everything charged to their account.
One of my goals at AWS re:Invent was to find out what’s new with Just Walk Out and what to look forward to. At the event, I met with both Rajiv Chopra, vice president of JWO for AWS, and Sarah Yacoub, senior manager of product marketing, to get an update.
Here are some of the key updates to Just Walk Out:
Technology deployment and infrastructure improvements
- Shift to “lane approach”: Instead of a full store retrofit with cameras covering the entire space, the current stadium deployments are using a “lane of cameras” effectively placed outside the concession area. This significantly reduces the infrastructure size, camera count, and build-out complexity, making it easier to attach to existing structures. It also obviates the need to do large-scale construction to deploy a Just Walk Out store.
- Reduced infrastructure footprint: Improvements have been made in the size of the additional MDF and backroom area required, leading to a smaller physical footprint.
- Cost reductions through technology optimization: Over the past few years, Just Walk Out has reduced deployment costs by approximately 50% through a combination of technology improvements and operational efficiencies. The AI algorithms have become exponentially more efficient, now handling variable ceiling heights (as low as six to seven feet), sloped floors, and inconsistent ceilings without requiring expensive general contracting changes. Installation has been simplified through retrofitting capabilities that reuse existing fixtures, gate plates that eliminate the need to core into cement (reducing permitting requirements) and streamlined camera plans requiring less low voltage wiring. These improvements reduce not just Just Walk Out costs, but total deployment costs including general contractor, electrician and designer expenses — making the technology more accessible across verticals.
Operational models and experience
- Just Walk Out becomes “just walk in”: In some travel locations, like Hudson Nonstop, the requirement to tap a credit card to enter was viewed by some consumers as a barrier to entry. That requirement has now been removed: shoppers enter freely, browse and select items, and checkout/payment happens at the exit instead of via the traditional tap-to-enter model. This reduces customer apprehension at the entrance while still delivering the same frictionless experience.
- Shrink and loss prevention: Loss prevention is a huge metric for Just Walk Out. During my conversation, Yacoub mentioned that retailers using the technology have seen double-digit percentage decreases in loss, making it a significantly better solution than manned self-checkout, which is often subject to being tricked. The cameras see every consumer activity and also act as a psychological deterrent. Stores with high levels of theft, such as CVS and Target, could benefit greatly from Just Walk Out, which presents a much better alternative than locking up merchandise. Retailers have used cost as an excuse, but the savings from reduced loss would far outweigh the cost of Just Walk Out.
Market expansion and adoption
- Global presence: Just Walk Out, initially launched in the U.S., is now available in Canada, Australia, the U.K. and France, with more countries to come.
- Store count: The company is currently quoting over 300 locations and expects multi-store deals to continue increasing this number significantly in 2026.
- New verticals and value propositions:
- Stadiums: This continues to be a focus for Just Walk Out with many new stadiums being turned up in the U.S. and internationally, including Allianz Stadium in the U.K., Marvel Stadium in Australia — the first Just Walk Out store in the Southern Hemisphere — as well as Rod Laver Arena and Melbourne Cricket Ground in Australia, and venues in Canada such as Scotiabank Arena in Toronto and Scotiabank Saddledome in Calgary.
- Fulfillment centers/business and industry: Deploying Just Walk Out inside fulfillment centers, offices, and large factories to provide a 24/7 amenity to employees who have limited break times and no options nearby (food deserts).
- EV charging stations: Used at EV charging stations like Gridserve’s Electric Forecourts in the U.K. and IONNA’s Rechargery locations for convenience stores within rest areas, offering an unmanned space and a differentiator for the charging network.
- Healthcare: Deployments in hospitals for gift stores and convenience stores, often with badge pay integration for night shift staff, such as the University of California at San Diego Health’s McGrath Outpatient Pavilion.
- Universities: More than 60 locations are deployed, using meal dollars integration and offering specialized late-night selections (ice cream, snacks) in dorm residence halls, including UC San Diego with five campus stores.
Data and integration
- Loyalty integration: Historically, one of the biggest inhibitors to adoption was that Just Walk Out did not integrate with loyalty programs such as season ticket holder apps. A couple of years ago, an NHL team’s chief information officer told me that if he couldn’t let his best customers use the most convenient way to buy products, he didn’t want it. Chopra mentioned that this is no longer an issue and Just Walk Out can integrate with almost all loyalty and payment programs.
- Real-time access data: The other significant issue with Just Walk Out was the lack of access to real-time information. An NFL CIO explained to me that the closest he could get to “real time” was the data landing in an Amazon S3 bucket the next day. Many retailers, including stadium operators, want a real-time count of exactly what has been sold, and Just Walk Out could not accommodate that. This has now been solved, and Just Walk Out fully integrates with inventory systems.
The biggest remaining issue with Just Walk Out is consumer education and awareness. In some cases, the Just Walk Out brand isn’t front and center and takes a back seat to the store or a sponsor. This is common in stadiums and airports. When one flies into Harry Reid Airport in Las Vegas, the Hudson News location at the bottom of the escalator is a Just Walk Out-enabled store. Another example is the Golden 1 Center, home of the Sacramento Kings, where the store is branded “PATH Grab and Go” after the sponsor.
The issue with this is it’s very common for stadiums to have multiple grab-and-go systems and, though the experience is similar, there is a difference between Just Walk Out and competitors such as Ai-Fi and Zippin. The third-party branding leaves it up to the consumer to understand which system is in place and what the experience is.
The other challenge is general awareness. I’ll often go to a stadium and see a line of people waiting at a regular checkout with only a handful of fans at a Just Walk Out store. Once consumers use it, they’ll generally use it again because the experience is so easy. This is where retailers, stadium owners and others should invest in some kind of “concierge” that can help educate consumers.
Just Walk Out has grown in terms of both capabilities and deployment models and will be the future of retail. With advancements in AI and camera vision, self-service models can be fast and accurate, a “win” for customers, which means a win for the retailer.
Artificial intelligence leader Nvidia Corp. on Monday announced the Nemotron-3 family of models, data and tools. The release is further evidence of the company’s commitment to the open ecosystem, focusing on delivering highly efficient, accurate and transparent models essential for building sophisticated agentic AI applications.
Nvidia executives, including Chief Executive Jensen Huang, have talked about the importance open source plays in democratizing access to AI models, tools and software to create that “rising tide,” and bringing AI to everyone. The announcement underscores Nvidia’s belief that open source is the foundation of AI innovation, driving global collaboration and lowering the barrier to entry for diverse developers.
Addressing the new challenges in enterprise AI
As large language models achieve reasoning accuracy suitable for enterprise applications, Nvidia used an analyst prebriefing to highlight three critical challenges facing businesses today:
- The need for a system of models: There has not been, and never will be, a single model to rule them all, so organizations need a choice of models to build performant AI applications. What’s required is a system of models that work together: different sizes, modalities and orchestrators delivering a multi-model approach.
- Specialization for the “last mile”: AI applications often “hit a ceiling” and must be specialized for specific domains such as healthcare, financial services or cybersecurity. This requires training models with large volumes of proprietary and expert-encoded knowledge.
- The cost of “long thinking”: More intelligent answers require extended reasoning, self-reflection and deeper deliberation — a process Nvidia calls “long thinking” or test-time compute. This significantly increases token usage and compute cost, demanding more token efficient architectures and inference strategies.
Nemotron-3: The most efficient open model family
Nvidia’s answer to the above challenges is the Nemotron-3 family, characterized by its focus on being open, accurate and efficient. The new models use a hybrid Mamba-Transformer mixture-of-experts, or MoE, architecture. This design dramatically improves efficiency: the models run several times faster with reduced memory requirements.
The Nemotron-3 family will be rolled out in three sizes, catering to different compute needs and performance requirements:
- Nemotron-3 Nano (available now): A highly efficient and accurate model. Though it’s a 30 billion-parameter model, only 3 billion parameters are active at any time, allowing it to fit onto smaller form-factor GPUs, such as the L40S.
- Nemotron-3 Super (Q1 2026): Optimized to fit within two H100 GPUs, it will incorporate Latent MoE for even greater accuracy with the same compute footprint.
- Nemotron-3 Ultra (1H 2026): Designed to offer maximum performance and scale.
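The “30 billion parameters, only 3 billion active” arithmetic behind Nemotron-3 Nano comes from sparse mixture-of-experts routing: a small gate scores all experts per input, but only the top few actually run. Here is a minimal, illustrative sketch of that idea; the expert count, dimensions and gating scheme are invented for the example and are not Nemotron-3’s actual configuration:

```python
import math
import random

def moe_forward(x, experts, gate, top_k=2):
    """Toy sparse mixture-of-experts layer: score every expert with a
    gate, run only the top_k highest-scoring experts, and mix their
    outputs. All other expert weights stay idle for this input."""
    scores = [sum(xi * gi for xi, gi in zip(x, g)) for g in gate]
    active = sorted(range(len(experts)), key=lambda i: scores[i])[-top_k:]
    # Softmax over the selected experts only.
    m = max(scores[i] for i in active)
    w = [math.exp(scores[i] - m) for i in active]
    total = sum(w)
    out = [0.0] * len(x)
    for wi, i in zip(w, active):
        # Each expert is a d x d weight matrix applied to the input.
        y = [sum(xj * ej for xj, ej in zip(x, row)) for row in experts[i]]
        out = [o + (wi / total) * yi for o, yi in zip(out, y)]
    return out, active

rng = random.Random(0)
d, n_experts = 8, 10
experts = [[[rng.gauss(0, 1) for _ in range(d)] for _ in range(d)]
           for _ in range(n_experts)]
gate = [[rng.gauss(0, 1) for _ in range(d)] for _ in range(n_experts)]
out, active = moe_forward([rng.gauss(0, 1) for _ in range(d)], experts, gate)
# Only 2 of 10 experts executed for this input, i.e. 20% of expert
# parameters were active, analogous to Nano's 3B-of-30B active weights.
```

Because only `top_k` experts execute per input, total parameter count (and therefore capacity) can grow without a proportional increase in per-token compute or memory bandwidth, which is why such a model can fit workloads onto smaller GPUs.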
Improved performance and context length
Nemotron-3 offers leading accuracy within its class, as evidenced by independent benchmarks from testing firm Artificial Analysis. In one test, Nemotron-3 Nano was shown to be the most intelligent open model in its small reasoning class.
Furthermore, the model’s competitive advantage comes from its focus on token efficiency and speed. On the call, Nvidia highlighted Nemotron-3’s tokens-to-intelligence ratio, which is crucial as the demand for tokens from cooperating agents increases. A significant feature of this family is the 1 million-token context length. This massive context window allows the models to perform dense, long-range reasoning at lower cost, enabling them to process full code bases, long technical specifications and multiday conversations in a single pass.
Reinforcement learning gyms: The key to specialization
A core component of the Nemotron-3 release is the use of NeMo Gym environments and data sets for reinforcement learning, or RL. This provides the exact tools and infrastructure Nvidia used to train Nemotron-3. The company is the first to release open, state-of-the-art, full reinforcement learning environments, alongside the open models, libraries and data to help developers build more accurate and capable, specialized agents.
The RL framework allows developers to pick up the environment and start generating specialized training data in hours.
The process involves:
- Training a base model (starting from the NeMo framework).
- Practicing/simulating in “gym” environments to generate answers or follow instructions.
- Scoring/verifying the answers against a reward system (human or automated).
- Updating/retraining the model with the high-quality, verified data, systematically shifting it toward higher-graded answers.
This systematic loop enables models to get better at choosing actions that earn higher rewards, like a student improving their skills through repeated, guided practice. Nvidia released 12 Gym environments targeting high-impact tasks like competitive coding, math and practical calendar scheduling.
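The loop above can be sketched in a few lines of code. This is a deliberately toy illustration of the generate-score-update cycle, not Nvidia’s NeMo Gym API: the “model” is just a weighted choice over candidate answers, and the “verifier” rewards the correct one.

```python
import random

def reward(answer: str) -> float:
    # Toy automated verifier: full reward only for the correct answer.
    return 1.0 if answer == "4" else 0.0

def train(candidates, episodes=500, lr=0.1, seed=0):
    """Minimal RL loop: sample an answer from the current policy,
    score it against the reward system, and shift probability mass
    toward higher-reward answers."""
    rng = random.Random(seed)
    weights = {c: 1.0 for c in candidates}
    for _ in range(episodes):
        # Practice: the policy generates an answer.
        answer = rng.choices(list(weights), weights=list(weights.values()))[0]
        r = reward(answer)  # Score/verify the attempt.
        # Update: reinforce above-baseline answers, suppress the rest.
        weights[answer] = max(1e-6, weights[answer] + lr * (r - 0.5))
    return max(weights, key=weights.get)

# Candidate answers to "what is 2 + 2"; repeated guided practice
# concentrates the policy on the one the verifier rewards.
best = train(["3", "4", "5"])
```

Real gym environments replace the toy verifier with task-specific reward checks (unit tests for coding, solution checkers for math, and so on), but the shape of the loop is the same.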
Nvidia’s expanded commitment to open source
The Nemotron release is backed by a substantial commitment across three areas:
Open libraries and research
Nvidia is releasing the actual code used to train Nemotron-3, ensuring full transparency. This includes the Nemotron-3 research paper detailing techniques like synthetic data generation and RL.
Nvidia researchers continue to push the boundaries of AI, with notable research including:
- Nemotron Cascade: A student model that outperformed its teacher (DeepSeek, a 500 billion- to 600 billion-parameter model) in coding, demonstrating that the scaling laws of AI continue to extend.
- RLP (Reinforcement Learning in Pretraining): A technique to train reasoning models to think for themselves earlier in the process.
High-quality data sets
Nvidia is shifting the data narrative from big data to smart and improved data curation and quality. To accomplish this, the company is releasing several new data sets:
- Pre-training data: More than 3 trillion new tokens of premium pre-training data, synthetically generated and filtered for “all signal, no noise” quality, using more than 1 million H100 hours of compute.
- Post-training data (Safe Instruction): A 13 million-sample data set using only permissively licensed model outputs, making it safe for enterprise use.
- RL data sets: 12 new reinforcement learning environments and a corpus of data sets covering 900,000 sample tasks and prompts in math, coding, games, reasoning and tool use, making Nvidia one of the few open model providers releasing both the RL data and the environments.
- Nemotron-agent safety: This provides 10,800 labeled OpenTelemetry traces from realistic, multistep, tool-using agent workflows to help evaluate and mitigate safety and security risks in agentic systems.
Enterprise blueprints and ecosystem
Nvidia is providing reference blueprints to accelerate adoption, integrating Nemotron-3 models and acceleration libraries:
- IQ Deep Researcher: For building on-premises AI research assistants for multi-step investigations.
- Video search and summarization: Turning hours of footage into seconds of insight.
- Enterprise RAG: The most optimized, enterprise-ready retrieval-augmented generation blueprint, accelerating every step of the retrieval pipeline.
The Nemotron ecosystem is broad, with day-zero support for Nemotron-3 on platforms such as Amazon Bedrock. Key partners such as CrowdStrike Holdings Inc. and ServiceNow Inc. are actively using Nemotron data and tools, with ServiceNow noting that 15% of the pretraining data for their Apriel 1.6 Thinker model came from an Nvidia Nemotron data set.
The industry is winding down the hype phase of AI, and we should start to see more production use cases. The Nemotron-3 family is well-suited for this era as it provides a performant and efficient open-source foundation for the next generation of agentic AI, reinforcing Nvidia’s deep commitment to democratizing AI innovation.
Zoom Communications Inc. is a fascinating company in that it’s one of the few corporate technology brands that resonates with end users as well as information technology pros.
I’m aware of many instances where the IT organization was considering an alternate communications product but the demand from the user community was so strong that Zoom was purchased. Zoom’s ease of use made it the product of choice during the stay-at-home period of the pandemic and user loyalty grew from there.
Since then, Zoom has targeted much of its marketing at IT pros, but it’s going back to what has made it so unique with a new brand campaign, “Zoom Ahead.” Instead of targeting IT decision-makers, the company is putting its attention on the people who actually use the work platform. The concept for the campaign came out of customer research showing that everyday users still feel a strong connection to the platform, and Zoom sees that as something worth building on.
The first ad for the new campaign was developed with Colin Jost’s creative agency. “Saturday Night Live” star Bowen Yang anchors the humorous ad, called “I Use Zoom!,” and the ad itself feels like an SNL skit. Yang plays an IT-like figure asking people to download a complicated tool. But then the ad takes a turn: Zoom frames the platform as something people choose because it’s simple and dependable, not just because IT picked it. It plays up how widely Zoom is used by office workers, business owners and frontline staff.
“We’re not targeting IT buyers with this campaign. We’re actually looking to reach and engage the users,” Kimberly Storin, Zoom’s chief marketing officer, said in a briefing with industry analysts. “These are the people that are making things happen every single day on Zoom. They are champions. They are change makers. It’s not about just speaking to them. We want to inspire them and empower them to speak up for the tools that help them work better and get more done.”
The ad also makes references to some of Zoom’s newer products, including AI Companion and its contact center tools. Those additions are intentional. Many still associate Zoom with meetings, particularly video. Yet the company wants this campaign to help broaden that view.
“This campaign is our reintroduction,” said Storin. “It’s a movement. What we’re trying to do is capture some iconic moments, tap into that cultural relevance. We gave it a comedic modern twist… like “Severance.” Ultimately, we feel it’s not only unexpected but also a little bit ridiculous, and it’s a reminder that Zoom is defined by the people who use it.”
The ad will debut on Dec. 31 during the College Football Playoffs and will appear again during the NFL Playoffs, the Golden Globes and the Super Bowl pre-show. It will continue to roll out across digital, social and out-of-home channels through 2026. Storin noted that more creative is planned for the spring, which shows this isn’t meant to be a short-term push.
Zoom sees the campaign as an opportunity to reset how people think about the company. It wants to bring the brand back into everyday conversation and highlight products many users don’t know about by using humor and references to pop culture. After all, users are the ones who historically have driven much of Zoom’s momentum.
It will be interesting to see where the company takes the campaign from here. While most communications vendors have stayed in their swim lanes, Zoom has ventured well outside its own with homegrown tools and acquisitions. To the surprise of many, me included, Zoom has added products such as e-mail and docs, two markets where industry watchers feel Microsoft’s stranglehold is far too hard to break.
It has also added frontline worker capabilities with the acquisition of Workvivo as well as BrightHire, an AI-powered hiring platform and Bonsai, a small business management platform. This in addition to more traditional capabilities such as Zoom Phone and Zoom Contact Center.
With these moves, Zoom is attempting to disrupt not just communications but the way we work, and I believe this is the most misunderstood aspect of the company’s strategy. It didn’t build e-mail to be yet another e-mail client, just as it didn’t build Docs to be a better word processor.
What Zoom wants is the data from e-mail, documents, hiring tools and the like, which can power Zoom AI Companion. In data science, there’s an axiom: “Good data leads to good insights.” That’s true, but silos of data lead to fragmented insights, and we have plenty of those. Zoom wants to be the hub of work and the place users spend most of their day.
This is certainly an ambitious goal, but it’s nice to see a vendor try to achieve something big. I would look at Zoom Ahead as a starting point to get users to think about the product as the one they love for video, but long-term, one that can do so much more than they thought.
High-performance network provider Arista Networks Inc. today announced the next wave of innovations for its campus network solutions.
The new products include an expansion of its Virtual ES with Path Aliasing, or VESPA, offering, which will make it easier for businesses to deploy large-scale mobility domains. The Santa Clara-based networking company also announced it is expanding its Autonomous Virtual Assistant, or AVA, its agentic artificial intelligence solution, to help organizations streamline AI operations use cases.
Arista is well-known in high-performance networking environments where mass scale is critical. It has been knocking on the enterprise campus door for some time, including wireless. This release presents an excellent opportunity for Arista to bring its strengths in reliability, operational simplicity and mass scaling to wireless domains, including outdoor domains.
The company’s rapid growth has been driven by delivering a single, consistent experience across the network. For its enterprise customers, this spans AI, data center, cloud, campus, branch, wireless and wide-area networks. That operational simplicity and scale have been achieved with EOS, its single operating system; NetDL, its unified data lake of streamed telemetry; and AVA.
Arista now looks to bring its strengths to the scaling limits enterprises face thanks to rapid growth in the number of clients and internet of things devices they deploy. VESPA brings campus networks the consistent, large-scale design principles typically used in the data center, enabling customers to build massive Wi-Fi roaming domains that support more than a half-million clients and 30,000 access points.
“This also allows our customers to completely simplify the network design, because previously they had to worry about deploying a large campus,” Sriram Venkiteswaran, Arista’s senior director of product line management, said in a prebriefing. He added: “They would worry about splitting the campus into multiple domains, each having to set up its own IP address, VLAN and sub-routing. So there’s a lot of design complexity involved in the traditional way. With this approach, having a single mobility domain, we’ve taken away all the complexity from designing the network.”
The second benefit of this solution, he explained, “goes back to us building a CNC [centralized network controller] across all layers of the network. Again, in the traditional controller world, when you have controller failures you typically have downtime of a minute or two, and that can be disastrous for some applications, especially in healthcare, where a doctor is on a call and then the controller fails and just drops the entire connection. It takes minutes to recover. And this is becoming more urgent, especially in native mission-critical environments such as manufacturing and healthcare. Customers want this seamless connectivity across their network. VESPA is designed to solve these two problems.”
One of Arista’s VESPA customers, Arizona State University, said the campus is transitioning to Arista’s controllerless Wi-Fi to “help shape and validate the development of Arista’s VESPA architecture — a standards-based approach designed to provide a seamless wireless roaming domain that improves connectivity across the university,” said Jorge De Cossio, senior director of digital infrastructure and enterprise technology for ASU.
This emphasis on campus mobility and agentic AI comes at a key time for Arista. Though many potential customers may think of the company as primarily a hyperscaler provider, that background plays well today, as the characteristics of campus and hyperscaler networks in the AI era are not significantly different. To serve both categories, a vendor needs to deliver reliable, always-on bandwidth and zero-trust operations, both of which Arista provides. As campus networks deploy more AI-driven solutions, its expertise should be appealing.
Focus on AIOps with AVA
In the prebriefing, Jeff Raymond, Arista’s vice president of EOS software and services, told me that when the company talks to customers about what it can provide in the area of AIOps, some say they just “want an easy button,” while others say they barely trust anything but their command-line interface and question whether Arista is going to “automatically start self-driving my network.” Raymond said the company isn’t focusing on replacing jobs but rather using AVA’s AI capabilities to provide assistance to the network operator so that they can do their job better, focus on higher-order priorities, and get answers more quickly or prevent issues from happening.
Raymond said network teams are “typically a more cautious group” when it comes to deploying automation technologies such as AI. “Getting them to move to automation is still a little bit of a human change agent, and this is just one step.” AVA’s expanded capabilities include:
- Multi-domain event correlation across wired, wireless, data center, and security to pinpoint a single root cause;
- Agentic conversational and troubleshooting capabilities in Ask AVA for sophisticated, multi-turn dialogue that follows the user’s train of thought; and
- Continuous monitoring and automated root cause analysis for proactive issue identification.
Over the past year, I’ve noticed a marked change in the attitude regarding AI within the networking community. Coming into 2025, there was a tremendous amount of fear of AI taking one’s job. Now that AI has worked its way into our day-to-day lives, that opinion has shifted from “It’s going to take my job” to “How did I ever do my job without it?” What’s become clear is that AI tools such as AVA aren’t the enemy; they’re engineers’ best friend because they let engineers work faster and smarter.
Ruggedized platforms for industrial environments
Arista will also debut two new ruggedized platforms for deployment in industrial or outdoor environments across a variety of sectors. The platforms are a 20-port DIN Rail switch with an IP50 rating, and a 1RU 24-port switch with an IP30 rating. The IP ratings indicate the devices are suitable for use in industrial environments, since they can withstand extreme temperatures, vibrations and shocks.
The entry into the ruggedized area was a bit of a surprise to me because these products typically carry lower margins than traditional networking gear, and Arista is extremely margin-focused, as its financial results reflect. Raymond explained that Arista isn’t moving into the ruggedized market as a new product category to lead with. Rather, this is for Arista’s manufacturing, warehouse and other customers that buy other products from Arista but must go to a competitor for these switches. It rounds out the portfolio and lets the company extend the “end to end” Arista value proposition.
Arista says it expects the new software capabilities and switch platforms to be generally available in the first quarter of 2026.
Amazon Web Services Inc. Chief Executive Matt Garman’s keynote at AWS re:Invent was filled with product updates with vision sprinkled in to help customers understand why the innovation matters.
To no surprise, this year’s keynote had a strong focus on the explosion of artificial intelligence and agents. The presentation outlined AWS’ strategy for empowering customers and developers in this new era by focusing on foundational infrastructure, diverse model choice, deep data integration, agentic platforms and developer tooling.
Here are five central themes from Garman’s keynote:
The inflection point: From AI assistants to billions of agents
Garman kicked off the keynote talking about how we are entering the era of billions of agents all working together to change the way we work and live. He declared that the advent of AI agents represents a major inflection point in AI’s trajectory, moving it from a “technical wonder” to a source of material business value that will be as impactful as the internet or the cloud. As he worked through his keynote, Garman discussed:
- Agents automating tasks for business value: Agents can perform tasks and automate actions on behalf of users, scaling people’s productivity up by 10x in some cases, leading to significant business returns across industries such as healthcare, customer service and payroll.
- Bedrock AgentCore scales agents: The launch of Amazon Bedrock AgentCore is critical to empowering customers to deploy and operate highly capable agents securely at enterprise scale. AgentCore was built to be open and modular, supporting frameworks like LangChain and models from various providers, including OpenAI and Gemini.
- Policy and Evaluations for trust: To address the need for predictability and control in autonomous agents (“trust but verify”), AWS introduced two key capabilities for AgentCore:
- Policy in AgentCore: Provides real-time, deterministic controls over the specific actions agents can take with tools and data, using the Cedar open-source language to enforce boundaries.
- AgentCore Evaluations: A new service to continuously inspect the quality of agent behavior (for example, correctness, helpfulness, harmfulness) based on real-world actions, automating a previously complex, data-scientist-heavy task.
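Policy in AgentCore expresses these boundaries in the Cedar language. Purely to illustrate the idea rather than the actual API, here is a Python sketch of what a deterministic, default-deny check on agent tool calls looks like; the principals, actions and resources below are hypothetical:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Request:
    principal: str   # which agent is acting
    action: str      # tool call the agent wants to make
    resource: str    # data or system being touched

# Explicit allow-list: anything not permitted here is denied.
# (Hypothetical rules, standing in for real Cedar policies.)
POLICIES = {
    ("support-agent", "read", "tickets"),
    ("support-agent", "write", "ticket-notes"),
}

def is_permitted(req: Request) -> bool:
    """Evaluate a request against the policy set before the agent acts.
    Unlike a model's own judgment, this check is deterministic:
    the same request always produces the same allow/deny decision."""
    return (req.principal, req.action, req.resource) in POLICIES

allowed = is_permitted(Request("support-agent", "read", "tickets"))
denied = is_permitted(Request("support-agent", "delete", "tickets"))
```

The point of putting this check outside the model is the “trust but verify” framing above: the agent can reason freely, but every tool invocation passes through a guard whose outcome does not depend on the model’s probabilistic behavior.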
AI infrastructure at planetary scale
A foundational theme of the keynote was the absolute necessity of highly scalable, secure and performant infrastructure to power the next generation of AI and agents. Garman emphasized that delivering the best AI performance and cost efficiency requires end-to-end optimization across hardware and software, a feat that AWS is uniquely positioned to achieve.
- Industry leadership in scale: Garman highlighted how AWS has the largest and most broadly deployed cloud infrastructure globally, with 38 regions and 120 Availability Zones. The sheer scale is underscored by the addition of 3.8 gigawatts of data center capacity in the last 12 months, more than anyone else, and a private network that has grown 50%, to more than 9 million kilometers of cable.
- Purpose-built AI silicon: AWS continues to push the boundaries of price-performance with its custom-designed AI processors:
- AWS Trainium: Although Trainium was built for training, Garman noted that Trainium 2 is excellent for inferencing and powers the majority of inferencing in Amazon Bedrock.
- Trainium 3 Launch: The announcement of Trainium 3 UltraServers, featuring the very first three-nanometer AI chip in the AWS cloud, delivering 4.4 times more compute and five times more AI tokens per megawatt of power compared with its predecessor.
- Trainium 4 Sneak Peek: A look ahead at the next iteration of the silicon, promising six times the FP4 compute performance.
- Best-in-class GPU experience: Garman stressed that AWS is the best place to run Nvidia graphics processing units, highlighting the operational stability and reliability achieved through 15-plus years of collaboration and sweating the small details (such as debugging BIOS) to avoid node failures. The launch of the new P6e GB300 instances, powered by Nvidia’s latest GB300 NVL72 systems, further supports this commitment.
- AWS AI Factories: AI factories have been all the rage, but they have typically been deployed on-premises with a hefty price tag. AWS’ version brings more efficient pricing while meeting stringent compliance and sovereignty requirements, allowing customers to deploy dedicated AI infrastructure that operates like a private AWS region within their own data centers.
Empowering choice and innovation with Amazon Nova and Bedrock
The belief that there will never be one model to rule them all is a core philosophy driving AWS’ model strategy, which is executed through Amazon Bedrock, the platform for generative AI applications.
- Model diversity: Bedrock continues to rapidly expand its selection, nearly doubling the number of models offered, including open-weights models like Google’s Gemma, Mistral Large, and Mistral 3, alongside proprietary models.
- Introducing Nova 2 family: Garman announced the new generation of Amazon’s own foundation models, Nova 2, designed to deliver cost-optimized, low-latency models with frontier-level intelligence. Nova 2 Light is a fast and cost-effective reasoning model, excelling at instruction following, tool calling and code generation. Nova 2 Pro is a more intelligent reasoning model for highly complex workloads, shining in areas critical for agents. Nova 2 Sonic is a speech-to-speech model for real-time conversational AI.
- Nova 2 Omni: A reasoning model that supports text, image, video and audio input and supports text and image generation output, addressing the need to understand multiple modalities simultaneously for real-world complexity.
Open training models with Nova Forge
Garman stressed that for AI to deliver value, it must be able to deeply understand a company’s unique data and intellectual property. To help customers with this, he introduced Amazon Nova Forge.
- The data differentiator: The ability of models to understand a company’s data is what differentiates a business. Traditional methods like RAG and fine-tuning models on new domain data often hit limits, as models can “forget” core reasoning when customized post-training.
- Amazon Nova Forge: This new service gives customers access to a variety of Nova training checkpoints. Customers can blend their own proprietary data with an Amazon-curated data set at every stage of the model training.
- Creating Novellas: The result of this process is a “Novella” — a proprietary model that deeply understands the customer’s domain information without losing the foundational capabilities (such as reasoning) from the original training, enabling highly specific and intelligent guidance.
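The contrast with RAG that Garman drew is worth making concrete: RAG bolts domain knowledge on at query time, leaving the model's weights untouched, whereas Nova Forge bakes the data in during training. A minimal sketch of the RAG side, using a bag-of-words cosine similarity as a stand-in for a real embedding model (all names and documents here are illustrative):

```python
# Minimal retrieval-augmented generation (RAG) sketch: domain documents
# are matched to a query and prepended as context, so the base model's
# weights (and its core reasoning) are never modified.
import math
from collections import Counter

def vectorize(text: str) -> Counter:
    return Counter(text.lower().split())  # stand-in for a real embedding model

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

def retrieve(query: str, docs: list[str], k: int = 1) -> list[str]:
    q = vectorize(query)
    return sorted(docs, key=lambda d: cosine(q, vectorize(d)), reverse=True)[:k]

docs = [
    "warranty claims must be filed within 30 days",
    "the cafeteria opens at 8am",
]
context = retrieve("how do I file a warranty claim", docs)
# The retrieved text rides along in the prompt; the model stays frozen.
prompt = f"Context: {context[0]}\n\nQuestion: how do I file a warranty claim"
```

The limit Garman described follows from this design: knowledge that never makes it into the retrieved context is invisible to the model, which is the gap checkpoint-level training with Nova Forge is meant to close.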
Reinventing how builders work
Finally, the keynote framed AI as a force multiplier for developers and enterprise teams, not just for end user experiences. Innovations include:
- Amazon Q: This is a consumer-grade AI experience for the enterprise that securely brings together all structured and unstructured company data (business intelligence data, databases, apps such as Microsoft 365 and more) to empower employees with research capabilities, BI insights and “quick flows” to automate repetitive tasks.
- Amazon Connect: AWS continues to lead in the contact center space with Amazon Connect, now a billion-dollar annualized business, pioneering AI-powered self-service and AI-driven recommendations for human agents. Amazon Connect was once a dark horse in an industry filled with legacy vendors, but that’s not the case anymore.
- AWS Transform: To free up developers to innovate, AWS Transform is an agentic tool focused on modernization. The new AWS Transform Custom allows customers to create custom code transformation agents to modernize any code, API, framework or language, even proprietary ones (for example, converting VBA to Python, Angular to React).
- Kiro: the agentic development environment: The primary developer agent is Kiro, an agentic development environment for structured AI coding. It has already been overwhelmingly adopted and standardized across Amazon for internal use.
Final thoughts
If one considers AWS a bellwether for AI, then Garman’s keynote can be considered a declaration that the cloud is now an AI-native platform, built from the ground up to empower a new era of autonomous invention. The core strategy is based on vertical integration, ensuring that foundational infrastructure can reliably scale the coming wave of AI agents.
The central theme for AWS in 2026 will be the mass enterprise adoption of autonomous AI agents, driven by the new capabilities of Bedrock AgentCore and the transformative efficiency gains promised by Kiro and Nova Forge’s custom model creation.
Given that Hewlett Packard Enterprise Co.‘s Juniper acquisition has now had a bit of time to percolate, I expected the European version of HPE’s Discover user event this week in Barcelona to show how the company is using the combined assets to reshape itself for the artificial intelligence era.
The event is the first real glimpse into how the Juniper Networks acquisition is taking shape, just five months after the deal closed. In a relatively short period of time, HPE is now merging the Aruba and Juniper platforms, with the combined portfolio rolling out soon to help enterprises prepare their infrastructure for AI.
Here’s a closer look at the key announcements HPE made at Discover Barcelona and how they fit into the company’s broader strategy:
Bringing Aruba and Mist under one AI-native platform
HPE’s immediate focus is on bringing together Aruba Central and Juniper Mist, creating a single AI-native platform for managing enterprise networks. Mist is known for its AI troubleshooting features, whereas Aruba Central provides visibility into the types of devices connecting to the network and how they behave. Those capabilities are now being cross-pollinated using microservices.
For example, Juniper’s Large Experience Model, which analyzes billions of data points from apps such as Zoom and Microsoft Teams, is being added to Aruba Central. Meanwhile, Aruba’s Agentic Mesh technology is coming to Mist, enhancing its ability to detect issues, pinpoint their root cause and take action. For customers, the shift means the two platforms will begin to feel more aligned. HPE described this approach as “build once, deploy twice,” with these shared capabilities rolling out in the first quarter of 2026.
HPE is also beginning to bring the underlying hardware together, starting with new access points that can run on either Aruba or Juniper. The first of these will be Wi-Fi 7 models that work across both platforms, which will make it easier to mix and match hardware without worrying about compatibility. For HPE, it’s a step toward giving customers a more consistent experience.
“Experience is what matters today — the experience of users, the experience of operators,” Rami Rahim, executive vice president and general manager of HPE Networking and former Juniper CEO, said during a prebriefing. “Now with agentic AI, the sky’s the limit. We’re getting into a realm of self-driving capabilities where the network can practically do everything on its own.”
This sets the stage for HPE’s push into agentic AI for IT operations. Unlike traditional AIOps tools, agentic AIOps can reason through network behavior and decide what to do in response. In practice, it means a network that can diagnose an issue, determine what’s causing it, and take steps to correct it on its own. As Rahim put it, the goal is to have self-driving networks that continuously improve the user experience without waiting for manual intervention.
The roadmap outlined by Rahim should alleviate much of the angst customers have had since the acquisition was announced. Both Aruba and Juniper have loyal customer bases, and each has been concerned that its preferred products might go away in favor of the other company’s. My conversations with HPE and Juniper executives, coupled with Rahim’s comments on the analyst call, highlight that the company will eventually bring the products together but will do so in a way that’s not disruptive. Aruba and Juniper customers can continue to use the products they prefer and, as they refresh, will arrive at the same point sometime in the future.
New hardware for AI-ready data centers
HPE is pushing deeper into the data center, specifically into the parts of the infrastructure that power AI systems. AI data centers have very different requirements from traditional ones, relying on massive, ultra-efficient Ethernet fabrics to move data between graphics processing units. An underperforming network leads to underutilized GPUs, which wastes money and time. To address those unique requirements, HPE introduced two new pieces of hardware.
The first, MX301, is a compact 1.6 terabits-per-second multiservice edge router designed for AI inference moving closer to where data is generated. AI models are being deployed at factory floors, hospitals, retail sites and remote locations. So, organizations need a more efficient way to connect those environments back into larger AI clusters.
“As inference moves closer to where data is, a smaller, more power- and space-efficient MX becomes extremely desirable,” said Rahim. “It packs all of the performance and all of the flexibility that our customers have come to love about the MX in a one-rack unit, power-optimized package, and makes it absolutely ideal as an on-ramp for the distributed inference cluster.”
The second product launch, QFX5250, is a switch built on Broadcom’s Tomahawk 6 silicon. It offers more than 100 Tbps of bandwidth and supports next-generation 1.6 Tbps interfaces, designed for high-speed networks that connect GPU racks inside AI data centers. QFX5250 is the “world’s highest-performance, 100% liquid-cooled ultra Ethernet transport (UET)-ready switch,” said Rahim, positioning it squarely against offerings from Nvidia and Arista.
The MX301 will be available in December, and the QFX5250 will follow in the first quarter of 2026.
Expanding into AI factories with Nvidia and AMD
HPE is adding Juniper MX and PTX routing platforms to Nvidia’s AI factory reference architecture. This gives HPE a way to provide the secure on-ramp for connecting users and devices into an AI factory. It also allows HPE to deliver the long-haul, multi-cloud connectivity for linking AI clusters across different locations, along with the optical capabilities required to connect private data centers across long distances or stitch together workloads that run across multiple clouds.
“These joint solutions will give our customers the assurance that they need to deploy our routing technology in conjunction with Nvidia’s cutting-edge products with full confidence,” said Rahim.
Additionally, HPE introduced an Ethernet-based scale-up switch for AMD’s new Helios rack, an alternative solution in a space that has traditionally relied on proprietary GPU interconnects like Nvidia’s NVLink. Helios is AMD’s new Open Compute Project ORv3 AI rack design, built with modular trays and liquid cooling for dense, power-constrained environments.
Tackling data readiness for AI workflows
Although networking dominated the announcements, HPE tackled a less-discussed but equally important challenge: data readiness. Enterprises often assume their bottleneck will be GPU capacity, but in practice they struggle to prepare their data for GPUs, according to Fidelma Russo, HPE’s CTO and executive vice president/general manager of Hybrid Cloud.
HPE launched the X10k Data Intelligence Node, which automatically enriches and structures data. It handles tasks like metadata tagging and vector generation, and it formats everything for retrieval augmented generation, a technique for enhancing the accuracy of generative AI models. The result is less dependence on external data-prep tools and better GPU utilization. HPE expects the X10k Data Intelligence Node to be available in January 2026.
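The enrichment steps described here (metadata tagging, vector generation, formatting for retrieval) can be pictured as a small ingest pipeline. The following is a generic illustration of that pattern, not the X10k's actual internals; the keyword tagging and hash-based vectors are deliberately simplistic stand-ins for real classifiers and embedding models:

```python
# Generic data-readiness sketch: raw documents are chunked, tagged with
# metadata and given a vector so a RAG system can retrieve them later.
# Illustrative only -- not how the X10k node works internally.
from dataclasses import dataclass, field

@dataclass
class Chunk:
    text: str
    source: str
    tags: list = field(default_factory=list)
    vector: list = field(default_factory=list)

def enrich(doc: str, source: str, chunk_size: int = 8) -> list:
    words = doc.split()
    chunks = []
    for i in range(0, len(words), chunk_size):
        text = " ".join(words[i:i + chunk_size])
        chunk = Chunk(text=text, source=source)
        # Metadata tagging: trivially keyword-based here; real systems
        # use classifiers to label sensitivity, PII, topic and so on.
        if "patient" in text.lower():
            chunk.tags.append("phi")
        # Vector generation: hash-based stand-in for an embedding model.
        chunk.vector = [hash(w) % 1000 / 1000 for w in text.split()][:4]
        chunks.append(chunk)
    return chunks

chunks = enrich("Patient intake forms are stored in ward B", "intake.txt")
```

Doing this enrichment once at ingest, rather than on demand, is what keeps GPUs fed instead of waiting on data preparation.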
“But storage itself isn’t enough,” said Russo. “AI pipelines don’t just need fast storage, they need fast recovery. So, we are bringing enterprise-grade performance and durability to secondary data, which has traditionally been an afterthought.”
Russo was referring to HPE’s next update, a major overhaul of the StoreOnce platform. HPE unveiled StoreOnce 7700, its first all-flash model designed for fast recovery, cyber forensics and AI-based anomaly analysis. The second launch is StoreOnce 5720, a hybrid system with more than half a petabyte of usable capacity. With both slated to be available in January, HPE is removing the bottlenecks that slow down AI adoption before model training even begins.
Updates across AI cloud, virtualization and operations
It’s important to note several other key developments HPE shared at Discover Barcelona. HPE expanded its Private Cloud AI platform with support for Nvidia RTX 6000 GPUs and new Nvidia Inference Microservices models for customers who run AI systems in offline, highly regulated environments. The platform now supports GPU fractionalization, essentially dividing a single GPU so multiple users can run workloads at once.
Meanwhile, HPE continues to build out its Morpheus platform, which is being positioned for customers who are evaluating alternatives to their current virtualization setups. HPE is integrating Juniper’s networking and Apstra automation tools into Morpheus. This makes Morpheus easier to operate by automating the network settings that follow workloads as they move across environments.
On the operations side, HPE is updating OpsRamp and GreenLake Intelligence to give IT teams broader visibility across compute, storage, networking, and applications. The additions include new two-way integrations with Apstra, a natural-language Copilot for server troubleshooting, as well as support for the Model Context Protocol (MCP), which allows OpsRamp data to feed third-party AI agents. All of this ties into HPE’s vision of agentic AI for IT.
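MCP's core idea is that a platform like OpsRamp can expose its data as named tools that any compliant AI agent can discover and invoke. A toy sketch of that discover-then-call pattern in plain Python (this is not the real MCP SDK or the OpsRamp API; the tool name and payload are invented for illustration):

```python
# Toy sketch of the Model Context Protocol pattern: a server registers
# named tools, and an external agent discovers and invokes them.
# Not the real MCP SDK or OpsRamp API -- all names are illustrative.
TOOLS = {}

def tool(name: str, description: str):
    """Decorator that registers a function as a discoverable tool."""
    def register(fn):
        TOOLS[name] = {"description": description, "fn": fn}
        return fn
    return register

@tool("get_device_health", "Return health status for a monitored device")
def get_device_health(device_id: str) -> dict:
    # A real server would query live telemetry here.
    return {"device": device_id, "status": "degraded", "cpu": 0.93}

def list_tools() -> list:
    """What an MCP-style client would see when it discovers the server."""
    return [{"name": n, "description": t["description"]}
            for n, t in TOOLS.items()]

def call_tool(name: str, **kwargs):
    return TOOLS[name]["fn"](**kwargs)

# An external AI agent discovers the tool, then calls it:
print(list_tools()[0]["name"])                                      # get_device_health
print(call_tool("get_device_health", device_id="sw-42")["status"])  # degraded
```

The point of the standard is the discovery step: because tools self-describe, a third-party agent needs no custom integration code to consume the data.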
Taken together, the announcements underscore HPE’s view that building for the AI era requires a new, unified way of thinking. The Juniper acquisition gives HPE a broader portfolio to support that approach.
Final thoughts
HPE’s track record with acquisitions has been spotty, to say the least. The Aruba acquisition worked well because HPE let it run as an independent unit within the broader company. Though this minimized customer disruption, it didn’t create an “HPE” value proposition.
When the Juniper deal was announced, I expected it to go down its own path and Aruba to continue on its journey. I was pleasantly surprised to see how much progress HPE has made in bringing Juniper and Aruba together. Going into 2026, I’ll be watching for more points of integration, particularly between Mist and Central. So far, so good.
The partnership between Atlassian Corp., the enterprise software giant best known for products such as Jira and Confluence, and Formula 1 team Williams Racing is far more than a simple sponsorship with a logo on a helmet.
One of the aspects I like most about Formula 1 is that the value of all technical sponsorships counts toward a race team’s operating cost cap. That means even if a vendor were to give the organization free hardware or software, the value of that gear is calculated and counts toward the annual spend. And that means any brand associated with a race team is actually being used by that team.
Atlassian is the title sponsor for Williams, hence the name “Atlassian Williams Racing,” and the collaboration is an excellent proof point for how its “System of Work,” powered by AI, can transform one of the most data-intensive and demanding sports on the planet. A big part of my research is the crossover between sports and tech, and no sport collects as much data, and acts on it as quickly, as F1. Because of this, I was looking forward to attending the F1 Las Vegas Grand Prix in late November and spending time with Atlassian to get a deeper dive on the partnership.
Beyond the dev team: Atlassian’s System of Work
At the track, I met with Jamil Valliani, head of AI product for Atlassian. I mentioned that for many years, the Atlassian brand was synonymous with Jira, which has become an indispensable tool for software development and issue tracking. Valliani agreed that was the case, but that perception is now changing. While Jira remains foundational, Atlassian’s vision has broadened to encompass a complete System of Work — a philosophy and product suite designed to empower virtually any team with complex projects, goals, and knowledge bases.
This system is an integrated collection of tools, including:
- Jira: For tracking issues, tasks, and asset management (e.g., car parts from suppliers).
- Confluence: For maintaining knowledge bases (e.g., track conditions, race learnings).
- Loom: For recording and annotating video content, such as internal meetings and presentations.
The ultimate goal is to connect the entire organization — from HR to the pit crew — using standardized, accelerated processes. Valliani explained that the partnership with Williams is built on a shared commitment to unlocking “human potential through technology and teamwork.”
Rovo: The AI teammate driving the system
The cornerstone of the Atlassian System of Work, and the engine of innovation for Atlassian Williams Racing, is Rovo. Rovo is not a single product but an AI-powered intelligence layer that runs across all Atlassian’s offerings.
Valliani describes Rovo as a multifaceted capability that serves as an “AI teammate” to the workforce:
- World-class enterprise search: Rovo can query across the vast, disparate knowledge locked within an organization’s Jira, Confluence and Loom content.
- Rovo Chat: A conversational interface that allows users to ask questions and receive intelligent answers based on their internal company knowledge.
- Rovo Studio (agent building): This is perhaps the most transformative feature. Studio allows nontechnical users to build custom agents and automations simply using natural language, providing an “extra pair of hands” for repetitive or complex tasks.
The adoption of Rovo across Atlassian’s paid customers has been very strong: Valliani cited more than 3.5 million monthly users. The growth rates — over 100 times growth in Rovo search and 50 times growth in Rovo chat — underscore the immediate value teams are finding in accelerating their work. In addition, Studio has been used to accelerate more than 2 million automations and workflows, demonstrating that the future of work is not just about using AI for personal tasks (like writing an email), but about accelerating teamwork.
Williams Racing: A Formula 1 use case
The highly regulated and competitive environment of Formula 1 makes it the perfect testing ground for a System of Work. As mentioned above, because of the sport’s cost cap rules, every piece of technology must deliver verifiable, competitive value. The Atlassian Williams Racing F1 team is utilizing the System of Work and Rovo across its entire organization for several use cases, including:
Accelerating design and interpretation
The most compelling example shared by Valliani involves the team’s wind tunnel testing:
- The challenge: Wind tunnel tests produce massive amounts of raw data. Previously, only a few highly specialized engineers possessed the knowledge to interpret this data and translate it into actionable design changes (such as redesigning the car’s fin or wing). This created a bottleneck, slowing down the car’s iteration cycle.
- The Rovo solution: Williams trained a Rovo Agent to consume and interpret the raw wind tunnel data. This agent now gives tailored feedback to the right teams, ensuring they receive instantly actionable insights relevant to their domain.
- The result: The ability to interpret complex data and disseminate the insights to global teams rapidly has helped Williams improve their on-track performance, saving the crucial milliseconds that differentiate success from failure in F1.
Extending the lifespan of knowledge
Williams’ widespread adoption of Loom, especially for recording all team meetings, demonstrates how Rovo turns unstructured data into organizational memory:
- The challenge: Meetings often generate vital actions and insights, but this information is lost soon after the meeting concludes. This is compounded when employees miss a meeting or forget a specific action item.
- The Rovo solution: Loom AI takes the video recording and does much more than generate a transcript. It helps annotate the video, identifying speakers and the actions they agreed on. A user can then use Rovo to query that knowledge — asking, for instance, “What action item did the driver have on the setup for the next race?” — and Rovo instantly recalls the information, finding the exact moment in the video.
- The result: The system extends the lifespan and accessibility of meeting knowledge, ensuring that insights from drivers or engineers are quickly found and actioned, even by people who weren’t present at the time.
A blueprint for organizational AI transformation
Atlassian and Williams Racing offer a critical lesson for any company looking to realize value from AI. As Valliani noted, many organizations are “stuck” because they focus only on using AI for individual acceleration. The key to true organizational transformation lies in teamwork acceleration.
Valliani’s advice for successful AI adoption, proven by the Williams partnership, is twofold:
- Top-down leadership: Leaders must embrace the AI tools themselves, sharing their successes and failures. This makes it “OK” for employees to experiment and learn without fear, establishing a culture of innovation.
- Trailblazer teams: Identify the most open-minded or “pain-felt” teams — those with a clear, pressing need for improvement. Give them the AI tools, put a spotlight on their wins, and let them become bottom-up evangelists.
The Atlassian Williams partnership is a tangible demonstration that when AI is integrated not just as a feature, but as the foundational layer of a connected, cross-organizational System of Work, it delivers measurable results. In the high-stakes world of Formula 1, this means better cars, faster processes and climbing the championship ranks. For the rest of the enterprise world, it means a proven blueprint for competing effectively in the new AI era.
The ability to drive organizational transformation has had tangible results for Williams. Atlassian became the title sponsor in 2025, and Williams Racing has seen a major shift in results. From 2018 to 2024, Williams accumulated a total of 84 points. With the 10 points earned in Las Vegas, the team now has 111 points in 2025 alone. In any organization, F1 included, teamwork matters, but it requires having the right tools in place to provide visibility across every member of the team.
I’ve seen many digital transformation efforts, but one recent example was unique: At the recent Cisco Systems Inc. Partner Summit event, I sat down with Grady Nichols, senior director of information technology operations at Mercy Ships, and Rob Kim, chief technology officer at Presidio Inc., and discussed how the two organizations are partnering with each other and Cisco to transform the way healthcare is delivered in underserved areas.
Modernizing IT on a ship presents unique challenges
Mercy Ships has embarked on a journey to modernize healthcare with most of it happening on the water. The humanitarian organization operates two large hospital ships serving low-income communities in Africa: Global Mercy in Sierra Leone and Africa Mercy in Madagascar. These ships are like floating cities with more than 800 people on board, including medical staff, families and volunteers.
Given the scale of these vessels, the technology must be able to support the complex environment of a hospital, cruise liner, food service provider and logistical hub. The ships dock in areas where internet access is limited, latency is high and outages are common. On top of that, the metal walls make wireless coverage even more problematic. None of this resembles a typical hospital campus.
Partnering on delivering technology to meet business challenges
Cisco and its partner Presidio, a global IT solutions provider, were brought in to help upgrade the infrastructure, so the ships can provide uninterrupted medical services in remote locations. The team designed two onboard data centers for redundancy and spent time physically walking the ship to see how Wi-Fi signals behaved from room to room. There’s no standard layout or predictable floor plan, so most of the network design had to be done by observation and testing.
“We need to modernize around our unique situation,” Nichols told me. “We’re not in an environment where bandwidth is plentiful. Because of this, we must bring new technology onto the ships and make it fit our needs as best we can — for our hospital staff, our surgeons and everyone else. Also, we need the ability to operate autonomously because of the latency and other limitations we have in these countries.”
The network runs almost entirely on Cisco equipment. Each vessel location has a separate on-premises Cisco call-management voice over internet protocol system and multiple Cisco-powered intermediate distribution frames or IDFs. The ships use many Cisco switches and switch stacks, in addition to roughly 1,600 Cisco phones. Nearly everything ties back into this Cisco infrastructure, including Wi-Fi, cameras, operating room equipment and telepresence systems.
Speed of innovation remains a challenge for Mercy Ships
An ongoing challenge for Mercy Ships is keeping up with technology while building new ships. Global Mercy took nearly seven years to complete, partly because of the pandemic. The next ship, Africa Mercy II, isn’t expected to enter service until 2028. With such long timelines, hardware may already be outdated by the time the ship launches. Therefore, it requires a different level of planning to deal with all the complexity.
“If you think about the places that they’re docking these ships, the quality of power that you have, and the environment around that when you’re building out data centers,” Kim said. “Where you’re building it out is less of a factor in terms of constraint than you have with this essentially floating Faraday cage. So, a lot of consideration must go into not only how we architect but also potentially provide resiliency.”
Artificial intelligence is starting to gradually make its way into operations. At the moment, it’s mostly used for day-to-day clinical work. Some of the surgical tools and imaging systems include deterministic AI features, such as image classification, which helps doctors review patient scans faster. Mercy Ships is also looking at newer AI capabilities, such as generative models, but these are still in the early planning stage and will depend on the infrastructure that’s being built now.
Resilient and performant connectivity is critical to deliver healthcare
Connectivity is one of the biggest limitations. Cloud-based AI isn’t reliable enough to use for patient care in remote locations. To get around that, the teams are working toward localizing more of the compute on the ship by adding graphics processing units. But they must be powered and cooled properly inside a steel vessel. Presidio and Cisco have already factored these requirements into the modernization plan, since running workloads locally reduces the amount of data that must move over slow satellite links.
Telemedicine plays a big role as well. The ships use Webex and other telepresence tools for remote consultations, sometimes with multiple doctors reviewing a case together. These sessions require stability, particularly in ports where outside connectivity is unpredictable. The same goes for training. Mercy Ships uses simulation and other digital tools to train local medical teams so they can continue providing care after the ship leaves.
Going forward, implementing new tools such as Cisco IQ would allow Mercy Ships to optimize limited bandwidth from shore. Cisco announced the digital platform at its recent Partner Summit. Cisco IQ provides visibility into an organization’s entire asset inventory, including device health, software versions and lifecycle timelines. It’s enabled by agentic AI, a collection of specialized AI agents that analyze, diagnose and resolve problems.
“In the future, we think Cisco IQ is going to have a big play in what we provide, not only in terms of managing some of the entitlements from a cost efficiency perspective, but then also being able to pull in telemetry data and seeing if there’s even more ways we can optimize the limited bandwidth, given the remote nature of the boats,” said Kim.
The goal right now is getting the infrastructure ready, especially with a new ship under construction. Most of the work being done today — building out local compute, strengthening the network, and reducing reliance on external connectivity — will eventually make those AI capabilities possible. Mercy Ships is focused on running the hospital and giving volunteers the support they need to do their work. Everything else will follow.
Final thoughts: Purpose and technology meet
Cisco is one of the most active organizations, in technology or otherwise, in giving back through purpose-led initiatives. The Cisco-powered Medibus and floating classrooms in the Amazon are a couple of examples. This is not something Cisco does alone, as it collaborates extensively with like-minded partners.
Mercy Ships is a great example of a technology vendor and a partner coming together to solve what is widely considered a very difficult challenge. The work Mercy Ships does is of critical importance in Africa, as it brings world-class healthcare to areas that would not otherwise have it, but the organization’s small IT team would not have the know-how to deploy the technology and keep it up to date on its own. Cisco and Presidio have laid out a blueprint that I hope others follow to bring the benefits of modern technology to underserved areas.
When Veeam Software Group GmbH recently held its annual industry analyst event in San Antonio, it had a new direction to talk about following the announcement of its intent to acquire Securiti Inc. for a little over $1.7 billion.
Veeam is best known as a data protection company, but that changed with the intent to acquire Securiti. The deal shifts Veeam’s profile from security-adjacent to an actual security vendor and, one could argue, positions it to be the industry’s first artificial intelligence resiliency company.
This is important positioning as Veeam looks to go public. Looking at the competitive landscape, Commvault Systems Inc., which is considered more of a legacy backup and recovery company, trades at a market cap of $5.2 billion with trailing-12-month revenue of just over a billion dollars. Conversely, Rubrik Inc. is thought of as a data security company, and it has a market cap of $14 billion with almost identical revenue. Veeam’s ability to tie its core product to Securiti, creating a company that can not only protect AI workloads and data but also recover if breached, would be unique and would likely fetch a trading multiple north of Rubrik’s.
The strategic rationale: Resilience for the AI era
For decades, backup and recovery systems focused on protection: ensuring organizations could recover after cyberattacks, outages and, more recently, ransomware, which has been a strong catalyst for Veeam adoption over the past few years. But in today’s digital landscape, where AI and compliance demands make enterprise data more dynamic, distributed and business-critical, a new challenge has emerged — not just recovering data, but ensuring it’s continuously secure, trustworthy and compliant across hybrid, multicloud and public cloud environments.
The combination of Veeam’s expertise in backup, recovery and ransomware remediation with Securiti’s deep capabilities — in data discovery, classification, lineage, data security posture management, or DSPM, and privacy automation — now offers continuous visibility and governance of both primary and secondary data. A good way to think about it is that Veeam has created a “data command center” that unifies resilience, security, compliance and business enablement, all tightly coupled with AI-native features.
Key differentiators: The first AI resiliency platform
Once the deal closes and the two companies come together, Veeam should have some unique differentiators, which include the following:
1. Unifying governance, security and recovery
Unlike most backup providers, Veeam can now provide end-to-end intelligence and automation from data creation to archival, governing how data is used for AI, automatically classifying sensitive assets, enforcing privacy policies and ensuring only trusted, compliant datasets feed AI models. The addition of DSPM allows Veeam’s platform to shift “left” of the breach, enabling security teams to articulate, enforce and audit policy before and after incidents.
2. AI-driven data management and trust
At the event we got a good look at Securiti’s Data Command Graph and agentic AI. Veeam can leverage this to create a real-time, unified layer across hybrid environments, helping organizations “trust” their AI pipelines by verifying the integrity, provenance and compliance status of their data flows. For enterprises regulating the use of large language models or deploying AI at scale, this capability should be a boon: Resilience is no longer just about recoverability, but also about the quality and legitimacy of data used for business-critical insights.
3. Continuous, automated compliance and privacy
Legacy backup companies focus on restoring data post-incident. Veeam now tracks data lineage, retention and policy adherence constantly, enabling proactive compliance with regulations such as GDPR, CCPA and emerging AI-specific mandates. This capability greatly reduces the manual burden on information technology and risk teams and addresses one of the most urgent problems in AI adoption. Given that the regulatory landscape around AI is likely to keep changing, the ability to automate this will be a big win for customers.
4. Single command center across all data estates
Veeam’s platform vision provides a consolidated view and governance interface for primary, secondary and software-as-a-service data worldwide. This goes beyond traditional solutions that typically stop at protecting backup copies. Enterprises can now map, secure and manage permissions on their entire data estate in one place — a critical requirement in the era of distributed cloud, edge computing and shadow IT.
This convergence of backup, security, privacy and AI into a single platform directly addresses emerging enterprise needs. It enables safer, more robust AI deployments, where organizations need assurance not only that their data can be recovered from ransomware, but that it is also clean, compliant and fit for decision-making and automation.
One of the interesting aspects is that this approach creates direct C-level visibility and relevance for Veeam, which should also help its valuation as it looks to IPO. Traditionally siloed IT and business teams can collaborate on a shared “source of truth” for data: auditing AI pipelines, enforcing sovereignty and demonstrating regulatory compliance without compromising speed or agility.
Final thoughts
Veeam’s acquisition of Securiti isn’t a consolidation play or even a move into an adjacency. It’s an aggressive move by Chief Executive Anand Eswaran to redefine the industry his company leads in the AI era. Market leaders rarely lead transitions, which is why technology shifts typically create new winners, but being private gives Veeam the room to transform and then IPO. The company is uniquely positioned to lead in a world where resilient operations require more than restoring data — they demand holistic, automated governance of the data, policies and AI models that power modern business.
The cloud and artificial intelligence have the power to change the world of sports, and few technology providers have done a better job of executing on this than Amazon Web Services Inc.
The company has partnerships with the NFL, Bundesliga, the PGA TOUR, F1 and many more. AWS works closely with each organization to deliver a range of fan-, league- and team-facing innovations.
This week, AWS added another organization to its roster when it announced the DP World Tour, formerly the European Tour, is teaming with it to change how fans experience and watch professional golf. As the official cloud provider, AWS will supply the technology behind the DP World Tour’s media production, tournament operations and sustainability platforms.
The tour is tapping into AWS’ AI and streaming tools to make golf more connected and easier to follow. One of these tools is AWS Media Services, which will stream live and on-demand video, giving fans fast-round compilations of individual players. With generative AI as the overarching technology, fans will have a richer viewing experience across TV, social media and the web. For example, they will get real-time data insights, instant shot analysis and commentary in multiple languages.
The DP World Tour is highly diverse, with 114 players representing 41 countries. The 2025 Tour schedule features 42 tournaments in 25 countries. Because of this breadth of global talent, many fans will tune in to an event to follow players from their home country, which could be as few as a single competitor. Historically, if that player was not near the top of the leaderboard, very little information was provided. This changes with the infusion of AI.
The tour is rolling out a new AI-powered Media Asset Management system to organize its vast library of video and digital content. MAM automatically tags metadata, creating an intelligent, searchable archive that can instantly identify players, shots and important moments. Over time, this will facilitate more personalized digital experiences for fans, such as tailored content featuring favorite players.
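To make the idea of an auto-tagged, searchable archive concrete, here is a minimal sketch of how tag-based clip retrieval could work. This is purely illustrative and not the DP World Tour's actual MAM system; the clip IDs, tag names and classes are invented for the example, and in a real system the tags would come from ML models recognizing players, shot types and moments in the video.

```python
# Hypothetical sketch of metadata-driven archive search, not the actual
# MAM system. Clip records and tag names are invented for illustration.

from dataclasses import dataclass, field


@dataclass
class Clip:
    clip_id: str
    tags: set = field(default_factory=set)  # e.g. {"player:MacIntyre", "moment:eagle"}


class Archive:
    def __init__(self):
        self.clips = []

    def ingest(self, clip_id, auto_tags):
        # In a real MAM, auto_tags would be produced automatically by
        # video-analysis models rather than supplied by hand.
        self.clips.append(Clip(clip_id, set(auto_tags)))

    def search(self, *required_tags):
        # Return every clip carrying all of the requested tags.
        want = set(required_tags)
        return [c.clip_id for c in self.clips if want <= c.tags]


archive = Archive()
archive.ingest("r1-hole17-0932", ["player:MacIntyre", "moment:eagle", "hole:17"])
archive.ingest("r1-hole18-1004", ["player:Hojgaard", "moment:birdie", "hole:18"])

print(archive.search("player:MacIntyre", "moment:eagle"))  # ['r1-hole17-0932']
```

The point of the sketch is the payoff described above: once every clip is tagged at ingest, "all eagles by my favorite player" becomes a trivial query instead of a manual archive trawl.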
AWS is also developing a second-generation version of the tour’s Virtual Twin, which is a digital replica of each golf course. It draws on more than a million data points to give fans a complete view of what’s happening on every hole. Fans will be able to follow the action, whether they’re watching a screen at the venue or using their phones.
Though much of the focus is on improving the fan experience, there is a lot happening behind the scenes. Amazon Bedrock and Quick Suite will power a new intelligence platform that streamlines how tournaments are managed. The tour’s operational teams will have one platform that pulls in live data from around the golf course, so they can make informed decisions about moving staff, restocking concessions and reducing queue lengths.
The DP World Tour and AWS are approaching this with sustainability in mind, using machine learning to track and reduce the environmental impact of every tournament. AWS will power Green Drive Live, a data platform that tracks energy use, emissions, waste, water and logistics. The machine learning component will help simulate different operational scenarios before each event. The tour will share these sustainability metrics with fans on a live dashboard across venue screens and in its mobile app.
These developments are part of the tour’s vision of a tournament-as-a-service model, which is an intelligent golf course that provides a consistent, data-rich experience for players, partners, media, and fans. Starting in 2026, the tour expects to have its applications and data integrated into AWS, bringing tournament-as-a-service from concept to practice at real events.
The Ryder Cup has served as a testing ground for the kind of AI-driven experiences the DP World Tour aims to provide. In a recent interview with ZK Research, Michael Cole, chief technology officer for the PGA European Tour and Ryder Cup Europe, described how the Ryder Cup used to be a logistical challenge with staff and systems operating manually. But AI has changed all that.
“We’ve been able to implement AI enablement to help drive efficiency and additional insights, which we wouldn’t be getting before,” said Cole. “One example would be through AI operations and using the technology to spot symptoms. That could be anything from intrusion detection to a change of temperature in our network operating systems. When that symptom appears, the technology is triggered, generating key insights.”
Regarding this partnership, Cole explained, “AWS’ technology can transform the relationship between fans and their favorite sports. Golf is a fascinating innovation platform for AWS because it has some unique challenges and opportunities. Fans can stand a matter of feet away from the world’s best players, but unlike other sports that have one field of play, a golf tournament has 18 fields of play spread across a vast site, which can make it challenging to follow how every player is performing. This partnership will give our fans real-time access to virtually every moment on the course, ensuring they never miss a beat wherever they are watching.”
He then added, “As Golf’s Global Tour, we also have a truly global fan base. This comes with obvious language challenges, and a great example of this partnership unlocking innovation will be the use of AWS’ AI services to deliver multilingual and instantaneous commentary translation for the first time.”
Another example is the fan experience. At the Ryder Cup, Outcome IQ used AI to add real-time context around every shot. It takes into account how far the ball traveled off the tee, how far it is from the pin and what the outcome is. The data is processed within seconds.
“This is virtually impossible to achieve through any kind of manual process or integration that could deliver the same insight and intelligence,” said Cole. “It’s a good example of how AI is benefiting fans by adding greater context to every shot that’s played.”
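For readers curious what "context around every shot" might mean computationally, here is a toy strokes-gained-style calculation. Outcome IQ's actual model is not public; the baseline table and numbers below are invented for illustration only.

```python
# Illustrative sketch only: the real Outcome IQ model is not public, and
# the expected-strokes baseline below is invented for this example.

# Invented lookup: rough expected strokes to hole out from a given
# distance (yards), as (distance, expected_strokes) breakpoints.
EXPECTED_STROKES = [(0, 1.0), (10, 2.2), (50, 2.7), (150, 3.0), (250, 3.6), (450, 4.2)]


def expected_strokes(distance_yards):
    # Linear interpolation between the table's breakpoints.
    pts = EXPECTED_STROKES
    for (d0, s0), (d1, s1) in zip(pts, pts[1:]):
        if distance_yards <= d1:
            t = (distance_yards - d0) / (d1 - d0)
            return s0 + t * (s1 - s0)
    return pts[-1][1]


def shot_context(dist_before, dist_after):
    # Value of one shot: improvement in expected strokes to hole out,
    # minus the one stroke spent hitting it.
    return expected_strokes(dist_before) - expected_strokes(dist_after) - 1.0


# A 300-yard drive from a 450-yard hole to 150 yards out:
print(round(shot_context(450, 150), 2))  # 0.2 strokes better than baseline
```

The appeal of automating this, as Cole notes above, is speed: the same comparison of before/after positions can be produced within seconds of every shot, for every player on the course.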
Elaine Chiasson, global golf principal at AWS, manages the relationship with the PGA Tour and now the DP World Tour and I wanted to get her take on the impact of the partnership. “At AWS, we thrive on collaborating with forward-thinking organizations that match our relentless pursuit of innovation,” she said. “By combining AWS’ AI capabilities — already demonstrated in golf through our work with the PGA Tour — with the DP World Tour’s rich data environment, we’re creating an innovation engine that will transform how golf is experienced. What excites me the most about this partnership is that it will impact virtually every aspect of golf, from enhancing the fan experience, to supercharging tournament operations with agentic AI to fast-tracking sustainability goals. The result will be an experience that brings fans closer to the game they love while setting new standards for how global sports operate.”
Many industry luminaries have stated something to the effect that in the AI era, data is the fuel that powers innovation. Sports leagues such as the DP World Tour have massive amounts of data that can be analyzed using AI to deliver insights to anyone involved with the game. Coaches can teach better, players have more data to make the right decisions, and fans have personalized content to enhance their viewing experience.
There is a lot of hype around 5G these days, and for good reason, as it promises faster data speeds, lower latency and wider coverage to support a wide range of wireless applications such as remote medicine and autonomous warehouses. With these advancements, I’ve noticed the chatter around 5G as a Wi-Fi replacement, raising the question: Do we still need Wi-Fi?
A common misconception about Wi-Fi is that it’s outdated and doesn’t meet the needs of today’s organizations. That’s far from the truth. Newer standards such as Wi-Fi 6E and Wi-Fi 7 are comparable to 5G when it comes to performance in high-density environments.
In fact, because of advancements such as multilink operation and wider channels, expectations for Wi-Fi 7 are high. During its recent financial analyst event, Extreme Networks Inc. Chief Executive Ed Meyercord cited Dell’Oro data forecasting that Wi-Fi 7 deployments will outpace what was seen with Wi-Fi 6 and 6E: performance comparable to 5G, yet easier to deploy and at lower cost.
Clearly, there’s a major incentive to push 5G as the main technology for enterprise connectivity. This is particularly true in high-density environments where there is a large concentration of users and devices. This includes stadiums, hospitals, universities, hotels, factories, office buildings and much more. But is 5G really the more practical choice?
Reality check: 5G vs. Wi-Fi
5G stands out when it comes to mobility and wide-area coverage. It can connect devices outdoors while they’re on the move. But inside buildings, 5G falls short. One major issue is indoor coverage, as noted by the GSMA, which represents mobile network operators worldwide. Walls, ceilings and other barriers interfere with signal strength because 5G doesn’t work well without a clear line of sight. Though outdoor antennas can be mounted on rooftops or towers, indoor setups are trickier. They usually require a dense network of radios and strong backhaul connections to work properly.
The added complexity makes indoor 5G deployments costly, which is why Wi-Fi remains the best option in most high-density environments. Wi-Fi systems are specifically designed for indoor use. They’re also easier to scale and manage without major upgrades. Wi-Fi uses unlicensed spectrum, so there aren’t recurring usage fees for organizations. Enterprise information technology teams already possess the skills to deploy, secure and monitor Wi-Fi networks, which simplifies the adoption process.
Private 5G is seen as an alternative to Wi-Fi for indoor connectivity, but organizations must accept additional tradeoffs. Private 5G network deployment requires organizations to obtain radio spectrum licenses and install dedicated 5G antennas or radios. It also requires IT teams to learn new management skills because they often lack expertise in cellular systems. Even if a managed service provider handles the setup, private 5G changes how the network is built and operated.
That said, 5G is useful in outdoor or mobile scenarios, such as logistics hubs, fleet management or temporary worksites. Private 5G networks have started to appear in retail environments to support point-of-sale devices. Some organizations deploy private 5G as a backup to redirect traffic when primary networks experience outages or slow down during peak usage periods. In most cases, 5G and Wi-Fi operate as complementary, not competing, technologies.
Real-world example: NFL deems Wi-Fi a critical resource
Earlier this year at the NFL TAC meetings, I caught up with Aaron Amendolia, deputy chief information officer of the NFL, and we talked about the importance of the network and Wi-Fi for fan experience. “What’s key to us is that our fans have a unique premium experience and that starts with technology,” he said. “We need good connectivity into the stadium, great game presentation from the sound and boards and using the device to get extra information such as stats and highlights.”
I followed up to ask him about the role of Wi-Fi versus 5G and whether the former was still crucial. “Wi-Fi is still critical,” he said. “If you look at it through the lens of the venue operator, cellular and Wi-Fi are both needed as fans come in with a wide range of devices. More and more fans are coming internationally, and we need to be ready to service them in any way they want to connect. As a league, we want to offer the best options to connect.” He added, “Also, there are areas of the stadium that may not get great cellular coverage and as a league we need to ensure back of house, corner use cases and all areas of concern are covered, and Wi-Fi does an excellent job of that.”
A new concern for the league today is that connectivity needs to support what Amendolia described as transitions. “Historically fans used their device at their seats,” he said. “Now it’s from the parking lot to the stadium or from the seat to concession or even for pop-up experiences we create.”
One of the underappreciated aspects of Wi-Fi is turning network data into business insights. Extreme Networks has been the official Wi-Fi Analytics provider for the league for over a decade and in that time the use of Wi-Fi data has evolved dramatically.
“Initially we wanted to understand how we are performing against our own standards by measuring bandwidth and throughput to the device,” Amendolia said. “That’s evolved over time to tracking things at the application layer to understand what fans are doing at the game. We have integrated this into our stat system so we can see, when a touchdown pass occurs, what the spike in usage is for the social applications. This provides great insights as to how our fans want to share our content.”
Final thoughts: Wi-Fi is as important as ever
As long as we have competing technologies, we will have the debate over whether one is required, but that’s the wrong way to think about it. IT decision makers need to understand the strengths of both 5G and Wi-Fi and use both in a complementary way to deliver the best network experience. Though the NFL might be considered a niche use case, since not every business has tens of thousands of people walking into their venues on “game day,” most businesses have more in common with the NFL than not.
In sports we call visitors fans, but in healthcare they are referred to as patients, in retail they are customers, and at airports they are travelers. All these people coming into these various organizations are using devices to shop, check in, look up information and more. If they can’t do it at that place of business, they’ll go elsewhere.
I recently flew two airlines I don’t usually fly, and neither offered Wi-Fi. It’s not that it was broken, which can happen; they simply don’t have it across their fleets. I won’t fly those airlines again. Airlines differ widely in experience, from lounges to seat style to food options, but for many people the decision to build loyalty with a brand is often rooted in connectivity.
As long as budgets and user experiences matter, Wi-Fi matters, today and into the foreseeable future.
There is a growing duality of opposing forces that needs to be dealt with if customers are to have success with artificial intelligence.
My research shows that more than 90% of organizations believe the network to be more important to business operations than it was two years ago. At the same time, almost the same number believe it to be more complex. These opposing forces of complexity and importance need to be resolved if companies are to attain the return on investment they seek with AI.
This week at Cisco Partner Summit, the company’s annual reseller event, Cisco Systems Inc. unveiled a new digital platform that provides information technology teams with a tool, built on the unification of the company’s data, to monitor technology, run system checks and fix issues before they escalate. Built with AI, Cisco IQ combines automation, analytics and Cisco’s own technical insights into a single dashboard.
The reinvention of Cisco Customer Experience, which is Cisco’s support and services organization, is something Liz Centoni, a Cisco executive vice president and chief customer experience officer, has been working on since she became leader of the group about 18 months ago. What’s interesting about Centoni is that she has a product background rather than a services one, but that helped in transforming the team.
At Partner Summit I asked Centoni why having come from product was an advantage. “Cisco is a product company and CX is here to support the technology,” she said. “The goal of Cisco IQ is to fundamentally change the nature of supporting and servicing customers by proactively addressing problems before they emerge.” Like much of the industry, Cisco’s support model has traditionally focused on fixing problems after they occur. Though this reactive motion has been the norm, it keeps engineers in firefighting mode.
Because of its massive footprint, Cisco has a tremendous amount of infrastructure data – perhaps more than any other vendor. During her keynote, Centoni explained how agentic AI is used to change the service model.
“CX is the sweet spot for agentic because it gives us the opportunity to change the nature of how we interact with our customers,” she said. “We become trusted advisers, not just service requestors or case processors as our teams have complete context.”
Historically, she added, “we solved problems by throwing more people into the mix, but this is exactly what an agentic system was built for. It’s continuously learning, predicting and understanding the whole stack.” It’s important to note the “we” Centoni referred to was inclusive of the more than a half-million partners Cisco has, as many of them rely on Cisco CX as part of their services.
Cisco IQ combines several key capabilities. It allows IT teams to run on-demand assessments for security, configurations and compliance, but also emerging areas such as quantum readiness and regulatory checks. The assessments present potential risks or misconfigurations, along with clear guidance on how to address these issues. Beyond assessments, Cisco IQ provides visibility into an organization’s entire asset inventory. For example, it shows device health, software versions and lifecycle timelines.
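As a rough illustration of the kind of automated assessment described above, here is a small sketch that scans a device inventory for policy and lifecycle problems and emits a plain-language finding for each. This is not Cisco IQ's actual implementation; the inventory records, version-policy floor and end-of-life dates are all invented for the example.

```python
# Hypothetical illustration of an automated infrastructure assessment;
# the inventory, policy floor and dates below are made up.

from datetime import date

INVENTORY = [
    {"device": "edge-sw-01", "version": (17, 9), "eol": date(2026, 3, 1)},
    {"device": "core-rtr-02", "version": (16, 12), "eol": date(2025, 6, 30)},
    {"device": "ap-floor3-11", "version": (17, 12), "eol": date(2028, 1, 15)},
]

MIN_VERSION = (17, 9)  # invented policy: minimum acceptable software version


def assess(inventory, today):
    # Flag devices running below the policy version or past end-of-life,
    # pairing each with a reason an operator can act on directly.
    findings = []
    for item in inventory:
        if item["version"] < MIN_VERSION:
            findings.append((item["device"], "software below policy minimum"))
        if item["eol"] <= today:
            findings.append((item["device"], "hardware past end-of-life"))
    return findings


for device, reason in assess(INVENTORY, date(2025, 12, 1)):
    print(f"{device}: {reason}")
```

Running the same assessment again after remediation would return an empty findings list, mirroring the repeatable confirm-the-fix workflow Cisco demonstrated.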
All of this is enabled by AI agents that analyze, diagnose and resolve problems. Cisco’s research found that 93% of its customers believe agentic AI will create more personalized, proactive and predictive experiences. That expectation aligns with Cisco’s own vision, where every interaction feels tailored to the customer’s unique needs.
Cisco IQ is built on a series of purpose-built agents that work together to improve service. One looks at documents and creates a knowledge base; others diagnose devices, retrieve information and handle remediation. These agents work together to provide solutions through the Cisco IQ interface. Cisco’s goal is to build hundreds of these agents that talk to one another and orchestrate the work for its customers.
Centoni shared an example of how Cisco IQ can read and interpret complex technical documents, of which Cisco has many, and turn that information into automated system checks. During a demo, Cisco IQ performed a security assessment that showed how many devices were affected and where the issues were. From there, IT teams could click to see more details, including AI-generated summaries that explained the problem in plain language. The same assessment could be repeated to confirm that all the issues were resolved. A process that once required people to read long documents and cross-check configurations was mostly automated.
Organizations have several flexible options when it comes to deploying Cisco IQ, which will roll out in the second half of FY2026. It can be deployed as a software-as-a-service platform, hosted and maintained by Cisco. It can be installed on-premises inside a company’s own data center but still tethered to Cisco’s cloud. In highly secure environments, Cisco IQ can run offline (air-gapped), without external network connections.
Centoni noted that Cisco IQ is part of a broader effort to simplify and unify CX across all of Cisco’s service models. As part of the rollout, Cisco is consolidating its services into two offerings: Cisco Support, with standard, enhanced and signature tiers, and Cisco Professional Services, available as either a subscription or one-time engagements.
During my discussion, I asked Centoni why this was announced at Partner Summit versus Cisco Live, which is targeted at users. She explained that partners are key to how Cisco plans to deliver Cisco IQ. Partners can support their customers no matter how their systems are set up and at every stage, from planning and deployment to ongoing management. The platform gives partners access to the same automation and intelligence tools Cisco uses internally.
In the next few quarters, Cisco will trial Cisco IQ with a select set of partners and then roll it out broadly. An interesting part of the process, and a test for Cisco IQ, is that the company is not asking its partners, or even its own teams, to follow a fixed set of steps to get it up and running. Those steps will be dynamic, with the goal of meeting customers where they are and understanding their intent. Cisco IQ uses generative AI and agentic AI to provide the right instructions and the right information to customers and partners.
Centoni wrapped up her keynote by talking about the evolution of Cisco CX. “This is not just repackaging of what we already have,” she said. “We’re delivering real-time, passive insights, comprehensive infrastructure assessments and proximity troubleshooting powered by AI, which enables us to deliver what customers want: resiliency, simplicity and faster time to value.”
Cisco IQ represents a new approach in how IT delivers value in an AI-driven era. By reducing day-to-day friction and giving organizations the tools to act sooner, they can spend more time focusing on innovation and resilience rather than firefighting.

