Announcements & General Discussion


What We’re Bringing to Red Hat Summit 2026 — and Why It Matters

By Jeff Cheng posted an hour ago

  

If you’re heading to Red Hat Summit 2026 in Atlanta, come find us at Booth 133.

We organized the booth around the conversations customers are actually having with us — not around product silos. The presentation is built with click-to-topic navigation, so booth staff can jump directly to the topic that matters most: AI, virtualization modernization, cyber resilience, or trusted hybrid cloud.

Think of it as four high-level stories — 100, 200, 300, and 400 — with sixteen solution areas underneath.

Here’s the preview.

100 — Sustainable + Responsible AI

We help customers turn distributed enterprise data into production AI faster and more responsibly.

That sounds simple, but it is where many AI programs get stuck. The hard part is usually not “Can we run a model?” It is: Where is the data? How many copies are we creating? Who governs it? Can the GPU estate actually get fed? And can we scale AI without creating another generation of silos?

That is why the first pillar starts with the Hitachi Data Lakehouse. The idea is straightforward: query data where it lives instead of copying everything everywhere. VSP One Block, File, and Object provide the data foundation. Zetaris enables federated query. HADB provides in-place SQL analytics. Pentaho helps with catalog, governance, and lifecycle optimization. Red Hat OpenShift provides the container platform for the reference architecture.

Less ETL. Less duplication. Better governance. Your data pipeline deserves better than spaghetti.

 

We are also showing Hitachi iQ and iQ Studio — and yes, we are keeping those two names intentionally separate. Hitachi iQ is the AI-ready infrastructure and solutions portfolio. iQ Studio is the agent and application software experience: no-code agent building, blueprints, MCP connectors, RAG/vector/model orchestration integration, and on-premises options for data sovereignty.

If your question is, “How do I build an AI platform?” we can talk Hitachi iQ. If your question is, “How do I build and manage AI agents?” we can talk iQ Studio. If your answer is “both,” even better.

For distributed AI data, we are showing Hitachi iQ with Hammerspace and Red Hat OpenShift AI. This is about feeding AI from anywhere: global namespace, local access patterns, scalable GPU compute, parallel storage, and OpenShift AI for inference, fine-tuning, training, and agentic AI. The point is not to copy every dataset to every location. The point is to make distributed data usable for AI without making operations miserable.

 

And yes, we also have the unexpected conversation starter: Floating Data Centers. MOL, Hitachi, and Hitachi Systems signed an MOU to explore floating data centers converted from second-hand vessels. The deck frames this as an initiative for AI-driven data-center capacity pressure, with demand verification, specification review, and feasibility studies. When land, power, cooling, or deployment speed becomes the bottleneck, the idea gets very interesting very quickly.

200 — Modernized Virtualization

We help customers modernize virtualization without losing the enterprise data services they already trust.

The VMware migration conversation is very real. But for most customers, the decision is not simply “move VMs.” It is “move VMs without breaking storage operations, DR assumptions, migration timelines, performance expectations, or the team’s confidence.”

That is where Hitachi Vantara CSI comes in, bringing enterprise storage data services to Red Hat OpenShift Container Platform and OpenShift Virtualization for both VMs and containers. HSPC handles provisioning and storage operations such as snapshots, cloning, expansion, live migration, and stretched PVs. HRPC / Replication and DR Operator supports replicated PVs for DR and migration use cases. HSPP provides monitoring with Prometheus and Grafana.
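To make the "enterprise storage experience on OpenShift" idea concrete, here is a minimal sketch of what consuming CSI-provisioned storage looks like from the application side. The provisioner string, pool parameter, and names below are illustrative assumptions, not documented values — the point is that VMs and containers request storage through ordinary Kubernetes objects while the data services run underneath.

```yaml
# Sketch only: driver name and parameters are assumptions for illustration.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: vsp-one-block
provisioner: hspc.csi.hitachi.com    # assumed CSI driver name
parameters:
  poolID: "1"                        # assumed pool selector
allowVolumeExpansion: true           # enables online expansion
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: vm-data
spec:
  accessModes: [ReadWriteMany]       # RWX block access supports VM live migration
  volumeMode: Block
  storageClassName: vsp-one-block
  resources:
    requests:
      storage: 50Gi
```

Once the claim is bound, OpenShift Virtualization attaches the volume to the VM like any other disk — snapshots, clones, and expansion happen through the same Kubernetes-native workflow.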

In plain English: OpenShift gets a better enterprise storage experience.

For migration, we are showing accelerated VM migration using the Storage-Offload Plugin for MTV. Internally, you may hear us say “MTV+,” but the official framing is storage offload for MTV. The concept is important: move migration data through storage operations instead of forcing everything through the host path. In the scenario shown in the deck, the storage-offload approach is illustrated as dramatically faster than host-based migration. We will use the exact numbers only in that slide context — because context matters.

We are also showing the broader VSP One + Red Hat OpenShift portfolio story: one data foundation for legacy, modern, hybrid, and AI workloads. Some customers want storage-only building blocks. Some want an integrated solution with compute, networking, storage, automation, and services. The booth conversation is designed to route either way.

And for forward-looking OpenShift architectures, we are previewing VSP One SDS Block on OpenShift: a compact, 3-node hyperconverged architecture that brings storage, compute, containers, VMs, and AI/ML into one OpenShift environment. This is a tech preview topic, so we will not make GA or support claims from the booth. But the direction is exciting: OpenShift as a practical platform for VMs, containers, and AI without proprietary hypervisor lock-in.

 

300 — Cyber Resilience

We help customers keep OpenShift workloads protected, recoverable, and continuously available.

Storage without resilience is just a liability. As more VMs, containers, databases, and AI workloads land on OpenShift, customers are asking the next obvious question: How do we protect it? How do we recover it? And how do we avoid turning DR into a manual fire drill?

For OpenShift VM and container DR, we are showing Hitachi Vantara CSI-based disaster recovery automation. The deck shows a hub cluster, two datacenter clusters, DR policies with YAML/Git, storage replication, application failover, and storage failover. Policies can be applied using namespace, application/workload, or PVC selection patterns, and the deck explicitly shows VirtualMachine resources as part of the protected workload story.
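To give a feel for "DR policies with YAML/Git," here is a hypothetical sketch of what a declarative protection policy could look like. The CRD kind, API group, and every field name below are assumptions for illustration only — the deck shows the pattern (selection by namespace, application, or PVC; storage replication; failover), not this exact schema.

```yaml
# Hypothetical custom resource; kind, group, and fields are assumptions.
apiVersion: dr.example.com/v1alpha1
kind: ProtectionPolicy
metadata:
  name: erp-protection
spec:
  selector:
    namespace: erp-prod          # select by namespace...
    matchLabels:
      app: erp                   # ...or by application/workload labels
  replication:
    peerCluster: dc2             # second datacenter cluster
    mode: async                  # storage-level replication
  failover:
    automatic: false             # failover triggered via a Git change, not a runbook
```

The design point is that the policy lives in Git: recovery intent is reviewed, versioned, and applied by the hub cluster rather than rediscovered during an outage.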

Translation: this is not just “replicate a volume and hope someone remembers the runbook.” It is policy-driven recovery thinking for OpenShift.

For workloads that need continuous availability rather than recovery after outage, we are showing stretched cluster patterns with OpenShift Virtualization and GAD-backed stretched PVs. The booth-level story is simple: stretched PVs look like regular PVs to applications, but use active-active storage underneath. The deck illustrates zero RTO and zero RPO in the shown architecture. As always, distance, latency, and supportability details need SME-level guidance, not hallway improvisation.

 

For backup and ransomware resilience, we are broadening the conversation beyond a single tool. The Master Deck now covers VSP One Object, HDPS/Commvault/Veeam ecosystem integration, Veeam Kasten for Kubernetes protection, and isolated recovery concepts with Veeam and Hitachi. That means we can talk about Kubernetes backup, VM backup, object-based retention, immutability, isolated recovery, and ransomware-ready restore thinking in one resilience pillar.

Why manage disconnected backup strategies if the business expects one recovery outcome?

And for the “what’s next?” conversation, we are showing SuperGAD as a technology-preview concept for very long-distance stretched cluster architectures. The deck uses the 600 km concept to make the distance tangible — think San Francisco to Las Vegas or Paris to Frankfurt — and ties the concept to IOWN APN and joint testing context. This is not a GA claim. It is a forward-looking resilience discussion for customers thinking beyond normal metro availability.

400 — Trusted Hybrid Cloud

We help customers operate consistently across on-prem, cloud, and edge with enterprise data services and automation.

Hybrid cloud only works if the operating model is real. Otherwise, “hybrid” becomes a nice diagram and a painful ticket queue.

For public cloud OpenShift, we are showing VSP One SDS Cloud with ROSA. The deck shows Terraform and Ansible automation from SDS Cloud infrastructure provisioning through ROSA, HSPC CSI installation, StorageClass configuration, PVC/PV binding, and PostgreSQL workload deployment. The story is not just “storage in cloud.” It is enterprise data services plus infrastructure-as-code for cloud OpenShift.
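The end of that automation chain is ordinary Kubernetes objects. As a sketch (StorageClass name and container image are assumptions, not values from the deck), the PostgreSQL step might look like this:

```yaml
# Sketch only: class name and image are illustrative assumptions.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pgdata
spec:
  accessModes: [ReadWriteOnce]
  storageClassName: sds-cloud          # assumed class backed by VSP One SDS Cloud
  resources:
    requests:
      storage: 20Gi
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: postgres
spec:
  replicas: 1
  selector:
    matchLabels: {app: postgres}
  template:
    metadata:
      labels: {app: postgres}
    spec:
      containers:
        - name: postgres
          image: registry.example.com/postgresql:16   # placeholder image
          volumeMounts:
            - name: data
              mountPath: /var/lib/pgsql/data
      volumes:
        - name: data
          persistentVolumeClaim:
            claimName: pgdata
```

Terraform and Ansible produce the cluster, driver, and StorageClass; the application team only ever sees the PVC.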

For operations, we are showing VSP 360 and Ansible Automation. VSP 360 is the unified control plane story: control, observe, and govern across VSP One Block, File, Object, and SDS.

Ansible turns storage into code, using Red Hat-certified automation content for VSP One Block and Object, and connecting storage workflows into broader hybrid cloud, ITSM, security, network, edge, and platform automation.
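As a sketch of what "storage as code" means in practice, a playbook task might provision a volume and hand the result to downstream automation. The collection and module paths below are assumptions for illustration — check the certified content on Automation Hub for the real module names.

```yaml
# Illustrative playbook; collection/module names are assumed, not verified.
- name: Provision a volume on VSP One Block
  hosts: localhost
  gather_facts: false
  tasks:
    - name: Create a 100 GB volume in pool 0
      hitachivantara.vspone_block.volume:   # assumed module path
        pool_id: 0
        size_gb: 100
        name: app01-data
      register: vol

    - name: Pass the result to downstream workflows (ITSM, CMDB, ...)
      ansible.builtin.debug:
        msg: "Provisioned volume {{ vol.volume_id | default('n/a') }}"
```

The same task can sit inside a larger workflow — ticket intake, network config, platform deployment — which is the point of connecting storage into broader hybrid cloud automation.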

 

The booth question is simple: which storage operations still depend on tickets, spreadsheets, or disconnected runbooks?

We are also showing a dev-preview hybrid multi-cloud data mobility concept: VSP One Block on-premises to VSP One SDS Cloud using Universal Replicator. The use cases are practical: analytics offload, cloud DR, test/dev clones, and gradual migration. This is directional / dev-preview content, so we will not overcommit dates or support scope. But it is exactly the kind of pattern customers ask for when they want cloud value without disrupting production.

Finally, we have OpenShift Planning & Sizing — and this section has moved beyond a placeholder. The current Master Deck shows Span – OpenShift planner v0.8, built using Hitachi iQ Studio + Red Hat AI. It frames cluster design, workload sizing, licensing comparison, VMware-to-OCP migration math, and report export. This is a great conversation router for customers who are not ready for a product pitch yet because they are still trying to answer: What should we actually build?

Come See Us

Booth 133.

The presentation is prepared for real booth conversations. No linear pitch required. Click the topic that matters. Follow the customer’s problem. Bring up the proof slide. Route to the right demo, asset, or SME.

Whether you are trying to feed AI with distributed data, migrate VMs to OpenShift, protect OpenShift workloads, automate hybrid operations, or simply size the first architecture correctly — we will have the people, proof points, and demos ready.

See you in Atlanta. 🤝


#Blog