Hewlett Packard Enterprise

R2 Semiconductor Files New Patent Infringement Lawsuit Against Intel In France

Retrieved on: 
Monday, April 8, 2024

The filing comes shortly before the April 16 start of trial on the same European patent against Intel in the U.K.’s High Court of Justice, Patents Court in London.

Key Points: 
  • The filing comes shortly before the April 16 start of trial on the same European patent against Intel in the U.K.’s High Court of Justice, Patents Court in London.
  • R2’s lawsuit in France is its latest action in defense of the groundbreaking patent covering integrated voltage regulation technology invented by R2 Founder and CEO David Fisher.
  • “The invention we are protecting in Germany, the U.K., and now in France, is protected by patent throughout all of Europe. R2 is fortunate to have the means to enforce our rights to stop this egregious behavior by Intel,” said R2 CEO David Fisher.

NVIDIA Launches Generative AI Microservices for Developers to Create and Deploy Generative AI Copilots Across NVIDIA CUDA GPU Installed Base

Retrieved on: 
Monday, March 18, 2024

Built on top of the NVIDIA CUDA® platform, the catalog of cloud-native microservices includes NVIDIA NIM™ microservices for optimized inference on more than two dozen popular AI models from NVIDIA and its partner ecosystem.

Key Points: 
  • Built on top of the NVIDIA CUDA® platform, the catalog of cloud-native microservices includes NVIDIA NIM™ microservices for optimized inference on more than two dozen popular AI models from NVIDIA and its partner ecosystem (a brief usage sketch follows these key points).
  • Among the first to access the new NVIDIA generative AI microservices available in NVIDIA AI Enterprise 5.0 are leading application, data and cybersecurity platform providers including Adobe, Cadence, CrowdStrike, Getty Images, SAP, ServiceNow, and Shutterstock.
  • NVIDIA AI Enterprise microservices are coming to infrastructure software platforms including VMware Private AI Foundation with NVIDIA.
  • Enterprises can deploy production-grade NIM microservices with NVIDIA AI Enterprise 5.0 running on NVIDIA-Certified Systems and leading cloud platforms.
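
For readers unfamiliar with how these microservices are consumed, the sketch below shows one way an application might query an already deployed NIM endpoint through its OpenAI-compatible chat API. The host address, placeholder API key, and model identifier are illustrative assumptions, not details from the announcement.

```python
# Minimal sketch (assumptions noted inline): querying a locally deployed NIM
# microservice through an OpenAI-compatible chat endpoint.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:8000/v1",          # assumed address of the running NIM container
    api_key="not-needed-for-local-deployments",   # placeholder; hosted endpoints require a real key
)

response = client.chat.completions.create(
    model="meta/llama3-8b-instruct",              # assumed model identifier served by the microservice
    messages=[
        {"role": "system", "content": "You are a concise enterprise copilot."},
        {"role": "user", "content": "Summarize our Q1 support tickets in three bullet points."},
    ],
    max_tokens=256,
    temperature=0.2,
)

print(response.choices[0].message.content)
```

In production, the release indicates such microservices run under NVIDIA AI Enterprise 5.0 on NVIDIA-Certified Systems or leading cloud platforms; the client-side calling pattern stays the same.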

NVIDIA Healthcare Launches Generative AI Microservices to Advance Drug Discovery, MedTech and Digital Health

Retrieved on: 
Monday, March 18, 2024

The microservices, 25 of which launched today, can accelerate transformation for healthcare companies as generative AI introduces numerous opportunities for pharmaceutical companies, doctors and hospitals.

Key Points: 
  • The microservices, 25 of which launched today, can accelerate transformation for healthcare companies as generative AI introduces numerous opportunities for pharmaceutical companies, doctors and hospitals.
  • These include screening for trillions of drug compounds to advance medicine, gathering better patient data to aid early disease detection and implementing smarter digital assistants.
  • “By helping healthcare companies easily build and manage AI solutions, we’re enabling them to harness the full power and potential of generative AI.”
  • The new suite of healthcare microservices includes NVIDIA NIM, which provides optimized inference for a growing collection of models across imaging, medtech, drug discovery and digital health.
  • Hippocratic AI is developing task-specific Generative AI Healthcare Agents powered by the company’s safety-focused LLM for healthcare and connected to NVIDIA Avatar Cloud Engine microservices; the agents will use NVIDIA NIM for low-latency inferencing and speech recognition.

NVIDIA Announces New Switches Optimized for Trillion-Parameter GPU Computing and AI Infrastructure

Retrieved on: 
Monday, March 18, 2024

The world’s first networking platforms capable of end-to-end 800Gb/s throughput, NVIDIA Quantum-X800 InfiniBand and NVIDIA Spectrum™-X800 Ethernet push the boundaries of networking performance for computing and AI workloads.

Key Points: 
  • The world’s first networking platforms capable of end-to-end 800Gb/s throughput, NVIDIA Quantum-X800 InfiniBand and NVIDIA Spectrum™-X800 Ethernet push the boundaries of networking performance for computing and AI workloads.
  • “NVIDIA Networking is central to the scalability of our AI supercomputing infrastructure,” said Gilad Shainer, senior vice president of Networking at NVIDIA.
  • “NVIDIA X800 switches are end-to-end networking platforms that enable us to achieve trillion-parameter-scale generative AI essential for new AI infrastructures.”
  • Initial adopters of Quantum InfiniBand and Spectrum-X Ethernet include Microsoft Azure and Oracle Cloud Infrastructure.
  • “Behind this transformation is the evolution of data centers into high-performance AI engines with increased demands for networking infrastructure,” said Nidhi Chappell, vice president of AI Infrastructure at Microsoft Azure.

NVIDIA Blackwell Platform Arrives to Power a New Era of Computing

Retrieved on: 
Monday, March 18, 2024

The NVIDIA GB200 Grace Blackwell Superchip connects two NVIDIA B200 Tensor Core GPUs to the NVIDIA Grace CPU over a 900GB/s ultra-low-power NVLink chip-to-chip interconnect.

Key Points: 
  • The NVIDIA GB200 Grace Blackwell Superchip connects two NVIDIA B200 Tensor Core GPUs to the NVIDIA Grace CPU over a 900GB/s ultra-low-power NVLink chip-to-chip interconnect.
  • The rack-scale NVIDIA GB200 NVL72 system combines 36 Grace Blackwell Superchips, which include 72 Blackwell GPUs and 36 Grace CPUs interconnected by fifth-generation NVLink (the component arithmetic is reproduced in the sketch after this list).
  • The Blackwell product portfolio is supported by NVIDIA AI Enterprise, the end-to-end operating system for production-grade AI.
  • To learn more about the NVIDIA Blackwell platform, watch the GTC keynote and register to attend sessions from NVIDIA and industry leaders at GTC, which runs through March 21.
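
For readers checking the component math, the short calculation below reproduces the counts and per-link bandwidth stated above. The per-superchip composition (two B200 GPUs plus one Grace CPU) and the 36-superchip configuration come directly from this item; nothing beyond that is claimed.

```python
# Reproducing the component counts stated in the announcement:
# each GB200 Grace Blackwell Superchip = 2 Blackwell GPUs + 1 Grace CPU,
# linked by a 900 GB/s NVLink chip-to-chip (C2C) interconnect.
GPUS_PER_SUPERCHIP = 2
CPUS_PER_SUPERCHIP = 1
NVLINK_C2C_GBPS = 900          # GB/s per chip-to-chip link, as stated
SUPERCHIPS_PER_SYSTEM = 36     # the rack-scale configuration described above

total_gpus = SUPERCHIPS_PER_SYSTEM * GPUS_PER_SUPERCHIP   # 72 Blackwell GPUs
total_cpus = SUPERCHIPS_PER_SYSTEM * CPUS_PER_SUPERCHIP   # 36 Grace CPUs

print(f"{total_gpus} GPUs and {total_cpus} CPUs across {SUPERCHIPS_PER_SYSTEM} superchips")
```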

CIO Celebrates Innovations in Business Technology with 2024 CIO 100 & Hall of Fame Awards

Retrieved on: 
Monday, March 18, 2024

Boston, March 18, 2024 (GLOBE NEWSWIRE) -- Foundry’s CIO – the executive-level IT media brand providing insight into business technology leadership – is pleased to recognize the 2024 CIO 100 award winners and Hall of Fame inductees.

Key Points: 
  • Boston, March 18, 2024 (GLOBE NEWSWIRE) -- Foundry’s CIO – the executive-level IT media brand providing insight into business technology leadership – is pleased to recognize the 2024 CIO 100 award winners and Hall of Fame inductees.
  • “The CIO 100 Symposium & Awards continues a tradition of over 25 years of the highest quality content for IT leaders and their teams. In addition to celebrating 100 organizations during the dinner and awards ceremony, I am thrilled to feature many of these IT leaders as speakers,” stated Elizabeth Cutler, Content Director, CIO 100 Symposium & Awards.
  • 2024 CIO 100 Hall of Fame Inductees:
    Lookman Fazal, Chief Information & Digital Officer, NJ TRANSIT
    Shamim Mohammad, EVP, Chief Information & Technology Officer, CarMax
  • 2024 CIO 100 Award Winners:

New MLPerf Inference Benchmark Results Highlight The Rapid Growth of Generative AI Models

Retrieved on: 
Wednesday, March 27, 2024

The MLPerf Inference benchmark suite, which encompasses both data center and edge systems, is designed to measure how quickly hardware systems can run AI and ML models in a variety of deployment scenarios.

Key Points: 
  • The MLPerf Inference benchmark suite, which encompasses both data center and edge systems, is designed to measure how quickly hardware systems can run AI and ML models in a variety of deployment scenarios (an illustrative timing sketch follows these key points).
  • We are thrilled to collaborate with Meta to bring Llama 2 70B to the MLPerf Inference v4.0 benchmark suite.
  • "Generative AI use-cases are front and center in our v4.0 submission round,” said Mitchelle Rasquinha, co-chair of the MLPerf Inference working group.
  • “The v4.0 release of MLPerf Inference represents a full embrace of generative AI within the benchmark suite,” said Miro Hodak, MLPerf Inference co-chair.
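
As a rough illustration of what measuring "how quickly hardware systems can run AI and ML models" involves, the sketch below times a stand-in inference function in a simple offline-style scenario and reports throughput and tail latency. It is not the MLPerf LoadGen harness; the query count, the percentile choice, and the dummy model are assumptions made purely for illustration.

```python
# Illustrative only: a toy offline-style measurement of inference throughput
# and p99 latency. The real MLPerf Inference suite drives submissions through
# its LoadGen harness and standardized scenarios; this sketch just shows the
# kind of quantities such benchmarks report.
import statistics
import time


def run_inference(query: str) -> str:
    """Stand-in for a real model call (assumption for this sketch)."""
    time.sleep(0.002)  # pretend the accelerator takes ~2 ms per query
    return f"result for {query!r}"


def benchmark(num_queries: int = 500) -> None:
    latencies = []
    start = time.perf_counter()
    for i in range(num_queries):
        t0 = time.perf_counter()
        run_inference(f"query-{i}")
        latencies.append(time.perf_counter() - t0)
    elapsed = time.perf_counter() - start

    latencies.sort()
    p99 = latencies[int(0.99 * (len(latencies) - 1))]
    print(f"throughput:   {num_queries / elapsed:,.1f} queries/s")
    print(f"mean latency: {statistics.mean(latencies) * 1e3:.2f} ms")
    print(f"p99 latency:  {p99 * 1e3:.2f} ms")


if __name__ == "__main__":
    benchmark()
```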

Hewlett Packard Enterprise Leverages GenAI to Enhance AIOps Capabilities of HPE Aruba Networking Central Platform

Retrieved on: 
Tuesday, March 26, 2024

Hewlett Packard Enterprise (NYSE: HPE) today announced the expansion of its AIOps network management capabilities by integrating multiple generative AI (GenAI) Large Language Models (LLMs) within HPE Aruba Networking Central, HPE’s cloud-native network management solution, hosted on the HPE GreenLake Cloud Platform.

Key Points: 
  • Hewlett Packard Enterprise (NYSE: HPE) today announced the expansion of its AIOps network management capabilities by integrating multiple generative AI (GenAI) Large Language Models (LLMs) within HPE Aruba Networking Central, HPE’s cloud-native network management solution, hosted on the HPE GreenLake Cloud Platform.
  • In related news, HPE also announced Verizon Business is expanding its managed services portfolio to include HPE Aruba Networking Central.
  • The new GenAI LLM functionality will be incorporated into HPE Aruba Networking Central’s AI Search feature, complementing existing ML-based AI throughout HPE Networking Central to provide deeper insights, better analytics, and more proactive capabilities.
  • In addition to being a standalone SaaS offering, HPE Aruba Networking Central is also included as part of an HPE GreenLake for Networking (NaaS) subscription and is available through the HPE GreenLake platform.

Juniper Networks Announces Date of First Quarter Preliminary Financial Results

Retrieved on: 
Tuesday, March 19, 2024

Juniper Networks (NYSE: JNPR), a leader in secure, AI-Native networks, today announced it will release preliminary financial results for the quarter ending March 31, 2024, on Thursday, April 25, 2024, after the close of the market.

Key Points: 
  • Juniper Networks (NYSE: JNPR), a leader in secure, AI-Native networks, today announced it will release preliminary financial results for the quarter ending March 31, 2024, on Thursday, April 25, 2024, after the close of the market.
  • There will be no conference call on April 25, 2024 due to the proposed merger with Hewlett Packard Enterprise.
  • Note: Our Customer Solution revenue categories will include the following name changes:
    Historical revenue by Customer Solution does not change; this is only a name change.

Hewlett Packard Enterprise Debuts End-to-End AI-Native Portfolio for Generative AI

Retrieved on: 
Monday, March 18, 2024

Today at NVIDIA GTC, Hewlett Packard Enterprise (NYSE: HPE) announced updates to one of the industry’s most comprehensive AI-native portfolios to advance the operationalization of generative AI (GenAI), deep learning, and machine learning (ML) applications.

Key Points: 
  • Today at NVIDIA GTC, Hewlett Packard Enterprise (NYSE: HPE) announced updates to one of the industry’s most comprehensive AI-native portfolios to advance the operationalization of generative AI (GenAI), deep learning, and machine learning (ML) applications.
  • The solution is enhanced by HPE’s machine learning platform and analytics software, NVIDIA AI Enterprise 5.0 software with new NVIDIA NIM microservices for optimized inference of generative AI models, as well as NVIDIA NeMo Retriever and other data science and AI libraries.
  • For more information or to order it today, visit HPE’s enterprise computing solution for generative AI.
  • HPE’s AI software is available on both HPE’s supercomputing and enterprise computing solutions for generative AI to provide a consistent environment for customers to manage their GenAI workloads.