Intelligent Cloud Computing Architecture for Adaptive Application Delivery
Summary of the Inventive Concept
A next-generation cloud architecture that leverages AI-driven traffic prediction, decentralized resource allocation, and hybrid routing optimization to deliver low-latency, high-quality application experiences in cloud computing environments.
Background and Problem Solved
The original patent addressed the limitations of traditional data center routing and forwarding methods in cloud computing environments, which often resulted in suboptimal application performance due to network congestion, latency, and packet loss. However, the original approach relied on static network tests and did not account for real-time traffic fluctuations or user preferences. The new inventive concept solves this problem by introducing AI-driven traffic prediction, decentralized resource allocation, and hybrid routing optimization to dynamically adjust routing paths and resource allocation, ensuring optimal application performance and user experience.
Detailed Description of the Inventive Concept
The new inventive concept comprises three cooperating modules: a neural-network-based traffic prediction module, a distributed data center resource allocation module, and a hybrid routing protocol optimization module. Together, these modules dynamically adjust routing paths and resource allocation based on real-time traffic predictions and application performance metrics. The system receives user requests for application sessions and generates dynamic application profiles from real-time network performance metrics and user preferences. A decentralized, AI-based optimization framework then allocates data center resources and routing paths to minimize latency and maximize application quality of service; this framework may be implemented on a blockchain, coordinating allocation decisions without a central controller to ensure low-latency, high-quality experiences for demanding workloads such as virtual reality (VR).
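The prediction-then-allocation flow described above can be sketched in miniature. This is a hedged illustration, not the patented method: an exponentially weighted moving average (EWMA) stands in for the neural-network traffic predictor, and a greedy weighted score stands in for the decentralized allocation step. All names (`predict_load`, `choose_data_center`, the data center records) are hypothetical.

```python
# Minimal sketch of predict-then-allocate, assuming an EWMA predictor
# in place of the neural-network traffic prediction module and a greedy
# latency/load score in place of the decentralized allocator.

def predict_load(samples, alpha=0.5):
    """Predict next-interval load from recent samples via EWMA smoothing."""
    estimate = samples[0]
    for s in samples[1:]:
        estimate = alpha * s + (1 - alpha) * estimate
    return estimate

def choose_data_center(centers, latency_weight=0.7):
    """Pick the data center minimizing a weighted latency/predicted-load score."""
    def score(c):
        predicted = predict_load(c["load_samples"])
        return latency_weight * c["rtt_ms"] + (1 - latency_weight) * predicted
    return min(centers, key=score)

# Hypothetical candidate data centers for one application session request.
centers = [
    {"name": "dc-east", "rtt_ms": 20.0, "load_samples": [0.9, 0.95, 0.92]},
    {"name": "dc-west", "rtt_ms": 35.0, "load_samples": [0.2, 0.25, 0.3]},
]
best = choose_data_center(centers)  # lowest combined latency/load score
```

In a real system the score would be derived from the dynamic application profile (user preferences, QoS targets) rather than a fixed latency weight, and the allocation decision would be negotiated across nodes rather than computed centrally.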
Novelty and Inventive Step
The new claims introduce the use of AI-driven traffic prediction, decentralized resource allocation, and hybrid routing optimization, which are not present in the original patent. These components enable real-time adaptation to traffic fluctuations and user preferences, resulting in significantly improved application performance and user experience. The combination of these components and the decentralized AI-based optimization framework constitutes a novel and non-obvious inventive step beyond the original patent.
Alternative Embodiments and Variations
Alternative embodiments of the inventive concept could include the use of edge computing, fog computing, or other distributed computing architectures to further reduce latency and improve application performance. Variations of the system could also be implemented using different AI algorithms, such as reinforcement learning or graph neural networks, to optimize traffic prediction and resource allocation.
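The reinforcement-learning variation mentioned above can be illustrated with a simple epsilon-greedy multi-armed bandit that learns which routing path yields the lowest latency. This is a sketch under stated assumptions, not the claimed system: the path names, latency values, and `PathBandit` class are all hypothetical.

```python
import random

# Illustrative epsilon-greedy bandit for routing-path selection, assuming
# reward = negative observed latency (higher reward = lower latency).

class PathBandit:
    def __init__(self, paths, epsilon=0.1):
        self.epsilon = epsilon
        self.values = {p: 0.0 for p in paths}   # estimated reward per path
        self.counts = {p: 0 for p in paths}     # samples seen per path

    def select(self):
        """Explore a random path with probability epsilon, else exploit."""
        if random.random() < self.epsilon:
            return random.choice(list(self.values))
        return max(self.values, key=self.values.get)

    def update(self, path, reward):
        """Incremental-mean update of the path's action-value estimate."""
        self.counts[path] += 1
        self.values[path] += (reward - self.values[path]) / self.counts[path]

random.seed(0)
bandit = PathBandit(["direct", "relay-a", "relay-b"])
# Hypothetical mean latencies (ms) per path, with Gaussian measurement noise.
true_latency = {"direct": 40.0, "relay-a": 25.0, "relay-b": 60.0}
for _ in range(500):
    p = bandit.select()
    bandit.update(p, -true_latency[p] + random.gauss(0, 2))

best_path = max(bandit.values, key=bandit.values.get)  # lowest-latency path
```

A graph neural network variation would instead encode the data center topology as a graph and learn per-link congestion embeddings, but the feedback loop (observe latency, update the policy, reroute) is the same.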
Potential Commercial Applications and Market
The inventive concept has significant commercial potential in the cloud gaming, cloud virtual reality (VR), and remote workstation markets, where low-latency and high-quality application experiences are critical. The system could also be applied to other cloud-based applications, such as video streaming, online education, and healthcare, to improve user experience and reduce latency.
CPC Classifications
| Section | Class | Group/Subgroup |
|---|---|---|
| A | A63 | A63F13/358 |
| A | A63 | A63F13/352 |
| H | H04 | H04L47/18 |
| H | H04 | H04L47/2433 |
| H | H04 | H04L67/14 |
Original Patent Information
| Field | Value |
|---|---|
| Patent Number | US 11,857,872 |
| Title | Content adaptive data center routing and forwarding in cloud computing environments |
| Assignee(s) | NVIDIA CORPORATION |