AI Techniques for Decentralized Data Processing: Advanced Methods for Enhancing Scalability, Efficiency, and Real-Time Decision-Making in Distributed Architectures

Authors

  • Daniela Torres
  • Julián Castillo

Keywords

Python, TensorFlow, PyTorch, Kubernetes, Apache Kafka, Hadoop, Spark

Abstract

This paper explores advanced AI techniques tailored for decentralized data processing, addressing the limitations and challenges of traditional centralized systems. The study emphasizes the evolution of AI from symbolic reasoning to deep learning, highlighting the critical role of data processing in modern applications such as healthcare, finance, and autonomous systems. Decentralized data processing, leveraging distributed networks and edge computing, offers solutions to scalability, privacy, and latency issues inherent in centralized architectures. Key methods investigated include federated learning, which enhances privacy by training models locally on devices without sharing raw data, and edge AI, which deploys lightweight models on edge devices for real-time processing. The integration of blockchain technology further secures data sharing across decentralized networks. Empirical evaluations demonstrate the efficacy of these techniques in enhancing data privacy, reducing latency, and improving the resilience of AI systems. The study concludes that decentralized AI holds significant potential for various applications, such as smart cities, IoT, and personalized healthcare, by providing robust, efficient, and scalable data processing solutions.
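The federated learning approach described above — training models locally on each device and sharing only model parameters, never raw data — can be illustrated with a minimal simulation. The sketch below is not the authors' implementation; it is a toy federated averaging (FedAvg-style) loop over three simulated clients fitting a shared linear model with plain NumPy, where all names (`local_update`, `federated_average`) and hyperparameters are illustrative assumptions.

```python
import numpy as np

def local_update(weights, X, y, lr=0.1, epochs=5):
    # One client's local training: gradient descent on a linear model.
    # Raw data (X, y) never leaves this function -- only weights are returned.
    w = weights.copy()
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w -= lr * grad
    return w

def federated_average(client_weights, client_sizes):
    # Aggregate client models as a weighted mean, proportional to local data size.
    total = sum(client_sizes)
    return sum(w * (n / total) for w, n in zip(client_weights, client_sizes))

# Simulate three clients whose private datasets stay on-device.
rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])
clients = []
for n in (50, 80, 120):
    X = rng.normal(size=(n, 2))
    y = X @ true_w + rng.normal(scale=0.1, size=n)
    clients.append((X, y))

# Communication rounds: broadcast global weights, train locally, aggregate.
global_w = np.zeros(2)
for _ in range(20):
    updates = [local_update(global_w, X, y) for X, y in clients]
    global_w = federated_average(updates, [len(y) for _, y in clients])

print(global_w)  # converges toward true_w without any client sharing raw data
```

In a production setting this pattern is what frameworks such as TensorFlow Federated implement, with secure aggregation and client sampling replacing the naive loop shown here.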

Author Biography

Daniela Torres

Published

2024-02-12

How to Cite

Daniela Torres, & Julián Castillo. (2024). AI Techniques for Decentralized Data Processing: Advanced Methods for Enhancing Scalability, Efficiency, and Real-Time Decision-Making in Distributed Architectures. Journal of Artificial Intelligence and Machine Learning in Management, 8(2), 22–43. Retrieved from https://journals.sagescience.org/index.php/jamm/article/view/174