Co-author: Chit Thu Shine

      As the 21st century moves forward, our daily lives are in constant contact with smart devices. A great deal of information is stored on these devices, and people are concerned that their private data might leak. Traditional AI models rely on a centralized server for training: all of the data is sent back to the server. This approach struggles with large datasets and raises data privacy problems, since every piece of information has to leave the device. Training can also be cut off whenever the server connection or the network goes down. To address these privacy problems while keeping latency low, Google introduced federated learning in 2017, built on its secure and robust cloud infrastructure, and it has since become an active topic in the AI community.

      In this blog, we focus on the research and theory behind federated learning. Later, we plan to move on to code implementation and testing with recommendation, computer vision and natural language processing (NLP) projects.

Federated Learning 

Figure: The federated learning process (figure from Faisal Zaman's Medium post)

      Federated learning resembles distributed learning. In distributed machine learning, however, the data is partitioned into shards of similar size, whereas in federated learning the clients' local datasets can differ greatly in size.

      In traditional machine learning, each user's data is sent to a central server, which leaves privacy poorly protected. Federated learning handles large collections of data effectively while keeping each user's data private during training, and still produces accurate models.

       The process works through the following simple steps:

            1. A pretrained model resides on the central server.

            2. Each smartphone pulls the model from the server and trains it on its local dataset.

            3. The updated model parameters from each client are sent back to the server, which creates a new global model by averaging all of the local models. This is repeated for several rounds until a high-quality model is obtained.
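The averaging step above can be sketched in a few lines of Python. This is a minimal illustration of FedAvg-style aggregation, not the implementation of any particular framework; the function name and the toy client data are our own, and each client's contribution is weighted by its dataset size since, as noted earlier, federated clients hold datasets of unequal sizes.

```python
import numpy as np

def federated_averaging(client_weights, client_sizes):
    """Combine client models into a new global model.

    Each client's weights are scaled by its share of the total data,
    in the style of the FedAvg algorithm.
    """
    total = sum(client_sizes)
    # Accumulate the weighted sum layer by layer.
    global_weights = [np.zeros_like(layer) for layer in client_weights[0]]
    for weights, size in zip(client_weights, client_sizes):
        for i, layer in enumerate(weights):
            global_weights[i] += (size / total) * layer
    return global_weights

# Three clients, each holding one 2x2 weight matrix and an unequal dataset.
clients = [[np.full((2, 2), 1.0)], [np.full((2, 2), 2.0)], [np.full((2, 2), 4.0)]]
sizes = [10, 30, 60]
new_global = federated_averaging(clients, sizes)
# Weighted mean: 0.1*1.0 + 0.3*2.0 + 0.6*4.0 = 3.1
```

Note that the client with 60 samples dominates the average, which is exactly why size-weighted aggregation matters when local datasets are unbalanced.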

Why is federated learning important now?

      All users want fast responses, and AI assistance has made people expect convenience. Personal information matters to everyone, and people do not want to hand their data over to applications or leave it sitting on a server. That is why federated learning is becoming popular in the AI domain and why so many people are interested in the topic.

Comparison between traditional machine learning and Federated learning

         The main differences between the traditional approach and federated learning are:

    1. In federated learning, model updates are computed and stored on the client's device, so local progress persists even if the server goes down. With centralized training, a server outage or power failure can interrupt the whole process.
    2. In federated learning, the server stores no client data; it only aggregates the updated models. In the traditional approach, all of the information has to be sent to the server, which is less secure.
    3. Traditional machine learning can be slow when a large amount of data must be uploaded, and model accuracy can suffer.


      Federated learning is useful when clients' data is more relevant on the local device than on the server. When a vast amount of distributed data must be processed, federated-learning-based applications like the following are well suited:

1. Firefox URL bar : As the user types in the address bar, federated learning is used to improve the ranking of the suggestions Firefox displays.

2. Google’s Gboard : When Gboard shows a suggested query, your phone locally stores information about the current context and whether you clicked the suggestion. Federated Learning processes that history on-device to suggest improvements to the next iteration of Gboard’s query suggestion model.

3. Modern IoT devices : Smart and wearable devices are in constant contact with people, and timely information needs to reach them without delay. Federated learning can help improve responsiveness while keeping sensor data on the device.

4. Financial data processing : Financial data is sensitive and central to prediction in fintech, including risk prediction, fraud detection and KYC checks that prevent malicious transactions.

5. Digital healthcare prediction : Federated learning models can help diagnose rare diseases by learning from information held at various locations (e.g. hospitals and electronic health record databases) without pooling it centrally.

6. Autonomous Vehicles : Self-driving cars need real-time predictions where human lives are at stake, and federated learning models can support those real-time decisions. AI engineers are working on large projects to apply this in computer vision.


Advantages of federated learning:

- Low latency : Responses are computed on the client's device, so results come back quickly.

- Reduced network bandwidth : Less information needs to be transmitted to the cloud server.

- Secure privacy : Clients' private information never needs to be sent to the cloud.
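The bandwidth advantage comes from sending compact model updates instead of raw data, and updates can be compressed further before upload. Below is a minimal sketch of top-k sparsification, one widely used compression technique; the function name and toy update vector are our own illustration, not part of any specific library.

```python
import numpy as np

def top_k_sparsify(update, k):
    """Keep only the k largest-magnitude entries of a model update.

    Top-k sparsification shrinks the payload each client must upload;
    the server receives a mostly-zero update in its place.
    """
    flat = update.ravel()
    # Indices of the k entries with the largest absolute value.
    idx = np.argpartition(np.abs(flat), -k)[-k:]
    sparse = np.zeros_like(flat)
    sparse[idx] = flat[idx]
    return sparse.reshape(update.shape)

update = np.array([0.01, -2.0, 0.03, 1.5, -0.02, 0.5])
compressed = top_k_sparsify(update, k=2)
# Only the two largest-magnitude values (-2.0 and 1.5) survive.
```

In practice such schemes are often paired with error feedback, where the dropped residual is carried over to the next round so no gradient signal is permanently lost.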


Disadvantages of federated learning:

- Slow connections : Because model updates are high-dimensional and smart devices have limited communication bandwidth, communication can become a bottleneck.

- Complex environment : Setting up training across heterogeneous devices is complicated, and on-device training can be demanding without hardware acceleration such as a GPU.

- Difficult to detect malicious updates : When multiple clients train the model on fake or poisoned data, it is hard to detect which clients' contributions are malicious.
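To make the last point concrete, here is one simple illustrative defense: drop updates whose norm deviates far from the median, since boosted (poisoned) updates often have unusually large norms. This is only a sketch with our own function name and synthetic data; real deployments rely on stronger robust-aggregation rules such as coordinate-wise median or norm clipping.

```python
import numpy as np

def filter_suspicious_updates(updates, factor=3.0):
    """Drop client updates whose L2 norm is far above the median norm.

    An attacker who scales up a poisoned update to dominate the average
    also inflates its norm, which this simple check can catch.
    """
    norms = [np.linalg.norm(u) for u in updates]
    median = float(np.median(norms))
    # Keep updates whose norm stays within `factor` times the median.
    return [u for u, n in zip(updates, norms) if n <= factor * median]

rng = np.random.default_rng(0)
honest = [rng.normal(0.0, 0.1, size=10) for _ in range(9)]
poisoned = np.full(10, 100.0)  # an attacker's boosted update
kept = filter_suspicious_updates(honest + [poisoned])
# The oversized poisoned update is filtered out; the honest ones remain.
```

A stealthy attacker can of course keep its update's norm close to the honest ones, which is why detecting malicious clients remains an open research problem rather than a solved one.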



      Federated learning helps with the large amounts of data we have to handle. The topic is still new, and researchers have only begun to analyze how to deploy it in real-world scenarios. Since federated learning is built on decentralized techniques, AI engineers believe it can overcome the challenges of traditional machine learning models, and it has become one of the most interesting topics in AI. Reference resources are still scarce, but you can jump straight into building real-time projects all the same. Our Nexidea team's future plan is to turn these federated learning theoretical foundations into real-life applications in our already-built hybrid recommendation project and our semantic-similarity-based NLP project.

Khin Radanar Pyae Phyo

AI Research Engineer