Self-optimising networks enable AI and machine learning data transfers


By Tracy Rosensteel, Global Head of Capital Markets and Emerging Technologies at Telstra


New technologies like artificial intelligence (AI) and machine learning (ML) offer businesses unprecedented opportunities. In the finance sector, for instance, AI and ML can help firms improve everything from their retail banking experience to their trading algorithms, business analytics and fraud detection protocols.

But there’s a catch: the promise of AI and ML technologies may not be realised unless some key enabling technologies are put in place at the outset. First, firms need a network with fast, scalable, secure and rapidly deployable connectivity. Second, they need a way to coordinate big data as it flows between public and private clouds.

This need for smart connectivity stems from the special demands of artificial intelligence. Today’s ultrafast graphics processing units (GPUs) let deep learning neural networks train and run at remarkable speed. And it’s these neural networks that allow machine learning algorithms to spot patterns and make predictions from massive datasets – and even beat the world Go champion, as Google DeepMind’s AlphaGo did.

But thousands of GPU cores won’t get access to tranches of big data – perhaps many thousands of terabytes – unless they have connectivity that is versatile and robust enough to cope with the demands of a host of very different AI applications.

For instance, in a retail banking scenario, a firm might apply machine learning to datasets of customer records, information fed back from a smartphone app, or perhaps social media interactions with the bank. Each of these datasets could help the bank optimise its services. Another part of the business, meanwhile, may be training algorithms to execute share trading, manage credit risk, assist with underwriting or spot fraudulent anomalies in transactions.
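To make the fraud case concrete, the short sketch below shows one common unsupervised approach: isolating anomalous transactions with scikit-learn. The file name and column names are purely illustrative assumptions, and a real system would draw on far richer features and data pipelines.

```python
# Illustrative sketch only: unsupervised anomaly detection on transactions.
# "transactions.csv" and its column names are hypothetical placeholders,
# and all features are assumed to already be numeric.
import pandas as pd
from sklearn.ensemble import IsolationForest

transactions = pd.read_csv("transactions.csv")
features = transactions[["amount", "hour_of_day", "merchant_risk_score"]]

# An isolation forest flags points that are unusually easy to separate
# from the rest of the data - a common proxy for suspicious activity.
model = IsolationForest(contamination=0.01, random_state=42)
transactions["anomaly"] = model.fit_predict(features)

# fit_predict returns -1 for suspected anomalies; route those to analysts.
suspicious = transactions[transactions["anomaly"] == -1]
print(f"{len(suspicious)} transactions flagged for review")
```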

Such tasks generate very different network workloads, each placing its own demands on enterprise WANs (wide area networks). Not only must these networks offer the high capacity and low latency needed for near real-time data, they must also be optimisable for the task at hand and able to handle both structured and unstructured data.

But users don’t have to perform this network optimisation alone. With software-defined networking (SDN) and network function virtualisation (NFV), AI can increasingly be embedded in the network itself, drawing on network telemetry and making changes through APIs to adapt to different scenarios. This can potentially lead to self-optimising networks – enabling financial institutions to adapt to change quickly.
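A heavily simplified sketch of what such a closed loop might look like is below. The controller URL, API paths and thresholds are entirely hypothetical, standing in for whatever telemetry and policy APIs a given SDN/NFV platform actually exposes.

```python
# Hypothetical sketch of a self-optimising network control loop.
# The controller URL, API paths and thresholds are illustrative only;
# a real deployment would use the vendor's actual SDN/NFV APIs.
import time
import requests

CONTROLLER = "https://sdn-controller.example.com/api"  # hypothetical endpoint
LATENCY_BUDGET_MS = 5.0  # e.g. a near real-time market data feed

def monitor_and_adjust(path_id: str) -> None:
    """Poll telemetry for a network path and request re-optimisation
    whenever latency drifts past the budget."""
    while True:
        telemetry = requests.get(f"{CONTROLLER}/paths/{path_id}/telemetry").json()
        if telemetry["latency_ms"] > LATENCY_BUDGET_MS:
            # Ask the controller to re-optimise this path: scale up the
            # virtualised function chain or shift traffic to a better route.
            requests.post(
                f"{CONTROLLER}/paths/{path_id}/actions",
                json={"action": "reoptimise", "target_latency_ms": LATENCY_BUDGET_MS},
            )
        time.sleep(60)  # re-evaluate every minute
```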

The public and private cloud environments that AI/ML systems tend to inhabit also need a hybrid network so that data can flow between them. For instance, if the API for a bank’s customer app runs in the firm’s private cloud, but a machine learning algorithm in a public cloud needs to be trained on that app’s data, the inter-cloud processing will need careful coordination, along with attention to privacy and security.
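One way to picture that coordination is as a small export step that runs inside the private cloud before anything crosses to the public cloud. The sketch below is only that, a sketch: the field names, the salt handling and the shape of the records are assumptions, not a prescribed design.

```python
# Illustrative sketch only: pseudonymise customer app data in the private
# cloud before exporting it for public-cloud model training. Field names,
# record shapes and salt handling are hypothetical.
import hashlib
import json

SALT = b"replace-with-secret-from-a-vault"  # kept inside the private cloud

def pseudonymise(record: dict) -> dict:
    """Replace direct identifiers with salted hashes and keep only the
    features the public-cloud training job actually needs."""
    return {
        "customer_id": hashlib.sha256(SALT + record["customer_id"].encode()).hexdigest(),
        "txn_amount": record["txn_amount"],
        "txn_hour": record["txn_hour"],
        "channel": record["channel"],
    }

def export_for_training(records: list[dict]) -> bytes:
    """Serialise the sanitised records for transfer over a private link
    (not the public Internet) to the public-cloud training store."""
    cleaned = [pseudonymise(r) for r in records]
    return json.dumps(cleaned).encode("utf-8")
```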

One way to share data between the firm’s private cloud and the algorithms in its public cloud is over the public Internet. However, this is insecure and not recommended, especially for customers’ sensitive personal and financial records. There are options to bridge this gap: solutions now give customers private connectivity to leading cloud providers and data centres, as well as the ability to host secure, private clouds. These solutions also help address the security concerns associated with the public Internet.

With SDN and NFV in place, the financial sector can deploy AI/ML systems faster and more securely – giving an edge to firms venturing into the machine intelligence era.
