20 Feb 2019

Case studies on artificial intelligence

We are proud to present case studies from members that are pushing the frontier in the development and use of artificial intelligence.


LG Electronics’ Vision on Artificial Intelligence

Watch as LG’s Chief Technology Officer Dr. IP Park talks about LG’s vision for its future work with artificial intelligence.

LG Keynote speech - CES 2019

Microsoft’s AI for Accessibility

Microsoft’s AI for Accessibility is a Microsoft grant program that harnesses the power of AI to amplify human capability for the more than one billion people around the world with a disability.


Microsoft’s 2030 vision on Healthcare, Artificial Intelligence, Data and Ethics

The intersection between technology and health has been an increasing area of focus for policymakers, patient groups, ethicists and innovators. As a company, we found ourselves in the midst of many different discussions with customers in both the private and public sectors, seeking to harness technology, including cloud computing and AI, all for the end goal of improving human health. Many customers were struggling with the same questions, among them how to be responsible data stewards, how to design tools that advanced social good in ethical ways, and how to promote trust in their digital health-related products and services. […]

Healthcare, Artificial Intelligence, Data and Ethics - a 2030 Vision

Finland training & development plan

AI has been extensively discussed in Finland. The University of Helsinki and Reaktor launched a free and public course to educate 1% of the Finnish population on AI by the end of this year. They have challenged companies to train employees on AI during 2018 and many member companies of the Technology Industries of Finland association (e.g. Nokia, Kone, F-Secure) have joined and support the programme. More than 90,000 people have enrolled in these courses.


SAP – Training for boosting people’s AI skills

SAP has made available various Massive Open Online Courses (MOOCs) for both internal and external users, with goals ranging from basic knowledge and awareness building, for example programmes and courses such as ‘Enterprise Machine Learning in a Nutshell’ (see: https://open.sap.com/courses/ml1-1), to more advanced skills, for instance deep learning (see: https://open.sap.com/courses/ml2). Two-thirds of SAP’s own machine learning (ML) team is made up of people who already worked for SAP in non-ML roles and then acquired the necessary ML knowledge and skills on the job.


SAP – Addressing bias & ensuring diversity

SAP created a formal, internal and diverse AI Ethics & Society Steering Committee. The committee is creating and enforcing a set of guiding principles for SAP to address the ethical and societal challenges of AI. It comprises senior leaders from across the entire organisation, including the Human Resources, Legal, Sustainability and AI Research departments. This interdisciplinary membership helps ensure diversity of thought when considering how to address concerns around AI, e.g. those related to bias.

AI itself can also help increase diversity in the workplace and eliminate biases. SAP uses, offers and continues to develop AI-powered HR services that eliminate biases in the application process. For example, SAP’s “Bias Language Checker” (see: https://news.sap.com/2017/10/sap-introduces-intelligent-hr-solution-to-help-businesses-eliminate-bias/) helps HR identify areas where the wording of a job description lacks inclusivity and may deter a prospective applicant from submitting their application.


Who can be held liable for damages caused by autonomous systems?

AI and robotics have raised some questions regarding liability. Take for example the scenario of an ‘autonomous’ or AI-driven robot moving through a factory. Another robot unexpectedly crosses its path, and our robot swerves aside to avoid a collision. However, in doing so the robot injures a person. Who can be held liable for damages caused by autonomous systems? The manufacturer using the robots, one or both of the robot manufacturers, or one of the companies that programmed the robots’ software?

Existing approaches would likely already provide a good basis. For example, owner’s liability, as with motor vehicles, could be introduced for autonomous systems (where ‘owner’ means the person using, or having used, the system for their own purposes). The injured party should be able to file a claim for personal or property damages against the owner of the autonomous system, applying strict liability standards.


Sony – Neural Network Libraries available in open source 

Sony has made available in open source its “Neural Network Libraries”, which serve as a framework for creating deep learning programmes for AI. Software engineers and designers can use these core libraries free of charge to develop deep learning programmes and incorporate them into their products and services. This shift to open source is also intended to enable the development community to further build on the core libraries’ programmes.

Deep learning refers to a form of machine learning that uses neural networks modelled after the human brain. Thanks to the switch to deep learning-based machine learning, the past few years have seen a rapid improvement in image and voice recognition technologies, which even outperform humans in certain areas. Compared to conventional forms of machine learning, deep learning is especially notable for its high versatility, with applications covering a wide variety of fields besides image and voice recognition, including machine translation, signal processing and robotics. As proposals are made to expand the scope of deep learning to fields where machine learning has not traditionally been used, there has been an accompanying surge in the number of deep learning developers.

Neural network design is very important for deep learning programme development. Programmers construct the neural network best suited to the task at hand, such as image or voice recognition, and load it into a product or service after optimising the network’s performance through a series of trials. The software contained in these core libraries efficiently facilitates all the above-mentioned development processes.
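
By way of illustration, the sketch below shows what this development process can look like in code, assuming the open-source Python package of the Neural Network Libraries (nnabla): a small image-classification network is defined layer by layer, a loss and solver are attached, and the network is trained step by step. The network shape, layer names and hyperparameters are illustrative only.

# Minimal sketch of defining and training a small image classifier with the
# open-source Neural Network Libraries (Python package 'nnabla').
# Network shape, layer names and hyperparameters are illustrative only.
import nnabla as nn
import nnabla.functions as F
import nnabla.parametric_functions as PF
import nnabla.solvers as S

batch_size = 32
x = nn.Variable((batch_size, 1, 28, 28))   # input images
t = nn.Variable((batch_size, 1))           # integer class labels

# Define the computation graph: two convolution blocks followed by two affine layers.
h = F.max_pooling(F.relu(PF.convolution(x, 16, (3, 3), name='conv1')), (2, 2))
h = F.max_pooling(F.relu(PF.convolution(h, 32, (3, 3), name='conv2')), (2, 2))
h = F.relu(PF.affine(h, 64, name='fc1'))
y = PF.affine(h, 10, name='fc2')
loss = F.mean(F.softmax_cross_entropy(y, t))

# Optimiser over all parameters created by the parametric functions above.
solver = S.Adam(alpha=1e-3)
solver.set_parameters(nn.get_parameters())

def train_step(images, labels):
    """One forward/backward/update pass on a mini-batch of NumPy arrays."""
    x.d, t.d = images, labels
    loss.forward()
    solver.zero_grad()
    loss.backward()
    solver.update()
    return float(loss.d)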


Cisco – Reinventing the network & making security foundational

Cisco is reinventing networking with its ‘network intuitive’ approach. Cisco employs machine learning (ML) to analyse huge amounts of network data and understand anomalies as well as optimal network configurations. Ultimately, Cisco will enable an intent-based, self-driving and self-healing network. The network will redirect traffic on its own and heal itself from internal shocks, such as device malfunctions, and external shocks, such as cyberattacks.

To simplify wide area network (WAN) deployments and improve performance, ML software observes configuration, telemetry and traffic patterns and recommends optimisation and security measures via a centralised management application. Machine learning plays a role in analysing network data to identify activity indicative of threats such as ransomware, cryptomining and advanced persistent threats within encrypted traffic flows.

Moreover, to help safeguard organisations in a constantly changing threat landscape, Cisco is using AI and ML to support comprehensive, automated and coordinated responses between the various security components. For businesses in a multi-cloud environment, cloud access is secured by leveraging machine intelligence to uncover malicious domains, IPs and URLs before they are even used in attacks. Once a malicious agent is discovered on one network, it is blacklisted across all customer networks. Machine learning is also used to detect anomalies in IT environments in order to safeguard the use of SaaS applications by adaptively learning user behaviour. Infrastructure-as-a-Service instances are likewise safeguarded by using machine learning to discover advanced threats and malicious communications.
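
To make the anomaly-detection idea concrete, here is a generic, minimal sketch, not Cisco’s implementation, of training an Isolation Forest on a baseline of ‘normal’ flow telemetry and flagging live flows that deviate from it. The feature columns and file names are hypothetical.

# Generic sketch (not Cisco's implementation) of flagging anomalous network
# flows from telemetry features, using an Isolation Forest trained on a
# baseline of "normal" traffic. Feature columns and file names are hypothetical.
import numpy as np
from sklearn.ensemble import IsolationForest

# Each row: [bytes_sent, bytes_received, duration_s, packets, distinct_ports]
baseline_flows = np.loadtxt("baseline_flows.csv", delimiter=",")   # assumed file
live_flows = np.loadtxt("live_flows.csv", delimiter=",")           # assumed file

# Fit on baseline traffic; contamination is the expected fraction of outliers.
detector = IsolationForest(n_estimators=200, contamination=0.01, random_state=0)
detector.fit(baseline_flows)

# predict() returns +1 for inliers and -1 for anomalies.
labels = detector.predict(live_flows)
anomalous = np.where(labels == -1)[0]
print(f"{len(anomalous)} of {len(live_flows)} flows flagged for review")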


Intel – AI for cancer treatment

Precision medicine for cancer requires the delivery of individually adapted medical care based on the genetic characteristics of each patient. The last decade witnessed the development of high-throughput technologies such as next-generation sequencing, which have made their way into the field of oncology. While the cost of these technologies is decreasing, we are facing an exponential increase in the amount of data produced. In order to open access to precision medicine-based therapies to more and more patients, healthcare providers have to rationalise both their data production and utilisation, and this requires the implementation of the cutting-edge technologies of high-performance computing and artificial intelligence.

Before taking a therapeutic decision based on the genome interpretation of a cancer, the physician can be presented with an overwhelming number of gene variants. In order to identify key actionable variants that can be targeted by treatments, the physician needs tools to sift through this large volume of variants. While the use of AI in genome interpretation is still nascent, it is growing rapidly, acting as a filter to dramatically reduce the number of variants and providing invaluable help to the physician. Mastering high-performance computing methods on modern hardware infrastructure is becoming a key factor in making the cancer genome interpretation process efficient, cost-effective and adjustable over time.
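
As a purely hypothetical illustration of that filtering step, the sketch below ranks called variants by a model-predicted actionability score and a population-frequency cut-off to produce a short review list for the physician. The field names, scores and thresholds are invented and do not describe the Institut Curie / Intel pipeline.

# Hypothetical illustration of AI-assisted variant triage: keep only rare
# variants whose model-predicted actionability score clears a threshold.
# Field names, scores and thresholds are invented for illustration.
from typing import Dict, List

def shortlist_variants(variants: List[Dict],
                       max_population_freq: float = 0.01,
                       min_actionability: float = 0.8,
                       top_k: int = 50) -> List[Dict]:
    """Reduce thousands of called variants to a short, ranked review list."""
    candidates = [
        v for v in variants
        if v["population_freq"] <= max_population_freq     # rare in the population
        and v["actionability_score"] >= min_actionability  # model-predicted relevance
    ]
    candidates.sort(key=lambda v: v["actionability_score"], reverse=True)
    return candidates[:top_k]

variants = [
    {"gene": "BRAF", "population_freq": 0.0001, "actionability_score": 0.97},
    {"gene": "TTN",  "population_freq": 0.12,   "actionability_score": 0.40},
]
for v in shortlist_variants(variants):
    print(v["gene"], v["actionability_score"])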

The pioneering collaboration between the Institut Curie bioinformatics platform and Intel aims to answer those challenges by defining a leading model in France and Europe. The collaboration will grant Institut Curie access to Intel experts for defining a high-performance computing and artificial intelligence infrastructure and ensuring its optimisation, in order to implement Intel Genomics ecosystem partner solutions and best practices, for example optimisation of the Broad Institute’s cancer genomics pipeline. Also anticipated is the development of additional tailored tools needed to integrate and analyse heterogeneous biomedical data.


MSD – AI for healthcare professionals

MSD has launched, as part of its MSD Salute programme in Italy, a chatbot for physicians powered by AI and machine learning. It has already achieved large uptake among healthcare professionals in Italy. The programme’s sector of focus is immuno-oncology.

From the MSD perspective, physicians are digital consumers looking for information relevant to their professional activity. Key factors such as the increase in media availability and mobile device penetration, combined with a decrease in available time, are reducing the time spent navigating and searching the web. As a result, users (and physicians, with their pragmatic approach) read what they see and do not browse extensively, but just ‘read and go’.
This means that there is an urgent need to access content quickly, easily and efficiently.

The chatbot is developed in partnership with Facebook and runs on its Messenger app framework. As an easy and practical tool, it helps to establish a conversational relationship with its users. The MSD Italy ChatBot service is available only to registered physicians. Integration with Siri and other voice recognition systems is also being worked on, to improve the human experience during interaction with the chatbot. This initiative is a key item in MSD Italy’s digital strategy, which focuses on new channels and touchpoints with healthcare professionals, leveraging new technologies.
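
For illustration only, a chatbot of this kind can be wired to Messenger roughly as sketched below: a webhook receives the physician’s message and replies through the Send API. The verification flow and message format follow Facebook’s public Messenger Platform conventions; the token handling, content lookup and physician registration check are placeholders, not MSD’s implementation.

# Minimal sketch of a Messenger-style chatbot webhook in Flask.
# Token values, the answer lookup and the physician check are placeholders.
import os
import requests
from flask import Flask, request

app = Flask(__name__)
VERIFY_TOKEN = os.environ["VERIFY_TOKEN"]            # assumed configuration
PAGE_ACCESS_TOKEN = os.environ["PAGE_ACCESS_TOKEN"]  # assumed configuration
SEND_API = "https://graph.facebook.com/v2.6/me/messages"

def answer_for(question: str) -> str:
    # Placeholder for the medical-content lookup behind the real service.
    return "Thank you for your question. Here is the related immuno-oncology content ..."

@app.route("/webhook", methods=["GET"])
def verify():
    # Messenger sends a one-time GET to verify the webhook endpoint.
    if request.args.get("hub.verify_token") == VERIFY_TOKEN:
        return request.args.get("hub.challenge"), 200
    return "Verification failed", 403

@app.route("/webhook", methods=["POST"])
def receive():
    payload = request.get_json()
    for entry in payload.get("entry", []):
        for event in entry.get("messaging", []):
            if "message" in event and "text" in event["message"]:
                sender_id = event["sender"]["id"]
                reply = answer_for(event["message"]["text"])
                requests.post(
                    SEND_API,
                    params={"access_token": PAGE_ACCESS_TOKEN},
                    json={"recipient": {"id": sender_id},
                          "message": {"text": reply}},
                )
    return "ok", 200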


Philips – AI in clinics and hospitals

With the clinical introduction of digital pathology, pioneered by Philips, it has become possible to implement more efficient pathology diagnostic workflows. This can help pathologists to streamline diagnostic processes, connect a team, even remotely, to enhance competencies and maximise use of resources, unify patient data for informed decision-making, and gain new insights by turning data into knowledge. Philips is working with PathAI to build deep learning applications. By analysing massive pathology data sets, we are developing algorithms aimed at supporting the detection of specific types of cancer and informing treatment decisions.

Further, AI and machine learning for adaptive intelligence can also support quick action to address patient needs at the bedside. Manual patient health audits used to be time-consuming, putting a strain on general ward staff. Nurses need to juggle a range of responsibilities, from quality of care to compliance with hospital standards. Information about the patient’s health was scattered across various records, making it even harder for nurses to focus their attention and take the right actions. Philips monitoring and notification systems assist nurses in detecting a patient’s deterioration much more quickly. All patient vital signs are automatically captured in one place to provide an Early Warning Score (EWS).
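
To illustrate the idea of an early warning score in general terms, the sketch below aggregates a few vital signs into a single score and escalates above a threshold. The bands, point values and threshold are invented for illustration and are not a clinical scale or Philips’ product logic.

# Simplified illustration of aggregating vital signs into an early warning
# score. Bands, point values and the escalation threshold are invented for
# illustration and are NOT a clinical scale or Philips' product logic.
def band_score(value: float, bands: list[tuple[float, float, int]]) -> int:
    """Return the points of the first (low, high, points) band containing value."""
    for low, high, points in bands:
        if low <= value <= high:
            return points
    return 3  # outside all bands: highest concern

def early_warning_score(heart_rate: float, resp_rate: float, spo2: float,
                        temperature: float) -> int:
    score = 0
    score += band_score(heart_rate, [(51, 90, 0), (91, 110, 1), (111, 130, 2)])
    score += band_score(resp_rate, [(12, 20, 0), (9, 11, 1), (21, 24, 1)])
    score += band_score(spo2, [(96, 100, 0), (94, 95, 1), (92, 93, 2)])
    score += band_score(temperature, [(36.1, 38.0, 0), (38.1, 39.0, 1)])
    return score

vitals = {"heart_rate": 118, "resp_rate": 23, "spo2": 93, "temperature": 38.4}
ews = early_warning_score(**vitals)
if ews >= 5:   # illustrative escalation threshold
    print(f"EWS={ews}: notify the rapid response team")
else:
    print(f"EWS={ews}: continue routine monitoring")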


Microsoft – Machine learning for tumour detection and genome research

Microsoft’s Project InnerEye developed machine learning techniques for the automatic delineation of tumours as well as healthy anatomy in 3D radiological images. This technology helps to enable fast radiotherapy planning and precise surgery planning and navigation. Project InnerEye builds upon many years of research in computer vision and machine learning. The software learned how to mark up organs and tumours by training on a robust data set of images from patients that had been seen by experienced consultants.

The current process of marking organs and tumours on radiological images is done by medical practitioners and is very time consuming and expensive. Further, the process is a bottleneck to treatment – the tumour and healthy tissues must be delineated before treatment can begin. The InnerEye technology performs this task much more quickly than when done by hand by clinicians, reducing burdens on personnel and speeding up treatment.

The technology, however, does not replace the expertise of medical practitioners; it is designed to assist them and reduce the time needed for the task. The delineation provided by the technology is designed to be readily refined and adjusted by expert clinicians until they are completely satisfied with the results. Doctors maintain full control of the results at all times.
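
As an illustration of how an automatic delineation can be compared against the clinician-refined version, the sketch below computes the Dice overlap coefficient, a standard segmentation-quality metric, over two 3D voxel masks. This is a generic example and not part of the InnerEye software itself.

# Small sketch of comparing an automatic 3D delineation with a
# clinician-refined one using the Dice overlap coefficient.
import numpy as np

def dice_coefficient(auto_mask: np.ndarray, refined_mask: np.ndarray) -> float:
    """Dice = 2*|A ∩ B| / (|A| + |B|) over boolean 3D voxel masks."""
    a = auto_mask.astype(bool)
    b = refined_mask.astype(bool)
    denom = a.sum() + b.sum()
    if denom == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * np.logical_and(a, b).sum() / denom

# Toy 3D volumes standing in for segmentation masks on a CT/MR scan.
auto = np.zeros((64, 64, 64), dtype=bool)
refined = np.zeros((64, 64, 64), dtype=bool)
auto[20:40, 20:40, 20:40] = True
refined[22:40, 20:40, 20:40] = True   # clinician trimmed two slices

print(f"Dice overlap after clinician review: {dice_coefficient(auto, refined):.3f}")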

Further, Microsoft has partnered with St. Jude Children’s Research Hospital and DNANexus to develop a genomics platform that provides a database to enable researchers to identify how genomes differ. Researchers can inspect the data by disease, publication, gene mutation and also upload and test their own data using the bioinformatics tools. Researchers can progress their projects much faster and more cost-efficiently because the data and analysis run in the cloud, powered by rapid computing capabilities that do not require downloading anything.


Siemens – AI for Industry, Power Grids and Rail Systems

Siemens has been using smart boxes to bring older motors and transmissions into the digital age. These boxes contain sensors and communication interfaces for data transfer. By analysing the data, AI systems can draw conclusions regarding a machine’s condition and detect irregularities in order to make predictive maintenance possible.
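
One simple way such irregularity detection can work, sketched below purely for illustration and not as Siemens’ implementation, is a rolling z-score over a motor’s temperature telemetry: readings that deviate strongly from recent behaviour are flagged as candidates for predictive maintenance. The signal name, window and threshold are illustrative.

# Illustrative sketch (not Siemens' implementation): flag irregular motor
# telemetry with a rolling z-score so that maintenance can be scheduled early.
import pandas as pd

def flag_irregularities(temps: pd.Series, window: int = 60,
                        z_threshold: float = 4.0) -> pd.Series:
    """Return a boolean series marking samples whose rolling z-score is extreme."""
    rolling_mean = temps.rolling(window, min_periods=window).mean()
    rolling_std = temps.rolling(window, min_periods=window).std()
    z = (temps - rolling_mean) / rolling_std
    return z.abs() > z_threshold

telemetry = pd.read_csv("motor_temperature.csv", parse_dates=["timestamp"])  # assumed file
alerts = flag_irregularities(telemetry["temperature_c"])
print(telemetry.loc[alerts, ["timestamp", "temperature_c"]])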

AI is also used beyond industrial settings, for example to improve the reliability of power grids by making them smarter and providing the devices that control and monitor electrical networks with AI. This enables the devices to classify and localise disruptions in the grid. A special feature of this system is that the associated calculations are not performed centrally at a data centre, but decentrally, between the interlinked protection devices.

In cooperation with Deutsche Bahn, Siemens is running a pilot project for the predictive maintenance and repair of high-speed trains. Data analysts and software recognise patterns and trends from the vehicles’ operating data. Moreover, AI helps build optimised control centres for switch towers. From the billions of possible hardware configurations for a switch tower, the software selects options that fulfil all the requirements, including those regarding reliable operation.


Schneider Electric – AI for industry applications

Schneider Electric has used AI and machine learning in various sectors. In the oil and gas industry, for example, machine learning is steering the operation of Realift rod pump control to monitor and configure pump settings and operations remotely, sending personnel on site only when necessary for repair or maintenance, that is, when Realift indicates that something has gone wrong. Anomalies in temperature and pressure, for instance, can flag potential problems, even issues brewing a mile below the surface. Intelligent edge devices can run analytics locally without having to tap the cloud, a huge deal for expensive, remote assets such as oil pumps.

To enable this solution, an AI model is first trained to recognise correct pump operation as well as the different types of failure a pump can experience. The model is then deployed on a gateway at the oil field for each pump and is fed with data collected at each pump stroke, from which it outputs a prediction of the pump’s state. Because the model mimics expert diagnostics, its predictions can be easily validated, explained and interpreted.
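
A simplified sketch of this classification step is shown below: a model trained on labelled stroke features predicts the pump state for each new stroke at the edge gateway. The feature names, labels and model choice are illustrative and do not describe the Realift product.

# Simplified sketch of classifying pump state from per-stroke features.
# Feature names, labels and the model choice are illustrative.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Each row: features derived from one pump stroke.
# Columns: [peak_load, min_load, stroke_period_s, fill_ratio]
X_train = np.load("stroke_features.npy")   # assumed labelled training data
y_train = np.load("stroke_labels.npy")     # e.g. "normal", "gas_interference", "rod_wear"

model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_train, y_train)

def predict_pump_state(stroke_features: np.ndarray) -> str:
    """Run at the edge gateway on every stroke; returns the predicted state."""
    return model.predict(stroke_features.reshape(1, -1))[0]

print(predict_pump_state(np.array([41.2, 12.7, 6.4, 0.55])))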


Schneider Electric – Improving agriculture and farming with AI

Another example is in the agriculture sector, where Schneider Electric has proposed an AI solution for Waterforce, an irrigation solutions builder and water management company in New Zealand. Schneider Electric’s solution makes water use more efficient and effective, saving up to 50% in energy costs, and provides remote monitoring capabilities that reduce the time farmers have to spend driving to inspect assets. The solution collects data from weather forecasts, pump pressures, temperatures, water levels and ground humidity, then cleans, selects and prepares the data in order to offer services such as fault diagnosis, performance benchmarking, and recommendations and advice on operations.
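
A small sketch of the data collection and preparation step might look as follows: pump telemetry is merged with weather data on timestamp and implausible readings are dropped before the analytics services run. The column names and plausibility ranges are illustrative, not Waterforce’s or Schneider Electric’s actual schema.

# Illustrative data-preparation step: merge pump telemetry with weather data
# and drop implausible readings before fault diagnosis and benchmarking.
import pandas as pd

pumps = pd.read_csv("pump_telemetry.csv", parse_dates=["timestamp"])      # assumed file
weather = pd.read_csv("weather_forecast.csv", parse_dates=["timestamp"])  # assumed file

merged = pd.merge_asof(pumps.sort_values("timestamp"),
                       weather.sort_values("timestamp"),
                       on="timestamp", direction="nearest")

# Keep only physically plausible readings before feeding the analytics services.
clean = merged[
    merged["pump_pressure_bar"].between(0, 25)
    & merged["soil_humidity_pct"].between(0, 100)
].dropna()

print(f"{len(clean)} of {len(merged)} records kept for fault diagnosis and benchmarking")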

AI and machine learning therefore represent a new way for humans and machines to work together, to learn about predictive tendencies and to solve complex problems. In the above examples, managing a process that requires tight control of temperatures, pressures and liquid flows is complex and prone to error. Many variables need to be factored in to achieve a successful outcome, and the quality of the data that trains the AI algorithms can lead to very different results, which human experts still need to interpret and act upon. With the support of AI in making better operational decisions, critical factors such as safety, security, efficiency, productivity and even profitability can be optimised jointly by the machine or process and the operator. In this way, the combination of AI and human expertise is a key success factor in delivering these values to industry.


Canon – Application of automation in the office environment

Canon’s digital mailroom solution has been at the forefront of Robotic Process Automation (RPA) since it was first launched. A digital mailroom allows all incoming mail to be automatically captured, identified, validated and sent with relevant index data to the right systems or people. RPA technology is centred on removing the mundane to make lives easier. In the purchase-to-pay (P2P) world, RPA automates labour-intensive activities that require accessing multiple systems or that need to be audited for compliance.

Canon believes the next step in automation is the intelligent mailroom. The key challenge of the future will be the integration of digital and paper-based information into robust, effective and efficient processes. This means that organisations need more intelligent digital mailroom solutions that enable data capture across every channel. One example of an intelligent mailroom is Multichannel Advanced Capture. This allows banks to let customers apply for an account with a minimum of paper, using a mobile-friendly web page that captures the core details required. Automated checks on customers’ ID and credit history are made first. If all initial checks are valid, a second, human check can be made. The bank is then presented with all the information required to make an informed decision on the application to open the bank account, based on applicable business rules as well as on (automatically) gathered historical business process knowledge.
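
Purely as an illustration of the rule-based decision step that follows automated capture and checks, the sketch below routes an application according to a few business rules. The rules, thresholds and field names are invented and are not Canon’s product logic.

# Illustrative rule-based routing of a captured account application.
# Rules, thresholds and field names are invented for illustration.
from dataclasses import dataclass

@dataclass
class Application:
    applicant_id: str
    id_check_passed: bool     # result of the automated identity verification
    credit_score: int         # result of the automated credit history check
    documents_complete: bool  # all required captured documents present

def route_application(app: Application) -> str:
    if not app.documents_complete:
        return "return_to_applicant"   # ask for the missing documents
    if not app.id_check_passed:
        return "manual_review"         # second, human check
    if app.credit_score < 500:         # illustrative business rule
        return "manual_review"
    return "approve_for_account_opening"

print(route_application(Application("A-1042", True, 640, True)))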


SAS – Crowdsourcing and analysing data for endangered wildlife

The WildTrack Footprint Identification Technique (FIT) is a tool developed in partnership with SAS for non-invasive monitoring of endangered species through digital images of footprints. Measurements from these images are analysed by customised mathematical models that help to identify the species, individual, sex and age-class. AI could add the ability to adapt through progressive learning algorithms and tell an even more complete story.

Ordinary people would not necessarily be able to dart a rhino, but they can take an image of a footprint. WildTrack therefore has data coming in from everywhere. As this represents too much information to manage manually, AI can automate repetitive learning from the data, performing frequent, high-volume computerised tasks reliably and without fatigue.
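
As a simplified stand-in for FIT’s customised models, the sketch below classifies footprint measurements with linear discriminant analysis to suggest the most likely individual. The measurement columns and data files are illustrative.

# Simplified stand-in for FIT's customised models: classify footprint
# measurements with linear discriminant analysis. Columns and files are illustrative.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

# Each row: geometric measurements from one footprint image,
# e.g. [length_cm, width_cm, toe_spread_cm, heel_angle_deg]
X_train = np.load("footprint_measurements.npy")   # assumed labelled data
y_train = np.load("footprint_individuals.npy")    # known individual IDs

model = LinearDiscriminantAnalysis()
model.fit(X_train, y_train)

new_print = np.array([[24.3, 22.1, 6.8, 41.0]])
probabilities = model.predict_proba(new_print)[0]
best = model.classes_[probabilities.argmax()]
print(f"Most likely individual: {best} (p={probabilities.max():.2f})")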


SAS – Using AI for real-time sports analytics

AI can also be used to analyse sports and football data. For example, SciSports models on-field movements using machine learning algorithms, which by nature improve at performing a task as they gain more experience. The platform works by automatically assigning a value to each action, such as a corner kick. Over time, these values change based on their success rate. A goal, for example, has a high value, but a contributing action, which may previously have had a low value, can become more valuable as the platform masters the game.
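
A toy sketch of this valuation idea is shown below: each action type carries a value derived from how often it has contributed to a goal, and the value is updated as new match events are observed. The update rule is illustrative and is not SciSports’ actual model.

# Toy sketch of action valuation: each action type's value is the empirical
# rate at which it contributes to a goal. The update rule is illustrative.
from collections import defaultdict

counts = defaultdict(lambda: {"occurrences": 0, "led_to_goal": 0})

def record_action(action_type: str, led_to_goal: bool) -> None:
    counts[action_type]["occurrences"] += 1
    counts[action_type]["led_to_goal"] += int(led_to_goal)

def action_value(action_type: str) -> float:
    """Empirical probability that this action type contributes to a goal."""
    c = counts[action_type]
    if c["occurrences"] == 0:
        return 0.0
    return c["led_to_goal"] / c["occurrences"]

# Feed in observed match events (toy data).
for led_to_goal in [False, False, True, False]:
    record_action("corner_kick", led_to_goal)
record_action("through_ball", True)

print("corner_kick value:", round(action_value("corner_kick"), 2))
print("through_ball value:", round(action_value("through_ball"), 2))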

AI and machine learning will play an important role in the future of SciSports and football analytics in general. Existing mathematical models shape existing knowledge and insights in football, while AI and machine learning will make it possible to discover new connections that people would not make themselves.

Various other tools such as SAS Event Stream Processing and SAS Viya can then be utilised for real-time image recognition, with deep learning models, to distinguish between players, referees and the ball. The ability to deploy deep learning models in memory onto cameras and then do the inferencing in real time is cutting-edge science.


Google & TNO – AI for data analysis on traffic safety

TNO is one of the partners of InDeV, an international collaboration of researchers created to develop new ways of measuring traffic safety. Statistics about traffic safety have been unreliable, insufficiently detailed and hard to collect. Researchers often resort to filming busy intersections and manually reviewing the recordings, which is a time-intensive and expensive process. A single intersection needs to be monitored for three weeks with two cameras to create an estimation of its safety, adding up to six weeks of footage, which can take six weeks of work to analyse. Typically, less than one percent of the recorded material is actually of interest to researchers. The job of TNO is to apply machine learning to video of accident-prone hot spots to rate intersections on a scale according to their safety. With TNO’s neural network based on TensorFlow, researchers report that it takes only one hour to review footage that would previously have taken a week to inspect.
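
For illustration, pre-filtering footage with a trained classifier can look roughly like the sketch below, which runs a TensorFlow/Keras model over sampled video frames and keeps only those flagged as potential conflicts for human review. The model file, input size and threshold are assumptions, not TNO’s actual network.

# Illustrative pre-filtering of traffic-camera footage with a trained
# TensorFlow/Keras classifier. Model file, input size and threshold are assumptions.
import cv2            # OpenCV for video decoding (assumed available)
import numpy as np
import tensorflow as tf

model = tf.keras.models.load_model("conflict_classifier.h5")  # assumed trained model
THRESHOLD = 0.5   # probability above which a frame is kept for human review

def frames_of_interest(video_path: str, every_nth: int = 5):
    """Yield (frame_index, score) for frames the model flags as interesting."""
    capture = cv2.VideoCapture(video_path)
    index = 0
    while True:
        ok, frame = capture.read()
        if not ok:
            break
        if index % every_nth == 0:
            resized = cv2.resize(frame, (224, 224)) / 255.0    # assumed input size
            score = float(model.predict(resized[np.newaxis], verbose=0)[0][0])
            if score > THRESHOLD:
                yield index, score
        index += 1
    capture.release()

for frame_index, score in frames_of_interest("intersection_week1.mp4"):
    print(f"frame {frame_index}: conflict probability {score:.2f}")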
