Microsoft’s Cognitive Toolkit 2.0 (CNTK) just hit a major milestone. The artificial intelligence (AI) toolkit is now out of beta and a step closer to an official launch with this week’s availability of its first release candidate.
Cognitive Toolkit 2.0 (CNTK), formerly known as the Computational Network Toolkit, is Microsoft’s free, open-source deep learning system. Deep learning, an offshoot of machine learning, is used for tasks such as image and speech recognition and for improving search relevance, and CNTK can run those workloads on conventional CPUs or on Nvidia graphics processing units (GPUs).
Since the beta release in October 2016, Microsoft has added over 100 features, enhancements and bug fixes, which have been rolled up into the first release candidate (RC1).
The scalable system can tackle datasets small enough for a laptop or large enough to require a multi-server configuration in a data center. CNTK runs both on-premises and in Microsoft’s cloud, on Azure virtual machines that tap the power of GPUs to speed up AI processing.
Microsoft has been using the technology internally to handle massive AI workloads, including the company’s chatbots, speech recognition systems and the Cortana virtual assistant.
CNTK 2.0 RC1 supports both Windows and Linux and includes performance improvements and a smaller memory footprint, allowing it to run more efficiently. CNTK is also now available as a Docker image, letting users run it as a Docker container on Linux systems.
The company also added a dash of automation to CNTK’s installation process, wrote Chris Basoglu, partner engineering manager of the AI and Research group at Microsoft, in an April 3 blog post. Options include new installation scripts targeted at users who are comfortable working with CNTK source code.
According to the latest release notes, CNTK includes updated APIs (application programming interfaces) and new model debugging functions for Python. CNTK 2.0 RC1 documentation and downloads are available on GitHub.
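To give a sense of what that Python API looks like, here is a minimal sketch of defining and training a tiny classifier with CNTK’s Python bindings. It assumes the CNTK 2.0-era function names (input_variable, layers.Dense, Trainer and so on), which shifted slightly across the prerelease builds, and the toy data points are made up purely for illustration.

```python
import numpy as np
import cntk as C

# Two-dimensional input features and a two-class one-hot label.
features = C.input_variable(2)
label = C.input_variable(2)

# A single dense layer acting as a linear classifier.
model = C.layers.Dense(2)(features)

# Softmax cross-entropy loss and a classification-error metric.
loss = C.cross_entropy_with_softmax(model, label)
metric = C.classification_error(model, label)

# Plain SGD learner with a fixed per-minibatch learning rate.
lr = C.learning_rate_schedule(0.1, C.UnitType.minibatch)
trainer = C.Trainer(model, (loss, metric), [C.sgd(model.parameters, lr)])

# Made-up toy minibatch: points near (0,0) are class 0, points near (1,1) are class 1.
x = np.array([[0.1, 0.2], [0.9, 1.1], [0.0, 0.1], [1.2, 0.8]], dtype=np.float32)
y = np.array([[1, 0], [0, 1], [1, 0], [0, 1]], dtype=np.float32)

for _ in range(100):
    trainer.train_minibatch({features: x, label: y})

print("final training loss:", trainer.previous_minibatch_loss_average)
```

The same script runs unchanged on a CPU or on an Nvidia GPU build of CNTK, which is the portability the toolkit’s documentation emphasizes.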
Deep learning is becoming a competitive battleground for today’s tech titans.
Last month, during its Google Cloud Next ’17 conference in San Francisco, Google announced it had acquired Kaggle, a platform used by hundreds of thousands of data scientists and machine learning experts to build machine learning models using public datasets.
Apart from Kaggle’s practical contributions to Google’s growing AI ecosystem, the buy is also indicative of the growing importance of deep learning to engineers and data scientists in today’s technology market.
“Getting access to this talented community is a major advantage, as the engineers make decisions around which platforms and frameworks to use when building their products,” remarked Mikhail Naumov, co-founder of AI specialist DigitalGenius, in a statement addressing the deal. “Clearly, adoption of Google’s TensorFlow and Google Cloud Platform will grow as a result of this acquisition,” he said.
TensorFlow is Google’s scalable open-source machine learning technology used in its Smart Reply feature in Gmail, Google Translate and other intelligent services across the company’s portfolio. The search giant open-sourced the technology in 2015 and released a distributed computing version last year.