IT plays a critical role in setting up companies for success in artificial intelligence. Learning from early adopters’ best practices can help enterprises sidestep common pitfalls when starting new AI projects.
A few predictable issues are often at play when new AI initiatives stall out. The most common challenges include snags that delay projects from getting started, a lack of the right AI infrastructure and tools, workflow bottlenecks that stifle data scientist productivity, and a failure to control costs.
Companies seeing the most value from AI have adopted a set of best practices spanning systems, software, and trusted advisors. These lessons can speed AI deployments across a broad range of use cases, such as computer vision to enhance safety and improve manufacturing uptime through predictive maintenance, recommender systems to help grow sales, and conversational AI services to boost customer satisfaction.
Here are four things innovators are doing to succeed and boost the bottom line impact of AI.
1) Don’t reinvent the wheel: Use proven tools to save developer cycles and kickstart projects
AI model prototyping, development, and testing can be very time- and resource-intensive. Starting from scratch when building a new model can add months to the project timeline. Leveraging proven tools can enhance productivity while speeding ROI.
Ready-made AI software, including pretrained models and scripts for popular use cases such as speech recognition, computer vision, and recommender systems, reduces the amount of software engineering required, so projects can be ready for production faster.
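As one illustration of the time savings, a pretrained model can be put to work in a few lines of code. The sketch below assumes the open-source Hugging Face Transformers library; the model name and audio file are placeholders, not a specific vendor recommendation.

```python
# Minimal sketch: reuse a pretrained speech recognition model instead of
# training one from scratch. Assumes the Hugging Face Transformers library
# is installed (pip install transformers); model and file are placeholders.
from transformers import pipeline

# Load a ready-made automatic speech recognition model; no training needed.
asr = pipeline("automatic-speech-recognition", model="openai/whisper-small")

# Transcribe an audio file directly.
result = asr("customer_call.wav")
print(result["text"])
```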
Additionally, purpose-built AI infrastructure ensures that IT can offer the resources needed for AI development and support the unique demands of AI workloads. Unlike legacy infrastructure, it strikes an optimal balance of compute, networking, and storage to speed AI model training, while ensuring data scientists don’t waste valuable cycles moonlighting as systems integrators, software engineers, and tech support.
2) Tap proven expertise and platforms that can grow with you
AI systems are built by training increasingly complex models on large datasets that tend to grow exponentially. This means that enterprise AI requires powerful infrastructure to deliver the fastest model training and real-time inference once AI is running in production applications. To ensure AI-infused businesses can grow, IT needs a path to scale – and expert assistance along the way.
While AI development requires advanced computing infrastructure, not every organization has access to an AI-ready data center or the facilities to support scaled infrastructure. There are now many options that help enterprises test projects before making a big commitment, as well as partners who can offer permanent infrastructure hosting to power your enterprise.
Colocation providers that are certified to run AI infrastructure are ideal for organizations that don’t have an AI-ready data center of their own. Some even offer systems on a rental basis, letting companies experience high-performance AI development infrastructure before making a big investment.
Expertise is also essential, especially as questions arise related to use cases, models, frameworks, libraries, and more. Having direct access to experts in full-stack AI can ensure the fastest path to getting answers that keep your project moving forward.
Qualified AI companies and solution delivery partners can help enterprises right-size their system requirements to get started. Look for vendors who work with other trusted technology providers to make sure your needs will be met across the entire spectrum of high-performance computing, networking, and storage.
3) Own the base, rent the spike to avoid blowing the budget
Given that AI is powered by data, it’s critical to consider where that data is stored when developing your platform and infrastructure strategy. Not only are large amounts of data the fuel for AI model development; the process of model training and retraining never truly ends, since production models can drift and lose accuracy over time (a minimal sketch of this retraining loop follows below). IT teams therefore need to consider the data pipeline and the time and effort continually spent moving large datasets from where they’re created to where compute resources reside.
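To make the retraining loop concrete, here is a minimal sketch of drift monitoring; the accuracy floor, the weekly measurements, and the function name are hypothetical illustrations, not part of any specific tool.

```python
# Minimal sketch of why training never truly ends: watch a production
# model's measured accuracy and flag it for retraining once it drifts
# below a floor. All names and numbers here are hypothetical.

ACCURACY_FLOOR = 0.90  # minimum acceptable production accuracy


def needs_retraining(recent_accuracy: float) -> bool:
    """Return True when measured accuracy has drifted below the floor."""
    return recent_accuracy < ACCURACY_FLOOR


# Simulated weekly accuracy measurements from a model in production.
weekly_accuracy = [0.95, 0.94, 0.92, 0.89]

for week, accuracy in enumerate(weekly_accuracy, start=1):
    status = "schedule retraining" if needs_retraining(accuracy) else "OK"
    print(f"Week {week}: accuracy {accuracy:.2f} -> {status}")
```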
Data gravity — data’s ability to attract additional applications and services — comes into play here. As models become more complex and data scientists iterate more on their models, enterprises hit an inflection point where moving data around starts to significantly drive up costs. This is especially true if the organization is cloud-first or cloud-only in its approach. Organizations can keep costs in check by training where their data lives to achieve the lowest cost-per-training run.
When the need arises, such as when the development cycle moves from productive experimentation into scaled, ongoing training, a hybrid model that can straddle both cloud and on-premises resources may make sense. In hybrid architectures, an organization will size its own on-prem infrastructure according to the steady-state demand from the business, and additionally procure cloud resources to support temporary demands that exceed that capacity.
This “own the base, rent the spike” approach offers the best of both worlds: lowest fixed-cost infrastructure for day-to-day demands, and on-demand scalability in the cloud for temporary or seasonal surges.
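To make the economics concrete, here is a toy cost comparison; every rate and GPU-hour figure below is a hypothetical placeholder, not real vendor pricing.

```python
# Toy cost model for "own the base, rent the spike."
# All figures are hypothetical placeholders, not real pricing.

BASE_GPU_HOURS = 5_000    # steady-state monthly training demand
SPIKE_GPU_HOURS = 1_500   # occasional surge beyond the base
CLOUD_RATE = 3.00         # $ per GPU-hour, on demand
ONPREM_RATE = 1.20        # $ per GPU-hour, amortized hardware plus ops

# Option 1: rent everything in the cloud.
all_cloud = (BASE_GPU_HOURS + SPIKE_GPU_HOURS) * CLOUD_RATE

# Option 2: own the base on-prem, rent only the spike.
hybrid = BASE_GPU_HOURS * ONPREM_RATE + SPIKE_GPU_HOURS * CLOUD_RATE

print(f"All-cloud: ${all_cloud:,.0f}/month")  # $19,500
print(f"Hybrid:    ${hybrid:,.0f}/month")     # $10,500
```

Under these assumed numbers, owning the base nearly halves the monthly bill; the actual crossover point depends on real utilization and pricing.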
4) Build an AI center of excellence, and make AI a team sport
AI is a rapidly growing field, but it can still be tough to source professionals who already have deep domain expertise. In fact, a recent Deloitte study found that 68 percent of surveyed executives described their organization’s skills gap as “moderate to extreme,” with 27 percent rating it as “major” or “extreme.”
The reality is, the experts who can build your best AI applications are already working for you. They’re inside your business units, and they know your problems and data better than anyone. Many of them want to evolve into data scientists, but need mentoring and an environment where they can learn valuable skills while shadowing other experts in your organization.
Establishing an AI “center of excellence” creates an environment in which your organization can consolidate people, processes, and platforms. It enables you to groom and scale data science expertise from within, at far lower cost than hiring from outside.
Organizations that have successfully adopted AI are distinguished by their ability to de-risk their AI projects with the right partners, tools, software, and AI infrastructure from the start.
With this solid foundation in place, companies can make their data scientists and AI developers productive immediately, enabling them to innovate without worrying about costs or resource availability.
Adopting these four best practices will help IT teams lead their companies to uncover insights faster and speed the success of their AI initiatives.
About the Author:
Tony Paikeday, senior director of AI Systems at NVIDIA