
Amy Lokey, chief experience officer at ServiceNow.
Image: ServiceNow
ServiceNow is a $9 billion platform-as-a-service provider. Nearly 20 years old, the Santa Clara, Calif.-based company focused initially on IT service management, a strategic approach to managing and delivering IT services within an organization based on business goals.
Over time, it has become a full enterprise cloud platform, with a wide range of IT, operations, business management, HR, and customer service offerings. More recently, it has fully embraced AI, rebranding itself with the tagline, "Put AI to work with ServiceNow."
Also: Generative AI can transform customer experiences. But only if you focus on other areas first
In May, ServiceNow announced a suite of generative AI capabilities tailored to business management. As with most large-scale AI implementations, a lot of questions and opportunities arise from widespread AI deployment.
ZDNET had the opportunity to speak with Amy Lokey, chief experience officer at ServiceNow. Prior to her role at ServiceNow, Lokey served as VP for user experience, first at Google and then at LinkedIn. She was also a user experience designer at Yahoo!
Let's get started.
ZDNET: Please introduce yourself and explain your role as chief experience officer at ServiceNow.
Amy Lokey: I have one of the most rewarding roles at ServiceNow. I lead the global Experience team. We focus on making ServiceNow simple, intuitive, and engaging to use.
Using enterprise software at work should be as elegant as any consumer experience, so my team includes experts in design, research, product documentation, and strategic operations. Our mission is to create product experiences that people love, making their work easier, more productive, and even enjoyable.
ZDNET: What are the primary responsibilities of the chief experience officer, and how do they intersect with AI initiatives at ServiceNow?
AL: The title of chief experience officer is relatively new at ServiceNow. When I joined nearly five years ago, we were in the early stages of our experience journey. Our platform has been making work work better for 15 years.
My job was to make the user experience match the power of the product. This approach is critical to our business strategy. ServiceNow is an experience layer that can help users manage work and complete tasks across other business applications. We can simplify how people do their work, and to do that, we need to be user-experience-driven in our approach and in what we deliver for our customers.
Also: 6 ways AI can help launch your next business venture
Today, a critical part of my role is to work with our Product and Engineering teams to ensure that generative AI, embedded in the ServiceNow platform, unlocks new standards of usefulness and self-service. For example, enabling customer service agents to summarize case notes, a seemingly simple feature, helps cut our own agents' case resolution time in half.
That's what makes AI experiences truly magical: making people more productive, so they can do work that is meaningful rather than mundane.
ZDNET: Can you elaborate on ServiceNow's approach to developing AI ethically, focusing on human-centricity, inclusivity, transparency, and accountability?
AL: These principles are at the heart of everything we do, ensuring that our AI features genuinely enhance people's work experiences in meaningful ways.
First and foremost, we place people at the center of AI development. This includes a "human-in-the-loop" process that allows users to evaluate and modify what AI suggests, to ensure it meets their specific needs. We closely monitor usefulness through in-product feedback mechanisms and ongoing user experience research, allowing us to continually refine and enhance our products to meet the needs of the people who use them.
Inclusivity is also essential, and it speaks directly to ServiceNow's core value of "celebrate diversity; create belonging." Our AI models are most often domain-specific: trained and tested to reflect and accommodate the incredible range of people who use our platform and the main use cases for ServiceNow.
Also: We need bold minds to challenge AI, not lazy prompt writers, bank CIO says
With a customer base of more than 8,100 enterprises, we also leverage diverse datasets to reduce the risk of bias in AI. All of this is underscored by our broad-based, customer-supported AI research and design program that puts, and keeps, inclusivity at the forefront of all our product experiences.
Transparency builds trust. We intentionally create product documentation that is both comprehensive and clear. Generative AI is built directly into the Now Platform, and we want customers to know how it works and understand that they are in control.
When designing our product experiences, we make it clear where Now Assist GenAI is available and allow people to decide when and how they use it. Our recently published Responsible AI Guidelines handbook is a testament to this commitment, offering resources to help customers evaluate their AI use and ensure it remains ethical and trustworthy.
Finally, accountability is the cornerstone of our AI experiences. We take our responsibilities regarding AI seriously and have adopted an oversight structure for governance. We collaborate with external experts and the broader AI community to help refine and pressure-test our approach. We also have an internal Data Ethics Committee and Governance Council that reviews the use cases for the technology.
ZDNET: In what ways does ServiceNow ensure inclusivity in its AI development process?
AL: While AI has tremendous potential to make the world a better, more inclusive place, that is only possible if inclusivity is intentionally considered as part of the AI strategy from the start. Not only do we follow this principle, but we also regularly review and refine our AI model datasets during development to ensure that they reflect the diversity of our customers and their end users.
While we offer customers a choice of models, our primary AI model strategy is domain-specific. We train smaller models on specific data sets, which helps weed out bias, significantly reduces hallucinations, and improves overall accuracy compared to general-purpose models.
ZDNET: What measures does ServiceNow take to maintain transparency in its AI projects?
AL: We take a very hands-on approach to promoting open-science, open-source, open-governance AI development. For example, we've partnered with leading research organizations that are working on some of the world's biggest AI initiatives. This includes our work with Nvidia and Hugging Face to launch StarCoder2, a family of openly developed LLMs that organizations can customize as they see fit.
We are also founding members of the AI Alliance, which includes members across academia, research, science, and industry, all of whom are dedicated to advancing AI that is open and responsible. Additionally, we have invested internally in AI research and development. Our Research team has published more than 70 studies on generative AI and LLMs, which have informed the work our Product Development team and Data Ethics Committee are doing.
Also: Generative AI is new attack vector endangering enterprises, says CrowdStrike CTO
On a day-to-day basis, transparency comes down to communication. When we think about how we communicate about AI with customers and their end users, we over-communicate both the limits and the intended usage of AI features to give them the best, most accurate picture of the tools we provide.
These mechanisms include the model cards we've created, which are updated with each of our scheduled releases and explain each AI model's specific context, training data, risks, and limitations.
We also build trust by labeling LLM-generated responses in the UI so that users know they were produced by AI, and by citing sources so customers can understand how the LLM reached a conclusion or found information.
ZDNET: Can you provide examples of how ServiceNow's Responsible AI Guidelines have been implemented in recent projects?
AL: Our Responsible AI Guidelines handbook serves as a practical tool to foster deeper, critical conversations between our customers and their cross-functional teams.
We applied our guidelines to Now Assist, our generative AI experience. Our Design team uses them as a north star to ensure that our AI innovations are human-centric. For example, when designing generative AI summarization, they referenced these principles and created acceptance criteria based on them. Additionally, to reinforce our core principle of transparency, we are also publishing model cards for all Now Assist capabilities.
Also: The ethics of generative AI: How we can harness this powerful technology
We have also developed an extensive AI product experience pattern and standards library that adheres to the guidelines and includes guidance on things like generative AI experience patterns, AI predictions, mechanisms to support human feedback, toxicity handling, prompting, and more.
During our product experience reviews, we use the guidelines to ask our teams critical audit questions to ensure our AI-driven experiences are helpful and operate responsibly and ethically for our customers. Several teams at ServiceNow have used the guidelines as a reference for policies and other work. For example, the core value pillars of our guidelines play an important role in our ongoing AI governance development processes.
Our Research team references specific guidelines throughout the handbook to formulate research questions, offer feedback to product teams, and provide valuable resources that inform product design and development, all while advocating for human-centered AI.
Most importantly, we recognize these guidelines are a living resource, and we are actively engaging with our customers to gather feedback, allowing us to iterate and evolve our guidelines regularly. This collaborative approach ensures our guidelines remain relevant and effective in promoting responsible AI practices.
ZDNET: What steps does ServiceNow take to help customers understand and use AI responsibly and effectively? How does ServiceNow ensure that its AI solutions align with the ethical standards and values of its customers?
AL: Simply put, we build software we know our customers can use. We talk with customers across a wide range of industries, and we run ServiceNow on ServiceNow. We are confident that we and our customers have what is needed in the Now Platform to meet internal and external requirements.
We build models to meet specific use cases and know what we are solving for, all aligned to our responsible AI practices. Because we are a platform, customers do not have to piece together individual solutions. Customers leverage the comprehensive resources we have created for responsible AI right out of the box.
Also: How Deloitte navigates ethics in the AI-driven workforce: Involve everyone
ZDNET: What challenges do companies face when communicating their use of AI to customers and partners, and how can they overcome those challenges?
AL: One of the biggest challenges companies face is misunderstanding. There is a lot of fear around AI, but at the end of the day, it is a tool like anything else. The key to communicating about the use of AI is to be transparent and direct.
At ServiceNow, we articulate both the potential and the limits of AI in our products to our customers from the start. This kind of open, honest dialogue goes a long way toward overcoming concerns and setting expectations.
ZDNET: How can businesses balance the benefits of AI with the need to maintain stakeholder trust?
AL: For AI to be trusted, it needs to be helpful. Showing stakeholders, whether they are an employee, a customer, a partner, or anything in between, how AI can be used to improve their experiences is absolutely essential to driving both trust and adoption.
Also: AI leaders urged to integrate local data models for diversity's sake
ZDNET: How can companies ensure that their AI initiatives are inclusive and benefit a diverse range of users?
AL: The importance of engaging a diverse workforce simply cannot be overstated. The use of AI has implications for everyone, which means everyone needs a seat at the table. Every company implementing AI should prioritize talking to, and taking feedback from, any audience the solution will affect. AI does not work in a silo, so it should not be developed inside one either!
At ServiceNow, we lead by example and take care to ensure that the teams who develop our AI features are diverse, representing a wide range of people and viewpoints. For instance, we have an Employee Accessibility Panel that helps validate and test new features early in the development process so that they work well for people with different abilities.
ZDNET: What are some best practices for companies looking to develop and deploy AI responsibly?
AL: Ultimately, companies should be thoughtful and strategic about when, where, and how to use AI. Here are three key considerations to help them do so:
- Incorporate human expertise and feedback: Practices such as user experience research should be carried out throughout the process of developing and deploying AI, and should continue on an ongoing basis post-deployment. That way, companies can better ensure that AI use cases stay centered on making work better for human beings.
- Give more controls to users: This can include letting users review AI-generated outputs before accepting them, or letting them turn off generative AI capabilities within products entirely. This helps maintain transparency and gives users control over how they want to interact with AI.
- Make sure documentation is clear: Whether it is model cards that explain each specific AI model's context or labels on AI-generated outputs, it is important that end users know when they are interacting with AI and understand the context behind the technology.
ZDNET: What are the long-term goals for AI development at ServiceNow, and how do they align with ethical considerations?
AL: The beauty of the Now Platform is that our customers have a one-stop shop where they can apply generative AI to every critical business function, which drives tangible outcomes. Generative AI has moved from experimentation to implementation. Our customers are already using it to drive productivity and cost efficiency.
Also: Master AI with no tech skills? Why complex systems demand diverse learning
Our focus is on how we improve day-to-day work for customers and end users by helping them work smarter, faster, and better. AI augments the work we already do. We are deeply committed to advancing its use responsibly.
That commitment is central to how we design our products, and we are dedicated to helping our customers use AI responsibly as well.
ZDNET: What advice would you give to other companies looking to advance AI responsibly?
AL: Responsible AI development should not be a one-time check box, but an ongoing, long-term priority. As AI continues to evolve, companies should be nimble and ready to adapt to new challenges and questions from stakeholders without losing sight of the four key principles:
- Build AI with humans at the core.
- Prioritize inclusivity.
- Be transparent.
- Remain accountable to your customers, employees, and humanity writ large.
Final thoughts
ZDNET's editors and I want to share a big shoutout to Amy for taking the time to engage in this interview. There is a lot of food for thought here. Thanks, Amy!
What do you think? Did Amy's recommendations give you any ideas about how to deploy and scale AI responsibly within your organization? Let us know in the comments below.
You can follow my day-to-day project updates on social media. Be sure to subscribe to my weekly update newsletter, and follow me on Twitter/X at @DavidGewirtz, on Facebook at Facebook.com/DavidGewirtz, on Instagram at Instagram.com/DavidGewirtz, and on YouTube at YouTube.com/DavidGewirtzTV.