[Q81-Q99] Latest Microsoft AI-900 First Attempt, Real Exam Dumps Updated [Apr-2023]

Get superior-quality AI-900 dumps questions from ExamsLabs. Nobody can stop you from getting to your dreams now. Your bright future is just a click away!

NEW QUESTION 81
Match the types of natural language processing workloads to the appropriate scenarios. To answer, drag the appropriate workload type from the column on the left to its scenario on the right. Each workload type may be used once, more than once, or not at all.
NOTE: Each correct selection is worth one point.

Explanation
Box 1: Entity recognition
Classify a broad range of entities in text, such as people, places, organizations, dates/times, and percentages, using named entity recognition. By contrast, key phrase extraction returns a list of relevant phrases that best describe the subject of each record.
Box 2: Sentiment analysis
Sentiment analysis is the process of determining whether a piece of writing is positive, negative, or neutral.
Box 3: Translation
Microsoft's versatile Translator Text API can be used to:
- Translate text from one language to another.
- Transliterate text from one script to another.
- Detect the language of the input text.
- Find alternate translations for specific text.
- Determine sentence length.
Reference: https://azure.microsoft.com/en-us/services/cognitive-services/text-analytics

NEW QUESTION 82
You need to reduce the load on telephone operators by implementing a chatbot to answer simple questions with predefined answers. Which two AI services should you use to achieve the goal? Each correct answer presents part of the solution.
NOTE: Each correct selection is worth one point.
A. Text Analytics
B. QnA Maker
C. Azure Bot Service
D. Translator Text

Section: Describe features of conversational AI workloads on Azure
Explanation
Bots are a popular way to provide support through multiple communication channels. You can use the QnA Maker service and Azure Bot Service together to create a bot that answers user questions.
Reference: https://docs.microsoft.com/en-us/learn/modules/build-faq-chatbot-qna-maker-azure-bot-service/

NEW QUESTION 83
To complete the sentence, select the appropriate option in the answer area.
Reference: https://azure.microsoft.com/en-us/services/cognitive-services/form-recognizer/

NEW QUESTION 84
You have the process shown in the following exhibit. Which type of AI solution is shown in the diagram?
A. a sentiment analysis solution
B. a chatbot
C. a machine learning model
D. a computer vision application

NEW QUESTION 85
Match the Azure Cognitive Services to the appropriate AI workloads. To answer, drag the appropriate service from the column on the left to its workload on the right. Each service may be used once, more than once, or not at all.
NOTE: Each correct match is worth one point.

NEW QUESTION 86
What are two tasks that can be performed by using computer vision? Each correct answer presents a complete solution.
NOTE: Each correct selection is worth one point.
A. Predict stock prices.
B. Detect brands in an image.
C. Detect the color scheme in an image.
D. Translate text between languages.
E. Extract key phrases.

Section: Describe features of computer vision workloads on Azure
Explanation
B: Azure's Computer Vision service gives you access to advanced algorithms that process images and return information based on the visual features you're interested in. For example, Computer Vision can determine whether an image contains adult content, find specific brands or objects, or find human faces.
C: Computer Vision can also analyze an image's color characteristics, including the dominant foreground and background colors and the accent color. Predicting stock prices is a forecasting task, and translating text and extracting key phrases are natural language processing tasks, not computer vision tasks.
Reference: https://docs.microsoft.com/en-us/azure/cognitive-services/computer-vision/overview
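To make the brand and color features concrete, here is a minimal sketch of calling the Computer Vision v3.2 analyze operation with the Brands and Color visual features. The endpoint, key, and image URL are placeholders for your own resource, not values from the question.

```python
import requests

# Placeholders - substitute your own Computer Vision resource values.
ENDPOINT = "https://<your-resource>.cognitiveservices.azure.com"
KEY = "<your-subscription-key>"

def analyze_image(image_url: str) -> dict:
    """Request brand detection and color analysis for a remote image."""
    response = requests.post(
        f"{ENDPOINT}/vision/v3.2/analyze",
        params={"visualFeatures": "Brands,Color"},
        headers={
            "Ocp-Apim-Subscription-Key": KEY,
            "Content-Type": "application/json",
        },
        json={"url": image_url},
    )
    response.raise_for_status()
    return response.json()

result = analyze_image("https://example.com/photo.jpg")  # hypothetical image URL
for brand in result.get("brands", []):
    print(brand["name"], brand["confidence"])  # detected brands
print(result["color"]["dominantColors"])       # detected color scheme
```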
NEW QUESTION 87
Which metric can you use to evaluate a classification model?
A. true positive rate
B. mean absolute error (MAE)
C. coefficient of determination (R2)
D. root mean squared error (RMSE)

Explanation
What does a good model look like? An ROC curve that approaches the top-left corner, with a 100% true positive rate and a 0% false positive rate, indicates the best model. A random model displays as a flat diagonal line from the bottom left to the top right; a model worse than random dips below the y = x line. MAE, R2, and RMSE are metrics for regression models, not classification models.
Reference: https://docs.microsoft.com/en-us/azure/machine-learning/how-to-understand-automated-ml#classification

NEW QUESTION 88
Match the types of computer vision to the appropriate scenarios. To answer, drag the appropriate workload type from the column on the left to its scenario on the right. Each workload type may be used once, more than once, or not at all.
NOTE: Each correct selection is worth one point.

Explanation
Box 1: Facial recognition
Face detection perceives faces and attributes in an image; person identification matches an individual in your private repository of up to 1 million people; perceived emotion recognition detects a range of facial expressions such as happiness, contempt, neutrality, and fear; and recognition and grouping identifies similar faces in images.
Box 2: OCR
Box 3: Object detection
Object detection is similar to tagging, but the API returns the bounding box coordinates (in pixels) for each object found. For example, if an image contains a dog, a cat, and a person, the Detect operation will list those objects together with their coordinates in the image. You can use this functionality to process the relationships between the objects in an image. It also lets you determine whether there are multiple instances of the same tag in an image.
The Detect API applies tags based on the objects or living things identified in the image. There is currently no formal relationship between the tagging taxonomy and the object detection taxonomy. At a conceptual level, the Detect API only finds objects and living things, while the Tag API can also include contextual terms like "indoor", which can't be localized with bounding boxes.
Reference:
https://azure.microsoft.com/en-us/services/cognitive-services/face/
https://docs.microsoft.com/en-us/azure/cognitive-services/computer-vision/concept-object-detection

NEW QUESTION 89
To complete the sentence, select the appropriate option in the answer area.
Using Recency, Frequency, and Monetary (RFM) values to identify segments of a customer base is an example of ___________.

Explanation
Clustering. Segmenting customers by their RFM values groups similar records together without any predefined labels, which makes it an unsupervised clustering task rather than classification.

NEW QUESTION 90
To complete the sentence, select the appropriate option in the answer area.

Explanation
Accelerate your business processes by automating information extraction. Form Recognizer applies advanced machine learning to accurately extract text, key/value pairs, and tables from documents. With just a few samples, Form Recognizer tailors its understanding to your documents, both on-premises and in the cloud. Turn forms into usable data at a fraction of the time and cost, so you can focus more time acting on the information rather than compiling it.
Reference: https://azure.microsoft.com/en-us/services/cognitive-services/form-recognizer/
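As a rough illustration of that workflow, the sketch below uses the azure-ai-formrecognizer Python SDK with the general prebuilt-document model to pull key/value pairs and tables from a document; the endpoint, key, and document URL are assumed placeholders, not values supplied by the exam question.

```python
from azure.ai.formrecognizer import DocumentAnalysisClient
from azure.core.credentials import AzureKeyCredential

# Placeholders - substitute your own Form Recognizer resource values.
ENDPOINT = "https://<your-resource>.cognitiveservices.azure.com"
KEY = "<your-subscription-key>"

client = DocumentAnalysisClient(ENDPOINT, AzureKeyCredential(KEY))

# Analyze a document with the general prebuilt model (no training samples needed).
poller = client.begin_analyze_document_from_url(
    "prebuilt-document", "https://example.com/invoice.pdf"  # hypothetical URL
)
result = poller.result()

# Key/value pairs extracted from the form.
for pair in result.key_value_pairs:
    value = pair.value.content if pair.value else ""
    print(f"{pair.key.content}: {value}")

# Tables, cell by cell.
for table in result.tables:
    print(f"Table with {table.row_count} rows x {table.column_count} columns")
    for cell in table.cells:
        print(cell.row_index, cell.column_index, cell.content)
```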
NEW QUESTION 91
For each of the following statements, select Yes if the statement is true. Otherwise, select No.
NOTE: Each correct selection is worth one point.

Explanation
Box 1: Yes
In machine learning, if you have labeled data, that means your data is marked up, or annotated, to show the target, which is the answer you want your machine learning model to predict. In general, data labeling can refer to tasks that include data tagging, annotation, classification, moderation, transcription, or processing.
Box 2: No
Box 3: No
Accuracy is simply the proportion of correctly classified instances. It is usually the first metric you look at when evaluating a classifier. However, when the test data is unbalanced (most of the instances belong to one class), or when you are more interested in the performance on one of the classes, accuracy doesn't really capture the effectiveness of a classifier.
Reference:
https://www.cloudfactory.com/data-labeling-guide
https://docs.microsoft.com/en-us/azure/machine-learning/studio/evaluate-model-performance

NEW QUESTION 92
To complete the sentence, select the appropriate option in the answer area.
Reference: https://docs.microsoft.com/en-us/azure/cognitive-services/computer-vision/concept-object-detection

NEW QUESTION 93
To complete the sentence, select the appropriate option in the answer area.
Reference: https://docs.microsoft.com/en-us/learn/modules/responsible-ai-principles/4-guiding-principles

NEW QUESTION 94
What are two tasks that can be performed by using the Computer Vision service? Each correct answer presents a complete solution.
NOTE: Each correct selection is worth one point.
A. Train a custom image classification model.
B. Detect faces in an image.
C. Recognize handwritten text.
D. Translate the text in an image between languages.

Explanation
B: Azure's Computer Vision service provides developers with access to advanced algorithms that process images and return information based on the visual features you're interested in. For example, Computer Vision can determine whether an image contains adult content, find specific brands or objects, or find human faces. Microsoft Azure provides multiple cognitive services that you can use to detect and analyze faces: Computer Vision offers face detection and some basic face analysis, such as determining age; Video Indexer detects and identifies faces in a video; and the Face service offers pre-built algorithms that can detect, recognize, and analyze faces.
C: Computer Vision includes Optical Character Recognition (OCR) capabilities. You can use the Read API to extract printed and handwritten text from images and documents. It uses the latest models and works with text on a variety of surfaces and backgrounds, including receipts, posters, business cards, letters, and whiteboards. The Read API is the better option for scanned documents that have a lot of text, and it can automatically determine the proper recognition model. (Training a custom image classification model is the job of the separate Custom Vision service.)
Reference: https://docs.microsoft.com/en-us/azure/cognitive-services/computer-vision/home
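Because the Read API is asynchronous, a client submits the image, receives an Operation-Location URL, and polls it until analysis completes. The sketch below illustrates that flow; the endpoint, key, and image URL are placeholders.

```python
import time
import requests

# Placeholders - substitute your own Computer Vision resource values.
ENDPOINT = "https://<your-resource>.cognitiveservices.azure.com"
KEY = "<your-subscription-key>"

def read_text(image_url: str) -> list:
    """Submit an image to the Read API, poll for completion, return the text lines."""
    submit = requests.post(
        f"{ENDPOINT}/vision/v3.2/read/analyze",
        headers={"Ocp-Apim-Subscription-Key": KEY,
                 "Content-Type": "application/json"},
        json={"url": image_url},
    )
    submit.raise_for_status()
    operation_url = submit.headers["Operation-Location"]

    # Poll the operation until the analysis succeeds or fails.
    while True:
        result = requests.get(
            operation_url, headers={"Ocp-Apim-Subscription-Key": KEY}
        ).json()
        if result["status"] in ("succeeded", "failed"):
            break
        time.sleep(1)

    lines = []
    for page in result.get("analyzeResult", {}).get("readResults", []):
        lines.extend(line["text"] for line in page["lines"])
    return lines

print(read_text("https://example.com/whiteboard.jpg"))  # hypothetical image URL
```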
NEW QUESTION 95
To complete the sentence, select the appropriate option in the answer area.

Explanation
Reliability and safety: To build trust, it's critical that AI systems operate reliably, safely, and consistently under normal circumstances and in unexpected conditions. These systems should be able to operate as they were originally designed, respond safely to unanticipated conditions, and resist harmful manipulation.
Reference: https://docs.microsoft.com/en-us/learn/modules/responsible-ai-principles/4-guiding-principles

NEW QUESTION 96
You need to develop a web-based AI solution for a customer support system. Users must be able to interact with a web app that will guide them to the best resource or answer. Which service should you use?
A. Custom Vision
B. QnA Maker
C. Translator Text
D. Face

Explanation
QnA Maker is a cloud-based API service that lets you create a conversational question-and-answer layer over your existing data. Use it to build a knowledge base by extracting questions and answers from your semi-structured content, including FAQs, manuals, and documents. Answer users' questions with the best answers from the QnAs in your knowledge base, automatically. Your knowledge base gets smarter, too, as it continually learns from user behavior.
Reference: https://azure.microsoft.com/en-us/services/cognitive-services/qna-maker/

NEW QUESTION 97
You send an image to a Computer Vision API and receive back the annotated image shown in the exhibit. Which type of computer vision was used?
A. object detection
B. semantic segmentation
C. optical character recognition (OCR)
D. image classification

Explanation
Object detection is similar to tagging, but the API returns the bounding box coordinates (in pixels) for each object found. For example, if an image contains a dog, a cat, and a person, the Detect operation will list those objects together with their coordinates in the image. You can use this functionality to process the relationships between the objects in an image. It also lets you determine whether there are multiple instances of the same tag in an image.
The Detect API applies tags based on the objects or living things identified in the image. There is currently no formal relationship between the tagging taxonomy and the object detection taxonomy. At a conceptual level, the Detect API only finds objects and living things, while the Tag API can also include contextual terms like "indoor", which can't be localized with bounding boxes.
Reference: https://docs.microsoft.com/en-us/azure/cognitive-services/computer-vision/concept-object-detection
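The bounding boxes described above come back as pixel rectangles in the detect response. Here is a minimal sketch of calling the v3.2 detect operation and printing each object with its box; the endpoint, key, and image URL are placeholders.

```python
import requests

# Placeholders - substitute your own Computer Vision resource values.
ENDPOINT = "https://<your-resource>.cognitiveservices.azure.com"
KEY = "<your-subscription-key>"

def detect_objects(image_url: str) -> None:
    """Print each detected object with its confidence and pixel bounding box."""
    response = requests.post(
        f"{ENDPOINT}/vision/v3.2/detect",
        headers={"Ocp-Apim-Subscription-Key": KEY,
                 "Content-Type": "application/json"},
        json={"url": image_url},
    )
    response.raise_for_status()
    for obj in response.json().get("objects", []):
        box = obj["rectangle"]  # x, y, w, h in pixels
        print(f'{obj["object"]} ({obj["confidence"]:.2f}): '
              f'x={box["x"]}, y={box["y"]}, w={box["w"]}, h={box["h"]}')

detect_objects("https://example.com/street.jpg")  # hypothetical image URL
```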
NEW QUESTION 98
For each of the following statements, select Yes if the statement is true. Otherwise, select No.
NOTE: Each correct selection is worth one point.

NEW QUESTION 99
To complete the sentence, select the appropriate option in the answer area.
Using Recency, Frequency, and Monetary (RFM) values to identify segments of a customer base is an example of ___________.

Explanation
Clustering, as in question 89: segmentation groups similar customers together without predefined labels.

Guaranteed Success with Valid Microsoft AI-900 Dumps: https://www.examslabs.com/Microsoft/Microsoft-Certified-Azure-AI-Fundamentals/best-AI-900-exam-dumps.html
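To show why RFM segmentation is a clustering problem, here is a minimal sketch using scikit-learn's KMeans on a toy RFM table. The customer values are invented for illustration; no labels are supplied, and the algorithm discovers the segments itself.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

# Toy RFM table (invented values): days since last purchase,
# number of purchases, and total spend for six customers.
rfm = np.array([
    [5,  40, 1200.0],
    [7,  35, 1100.0],
    [60,  4,   90.0],
    [75,  2,   40.0],
    [20, 12,  400.0],
    [25, 10,  350.0],
])

# Scale the features so monetary value does not dominate the distances.
scaled = StandardScaler().fit_transform(rfm)

# No target labels are provided - the model groups similar customers itself,
# which is what makes this clustering rather than classification.
segments = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(scaled)
print(segments)  # one segment id per customer, e.g. [0 0 1 1 2 2]
```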