[Q91-Q109] Pass Your Google Cloud Certified Professional-Cloud-Architect Exam Easily with Accurate PDF Questions [Jul 31, 2022]

Professional-Cloud-Architect certification exam dumps questions are in here.

QUESTION 91
Case Study 3: JencoMart

Company Overview
JencoMart is a global retailer with over 10,000 stores in 16 countries. The stores carry a range of goods, such as groceries, tires, and jewelry. One of the company's core values is excellent customer service. In addition, they recently introduced an environmental policy to reduce their carbon output by 50% over the next 5 years.

Company Background
JencoMart started as a general store in 1931 and has grown into one of the world's leading brands, known for great value and customer service. Over time, the company transitioned from physical stores only to a hybrid of stores and online sales, with 25% of sales online. Currently, JencoMart has little presence in Asia but considers that market key for future growth.

Solution Concept
JencoMart wants to migrate several critical applications to the cloud but has not completed a technical review to determine their suitability for the cloud and the engineering required for migration. They currently host all of these applications on infrastructure that is at its end of life and is no longer supported.

Existing Technical Environment
JencoMart hosts all of its applications in 4 data centers: 3 in North America and 1 in Europe; most applications are dual-homed. JencoMart understands the dependencies and resource usage metrics of their on-premises architecture.

Application: Customer loyalty portal
LAMP (Linux, Apache, MySQL and PHP) application served from the two JencoMart-owned U.S. data centers.

Database
* Oracle Database stores user profiles
* PostgreSQL database stores user credentials
  - Single-homed in US West
  - Authenticates all users

Compute
* 30 machines in US West Coast, each machine with HDD (RAID 1)
* 20 machines in US East Coast, each machine with a dual-core CPU

Storage
* Access to shared 100 TB SAN in each location
* Tape backup every week

Business Requirements
* Optimize for capacity during peak periods and value during off-peak periods
* Guarantee service availability and support
* Reduce on-premises footprint and associated financial and environmental impact
* Move to an outsourcing model to avoid large upfront costs associated with infrastructure purchase
* Expand services into Asia

Technical Requirements
* Assess key application for cloud suitability
* Modify application for the cloud
* Move applications to a new infrastructure
* Leverage managed services wherever feasible
* Sunset 20% of capacity in existing data centers
* Decrease latency in Asia

CEO Statement
JencoMart will continue to develop personal relationships with our customers as more people access the web. The future of our retail business is in the global market and the connection between online and in-store experiences. As a large global company, we also have a responsibility to the environment through 'green' initiatives and policies.

CTO Statement
The challenges of operating data centers prevent focus on key technologies critical to our long-term success. Migrating our data services to a public cloud infrastructure will allow us to focus on big data and machine learning to improve service to our customers.

CFO Statement
Since its founding, JencoMart has invested heavily in our data services infrastructure. However, because of changing market trends, we need to outsource our infrastructure to ensure our long-term success. This model will allow us to respond to increasing customer demand during peak periods and reduce costs.

For this question, refer to the JencoMart case study.
A few days after JencoMart migrates the user credentials database to Google Cloud Platform and shuts down the old server, the new database server stops responding to SSH connections. It is still serving database requests to the application servers correctly. What three steps should you take to diagnose the problem? Choose 3 answers.

A. Delete the virtual machine (VM) and disks and create a new one.
B. Delete the instance, attach the disk to a new VM, and investigate.
C. Take a snapshot of the disk and connect it to a new machine to investigate.
D. Check inbound firewall rules for the network the machine is connected to.
E. Connect the machine to another network with very simple firewall rules and investigate.
F. Print the Serial Console output for the instance for troubleshooting, activate the interactive console, and investigate.

Explanation:
D: Handling the "Unable to connect on port 22" error message. Possible causes include:
* There is no firewall rule allowing SSH access on the port. SSH access on port 22 is enabled on all Compute Engine instances by default. If you have disabled access, SSH from the Browser will not work. If you run sshd on a port other than 22, you need to enable access to that port with a custom firewall rule.
* The firewall rule allowing SSH access is enabled but is not configured to allow connections from GCP Console services. Source IP addresses for browser-based SSH sessions are dynamically allocated by GCP Console and can vary from session to session.

F: Handling the "Could not connect, retrying…" error. You can verify that the daemon is running by navigating to the serial console output page and looking for output lines prefixed with the accounts-from-metadata: string. If you are using a standard image but do not see these output prefixes in the serial console output, the daemon might be stopped. Reboot the instance to restart the daemon.

Reference: https://cloud.google.com/compute/docs/ssh-in-browser
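Both checks from the explanation can also be run with the stock gcloud CLI. A minimal sketch, in which the instance name and zone are placeholders:

    # Answer F: print the serial console output and look for boot or sshd
    # problems, e.g. missing "accounts-from-metadata:" lines.
    gcloud compute instances get-serial-port-output credentials-db --zone=us-west1-a

    # Answer D: list the firewall rules and confirm one still allows
    # ingress on TCP port 22 from the addresses you connect from.
    gcloud compute firewall-rules list

Both subcommands are part of the standard Cloud SDK; only the resource names here are assumptions.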
QUESTION 92
A production database virtual machine on Google Compute Engine has an ext4-formatted persistent disk for data files. The database is about to run out of storage space. How can you remediate the problem with the least amount of downtime?

A. In the Cloud Platform Console, increase the size of the persistent disk and use the resize2fs command in Linux.
B. Shut down the virtual machine, use the Cloud Platform Console to increase the persistent disk size, then restart the virtual machine.
C. In the Cloud Platform Console, increase the size of the persistent disk and verify the new space is ready to use with the fdisk command in Linux.
D. In the Cloud Platform Console, create a new persistent disk attached to the virtual machine, format and mount it, and configure the database service to move the files to the new disk.
E. In the Cloud Platform Console, create a snapshot of the persistent disk, restore the snapshot to a new larger disk, unmount the old disk, mount the new disk, and restart the database service.
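The online resize described in answer A can also be done from the command line. A minimal sketch, assuming a data disk named db-data attached as /dev/sdb:

    # Grow the persistent disk while the VM keeps running.
    gcloud compute disks resize db-data --size=500GB --zone=us-west1-a

    # Inside the VM: grow the mounted ext4 filesystem to fill the larger
    # disk. resize2fs can grow ext4 online, so the database never stops.
    sudo resize2fs /dev/sdb

The disk name, size, zone, and device path are placeholders; both commands exist in the standard tooling.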
QUESTION 93
You want to enable your running Google Kubernetes Engine cluster to scale as demand for your application changes. What should you do?

A. Add additional nodes to your Kubernetes Engine cluster using the following command:
   gcloud container clusters resize CLUSTER_NAME --size 10
B. Add a tag to the instances in the cluster with the following command:
   gcloud compute instances add-tags INSTANCE --tags enable-autoscaling max-nodes-10
C. Update the existing Kubernetes Engine cluster with the following command:
   gcloud alpha container clusters update mycluster --enable-autoscaling --min-nodes=1 --max-nodes=10
D. Create a new Kubernetes Engine cluster with the following command:
   gcloud alpha container clusters create mycluster --enable-autoscaling --min-nodes=1 --max-nodes=10
   and redeploy your application

Explanation:
To enable autoscaling for an existing node pool, run the following command:
   gcloud container clusters update [CLUSTER_NAME] --enable-autoscaling --min-nodes 1 --max-nodes 10 --zone [COMPUTE_ZONE] --node-pool default-pool
Reference: https://cloud.google.com/kubernetes-engine/docs/concepts/cluster-autoscaler

QUESTION 94
You are monitoring Google Kubernetes Engine (GKE) clusters in a Cloud Monitoring workspace. As a Site Reliability Engineer (SRE), you need to triage incidents quickly. What should you do?

A. Navigate the predefined dashboards in the Cloud Monitoring workspace, and then add metrics and create alert policies.
B. Navigate the predefined dashboards in the Cloud Monitoring workspace, create custom metrics, and install alerting software on a Compute Engine instance.
C. Write a shell script that gathers metrics from GKE nodes, publish these metrics to a Pub/Sub topic, export the data to BigQuery, and make a Data Studio dashboard.
D. Create a custom dashboard in the Cloud Monitoring workspace for each incident, and then add metrics and create alert policies.

Reference: https://cloud.google.com/monitoring/charts/dashboards

QUESTION 95
Your company is developing a new application that will allow globally distributed users to upload pictures and share them with other selected users. The application will support millions of concurrent users. You want to allow developers to focus on just building code without having to create and maintain the underlying infrastructure. Which service should you use to deploy the application?

A. App Engine
B. Cloud Endpoints
C. Compute Engine
D. Google Kubernetes Engine

Reference: https://cloud.google.com/terms/services

QUESTION 96
For this question, refer to the Dress4Win case study.
As part of their new application experience, Dress4Win allows customers to upload images of themselves. The customer has exclusive control over who may view these images. Customers should be able to upload images with minimal latency and also be shown their images quickly on the main application page when they log in. Which configuration should Dress4Win use?

A. Store image files in a Google Cloud Storage bucket. Use Google Cloud Datastore to maintain metadata that maps each customer's ID and their image files.
B. Store image files in a Google Cloud Storage bucket. Add custom metadata to the uploaded images in Cloud Storage that contains the customer's unique ID.
C. Use a distributed file system to store customers' images. As storage needs increase, add more persistent disks and/or nodes. Assign each customer a unique ID, which sets each file's owner attribute, ensuring privacy of images.
D. Use a distributed file system to store customers' images. As storage needs increase, add more persistent disks and/or nodes. Use a Google Cloud SQL database to maintain metadata that maps each customer's ID to their image files.
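The custom metadata mentioned in option B can be attached with standard gsutil commands. A minimal sketch; the bucket, object, and customer ID are hypothetical:

    # Upload a customer image.
    gsutil cp selfie.jpg gs://dress4win-user-images/selfie.jpg

    # Attach the owning customer's ID as custom (x-goog-meta-*) metadata.
    gsutil setmeta -h "x-goog-meta-customer-id:12345" gs://dress4win-user-images/selfie.jpg

gsutil setmeta with x-goog-meta-* headers is standard gsutil usage; note that custom metadata alone does not enforce access control, which is why option A pairs Cloud Storage with a separate Datastore mapping.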
Topic 1: Dress4Win

Company Overview
Dress4Win is a web-based company that helps their users organize and manage their personal wardrobe using a website and mobile application. The company also cultivates an active social network that connects their users with designers and retailers. They monetize their services through advertising, e-commerce, referrals, and a freemium app model.

Company Background
Dress4Win's application has grown from a few servers in the founder's garage to several hundred servers and appliances in a colocated data center. However, the capacity of their infrastructure is now insufficient for the application's rapid growth. Because of this growth and the company's desire to innovate faster, Dress4Win is committing to a full migration to a public cloud.

Solution Concept
For the first phase of their migration to the cloud, Dress4Win is considering moving their development and test environments. They are also considering building a disaster recovery site, because their current infrastructure is at a single location. They are not sure which components of their architecture they can migrate as is and which components they need to change before migrating them.

Existing Technical Environment
The Dress4Win application is served out of a single data center location.
Databases:
* MySQL – user data, inventory, static data
* Redis – metadata, social graph, caching
Application servers:
* Tomcat – Java micro-services
* Nginx – static content
* Apache Beam – batch processing
Storage appliances:
* iSCSI for VM hosts
* Fiber channel SAN – MySQL databases
* NAS – image storage, logs, backups
Apache Hadoop/Spark servers:
* Data analysis
* Real-time trending calculations
MQ servers:
* Messaging
* Social notifications
* Events
Miscellaneous servers:
* Jenkins, monitoring, bastion hosts, security scanners

Business Requirements
* Build a reliable and reproducible environment with scaled parity of production.
* Improve security by defining and adhering to a set of security and Identity and Access Management (IAM) best practices for cloud.
* Improve business agility and speed of innovation through rapid provisioning of new resources.
* Analyze and optimize architecture for performance in the cloud.
* Migrate fully to the cloud if all other requirements are met.

Technical Requirements
* Evaluate and choose an automation framework for provisioning resources in cloud.
* Support failover of the production environment to cloud during an emergency.
* Identify production services that can migrate to cloud to save capacity.
* Use managed services whenever possible.
* Encrypt data on the wire and at rest.
* Support multiple VPN connections between the production data center and cloud environment.

CEO Statement
Our investors are concerned about our ability to scale and contain costs with our current infrastructure. They are also concerned that a new competitor could use a public cloud platform to offset their up-front investment, freeing them to focus on developing better features.

CTO Statement
We have invested heavily in the current infrastructure, but much of the equipment is approaching the end of its useful life. We are consistently waiting weeks for new gear to be racked before we can start new projects. Our traffic patterns are highest in the mornings and weekend evenings; during other times, 80% of our capacity is sitting idle.

CFO Statement
Our capital expenditure is now exceeding our quarterly projections. Migrating to the cloud will likely cause an initial increase in spending, but we expect to fully transition before our next hardware refresh cycle. Our total cost of ownership (TCO) analysis over the next 5 years puts a cloud strategy between 30 to 50% lower than our current model.

QUESTION 97
For this question, refer to the JencoMart case study.
JencoMart has decided to migrate user profile storage to Google Cloud Datastore and the application servers to Google Compute Engine (GCE). During the migration, the existing infrastructure will need access to Datastore to upload the data. What service account key-management strategy should you recommend?

A. Provision service account keys for the on-premises infrastructure and for the GCE virtual machines (VMs).
B. Authenticate the on-premises infrastructure with a user account and provision service account keys for the VMs.
C. Provision service account keys for the on-premises infrastructure and use Google Cloud Platform (GCP) managed keys for the VMs.
D. Deploy a custom authentication service on GCE/Google Container Engine (GKE) for the on-premises infrastructure and use GCP managed keys for the VMs.

QUESTION 98
Your company uses the Firewall Insights feature in the Google Network Intelligence Center. You have several firewall rules applied to Compute Engine instances. You need to evaluate the efficiency of the applied firewall ruleset. When you bring up the Firewall Insights page in the Google Cloud Console, you notice that there are no log rows to display. What should you do to troubleshoot the issue?

A. Enable Virtual Private Cloud (VPC) flow logging.
B. Enable Firewall Rules Logging for the firewall rules you want to monitor.
C. Verify that your user account is assigned the compute.networkAdmin Identity and Access Management (IAM) role.
D. Install the Google Cloud SDK, and verify that there are no Firewall logs in the command line output.

Reference: https://cloud.google.com/network-intelligence-center/docs/firewall-insights/how-to/using-firewall-insights
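Firewall Rules Logging (answer B) is enabled per rule rather than per network. A minimal sketch, where the rule name is only an example:

    # Turn on logging for an existing rule so Firewall Insights has
    # log rows to analyze; repeat for each rule you want to monitor.
    gcloud compute firewall-rules update allow-ssh --enable-logging

The --enable-logging flag is part of the standard gcloud CLI; "allow-ssh" is a placeholder rule name.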
QUESTION 99
You need to design a solution for global load balancing based on the URL path being requested. You need to ensure operations reliability and end-to-end in-transit encryption based on Google best practices. What should you do?

A. Create a cross-region load balancer with URL Maps.
B. Create an HTTPS load balancer with URL maps.
C. Create appropriate instance groups and instances. Configure SSL proxy load balancing.
D. Create a global forwarding rule. Configure SSL proxy balancing.

Reference: https://cloud.google.com/load-balancing/docs/https/url-map

QUESTION 100
You are using Cloud Shell and need to install a custom utility for use in a few weeks. Where can you store the file so it is in the default execution path and persists across sessions?

A. ~/bin
B. Cloud Storage
C. /google/scripts
D. /usr/local/bin
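Answer A works because only the home directory survives Cloud Shell session resets, and ~/bin is picked up by the default login profile's PATH. A small sketch, with a hypothetical utility name:

    # In Cloud Shell: everything under ~/ persists across sessions.
    mkdir -p ~/bin
    cp mytool ~/bin/mytool
    chmod +x ~/bin/mytool

    # In a later session, the utility is on the execution path:
    mytool --help

Paths outside the home directory live on the ephemeral session VM, so files placed there are reset between sessions.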
QUESTION 101
For this question, refer to the TerramEarth case study.
TerramEarth's CTO wants to use the raw data from connected vehicles to help identify approximately when a vehicle in the field will experience a failure, so the development team knows where to focus its efforts. You want to allow analysts to centrally query the vehicle data. Which architecture should you recommend?

(The four candidate architecture diagrams are not included in this export.)

A. Option A
B. Option B
C. Option C
D. Option D

References:
https://cloud.google.com/solutions/iot/
https://cloud.google.com/solutions/designing-connected-vehicle-platform
https://cloud.google.com/solutions/designing-connected-vehicle-platform#data_ingestion
http://www.eweek.com/big-data-and-analytics/google-touts-value-of-cloud-iot-core-for-analyzing-connected-car-data

QUESTION 102
One of the developers on your team deployed their application in Google Container Engine with the Dockerfile below. They report that their application deployments are taking too long.

(The Dockerfile image is not included in this export.)

You want to optimize this Dockerfile for faster deployment times without adversely affecting the app's functionality. Which two actions should you take? Choose 2 answers.

A. Remove Python after running pip.
B. Remove dependencies from requirements.txt.
C. Use a slimmed-down base image like Alpine Linux.
D. Use larger machine types for your Google Container Engine node pools.
E. Copy the source after the package dependencies (Python and pip) are installed.

Explanation:
The speed of deployment can be improved by limiting the size of the uploaded app, limiting the complexity of the build in the Dockerfile, and by ensuring a fast and reliable internet connection.
Note: Alpine Linux is built around musl libc and busybox. This makes it smaller and more resource-efficient than traditional GNU/Linux distributions. A container requires no more than 8 MB, and a minimal installation to disk requires around 130 MB of storage. Not only do you get a fully-fledged Linux environment, but also a large selection of packages from the repository.

References:
https://groups.google.com/forum/#!topic/google-appengine/hZMEkmmObDU
https://www.alpinelinux.org/about/
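Since the original Dockerfile is not reproduced here, the following is only a hypothetical sketch of what answers C and E look like in practice:

    # Answer C: slimmed-down base image.
    FROM python:3-alpine

    WORKDIR /app

    # Install dependencies first so this layer is cached between builds.
    COPY requirements.txt .
    RUN pip install -r requirements.txt

    # Answer E: copy the frequently-changing source last, so an edit
    # only invalidates this layer instead of re-installing everything.
    COPY . .

    CMD ["python", "main.py"]

The image tag, paths, and entry point are placeholders; the point is the layer ordering, which lets Docker's build cache skip the dependency install on most deployments.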
QUESTION 103
You are using a single Cloud SQL instance to serve your application from a specific zone. You want to introduce high availability. What should you do?

A. Create a read replica instance in a different region.
B. Create a failover replica instance in a different region.
C. Create a read replica instance in the same region, but in a different zone.
D. Create a failover replica instance in the same region, but in a different zone.

Reference: https://cloud.google.com/sql/docs/mysql/configure-ha

QUESTION 104
A lead engineer wrote a custom tool that deploys virtual machines in the legacy data center. He wants to migrate the custom tool to the new cloud environment. You want to advocate for the adoption of Google Cloud Deployment Manager. What are two business risks of migrating to Cloud Deployment Manager? Choose 2 answers.

A. Cloud Deployment Manager uses Python.
B. Cloud Deployment Manager APIs could be deprecated in the future.
C. Cloud Deployment Manager is unfamiliar to the company's engineers.
D. Cloud Deployment Manager requires a Google APIs service account to run.
E. Cloud Deployment Manager can be used to permanently delete cloud resources.
F. Cloud Deployment Manager only supports automation of Google Cloud resources.

QUESTION 105
Your company wants to start using Google Cloud resources but wants to retain their on-premises Active Directory domain controller for identity management. What should you do?

A. Use the Admin Directory API to authenticate against the Active Directory domain controller.
B. Use Google Cloud Directory Sync to synchronize Active Directory usernames with cloud identities and configure SAML SSO.
C. Use Cloud Identity-Aware Proxy configured to use the on-premises Active Directory domain controller as an identity provider.
D. Use Compute Engine to create an Active Directory (AD) domain controller that is a replica of the on-premises AD domain controller using Google Cloud Directory Sync.

Reference: https://cloud.google.com/blog/products/identity-security/using-your-existing-identity-management-system-with-google-cloud-platform

QUESTION 106
You need to ensure reliability for your application and operations by supporting reliable task scheduling for compute on GCP. Leveraging Google best practices, what should you do?

A. Using the Cron service provided by App Engine, publish messages directly to a message-processing utility service running on Compute Engine instances.
B. Using the Cron service provided by App Engine, publish messages to a Cloud Pub/Sub topic. Subscribe to that topic using a message-processing utility service running on Compute Engine instances.
C. Using the Cron service provided by Google Kubernetes Engine (GKE), publish messages directly to a message-processing utility service running on Compute Engine instances.
D. Using the Cron service provided by GKE, publish messages to a Cloud Pub/Sub topic. Subscribe to that topic using a message-processing utility service running on Compute Engine instances.

QUESTION 107
Auditors visit your teams every 12 months and ask to review all the Google Cloud Identity and Access Management (Cloud IAM) policy changes in the previous 12 months. You want to streamline and expedite the analysis and audit process. What should you do?

A. Create custom Google Stackdriver alerts and send them to the auditor.
B. Enable Logging export to Google BigQuery and use ACLs and views to scope the data shared with the auditor.
C. Use Cloud Functions to transfer log entries to Google Cloud SQL and use ACLs and views to limit an auditor's view.
D. Enable Google Cloud Storage (GCS) log export of audit logs into a GCS bucket and delegate access to the bucket.
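The export in answer B is typically set up as a logging sink. A minimal sketch, with placeholder project, dataset, and sink names; the filter shown is only one way to scope the export to IAM policy changes:

    # Route Admin Activity audit logs covering IAM policy changes into
    # a BigQuery dataset that auditors can be granted read access to.
    gcloud logging sinks create iam-audit-sink \
      bigquery.googleapis.com/projects/my-project/datasets/iam_audit \
      --log-filter='logName:"cloudaudit.googleapis.com%2Factivity" AND protoPayload.methodName:"SetIamPolicy"'

gcloud logging sinks create with a BigQuery destination is standard; after creating the sink, its writer service account still needs write access on the dataset before rows appear.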
QUESTION 108
Case Study 3: JencoMart (the full case study text is reproduced under Question 91 above).

For this question, refer to the JencoMart case study.
JencoMart has built a version of their application on Google Cloud Platform that serves traffic to Asia. You want to measure success against their business and technical goals. Which metrics should you track?

A. Error rates for requests from Asia
B. Latency difference between US and Asia
C. Total visits, error rates, and latency from Asia
D. Total visits and average latency for users in Asia
E. The number of character sets present in the database

Explanation:
From the scenario, the business requirements include "Expand services into Asia" and the technical requirements include "Decrease latency in Asia".

QUESTION 109
To speed up data retrieval, more vehicles will be upgraded to cellular connections and be able to transmit data to the ETL process. The current FTP process is error-prone and restarts the data transfer from the start of the file when connections fail, which happens often. You want to improve the reliability of the solution and minimize data transfer time on the cellular connections. What should you do?

A. Use one Google Container Engine cluster of FTP servers. Save the data to a Multi-Regional bucket. Run the ETL process using data in the bucket.
B. Use multiple Google Container Engine clusters running FTP servers located in different regions. Save the data to Multi-Regional buckets in US, EU, and Asia. Run the ETL process using the data in the bucket.
C. Directly transfer the files to different Google Cloud Multi-Regional Storage bucket locations in US, EU, and Asia using Google APIs over HTTP(S). Run the ETL process using the data in the bucket.
D. Directly transfer the files to a different Google Cloud Regional Storage bucket location in US, EU, and Asia using Google APIs over HTTP(S). Run the ETL process to retrieve the data from each Regional bucket.
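The reliability gain in the direct-to-Cloud-Storage options comes from resumable uploads over HTTP(S), which gsutil uses automatically for larger files. A sketch with a hypothetical bucket and file:

    # Unlike the FTP process, an interrupted gsutil upload resumes from
    # the last committed byte rather than restarting the whole file.
    gsutil cp vehicle-telemetry.csv gs://terramearth-ingest/telemetry/

Resumable uploads are standard Cloud Storage behavior; the bucket name and layout here are only examples.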
Difficulty of the Google Professional Cloud Architect Exam

The Google Professional Cloud Architect certification is one of the most prestigious achievements a candidate can earn and one of the highest-level certifications Google offers. Because the exam is built around real-time scenarios and practical experience, it is difficult to pass without proper preparation material. The questions, answers, and clarifications in ExamsLabs exam dumps are designed to cover the entire course content. ExamsLabs offers Google Professional Cloud Architect exam dumps with the latest and most important questions and answers in PDF format, and stands behind their accuracy and legitimacy. With genuine exam dumps, candidates can pass the Google Professional Cloud Architect exam and earn the certification with confidence. These exam dumps are regarded as an excellent way to understand the certification: work through the sample questions and answers, practice the exam with self-assessment, and you will have a proper idea of the Google accreditation and be ready to ace the certification exam.

Analyzing & Optimizing Business and Technical Processes

Analyze and define business processes: this entails stakeholder management (facilitation and influencing); the decision-making process; change management; skill readiness and team assessment; cost optimization and resource optimization; customer success management; and procedure development to ensure the resilience of solutions in production.

Analyze and define technical processes: this area requires skills in testing and validation; software development lifecycle planning; a troubleshooting and post-mortem analysis culture; continuous deployment and continuous integration; service catalog and provisioning; and disaster recovery and business continuity.

Updated Professional-Cloud-Architect Exam Practice Test Questions:
https://www.examslabs.com/Google/Google-Cloud-Certified/best-Professional-Cloud-Architect-exam-dumps.html