This page was exported from Exams Labs Braindumps [ http://blog.examslabs.com ]. Export date: Sun Nov 24 15:35:26 2024 / +0000 GMT

Title: [Q38-Q60] 1Z0-1084-21 Practice Test Give You First Time Success with 100% Money Back Guarantee!

Overcome All Obstacles During 1Z0-1084-21 Exam Preparation with 1Z0-1084-21 Real Test Questions

Oracle 1Z0-1084-21 Exam Syllabus Topics:

Topic 1: Develop Serverless Applications with Oracle Functions; Develop Microservices and Applications for OKE
Topic 2: Overcome Security Challenges with Cloud Native; Discuss the Meaning of Serverless Computing
Topic 3: Explain Microservices vs. Containers vs. Functions; Use OCI APIs, SDKs and CLI
Topic 4: Developing Cloud Native Applications; Manage Multiple Environments (dev, test/stage, prod)
Topic 5: Configure and Use Secret Management; Operating Cloud Native Applications; Cloud Native Fundamentals
Topic 6: Build, Deploy and Release Applications; Testing Cloud Native Applications
Topic 7: Perform Tasks around Monitoring, Observability, and Alerting; Explain Distributed Computing
Topic 8: Securing Cloud Native Applications; Use the Defense-in-Depth Approach

Q38. A developer using Oracle Cloud Infrastructure (OCI) API Gateway must authenticate the API requests to their web application. The authentication process must be implemented using a custom scheme which accepts string parameters from the API caller. Which method can the developer use in this scenario?

Create an authorizer function using request header authorization.
Create an authorizer function using token-based authorization.
Create a cross-account functions authorizer.
Create an authorizer function using OCI Identity and Access Management based authentication.

Explanation
Using Authorizer Functions to Add Authentication and Authorization to API Deployments: You can control access to APIs you deploy to API gateways using an 'authorizer function' (as described in this topic), or using JWTs (as described in Using JSON Web Tokens (JWTs) to Add Authentication and Authorization to API Deployments). You can add authentication and authorization functionality to API gateways by writing an 'authorizer function' that:
1. Processes request attributes to verify the identity of a caller with an identity provider.
2. Determines the operations that the caller is allowed to perform.
3. Returns the operations the caller is allowed to perform as a list of 'access scopes' (an 'access scope' is an arbitrary string used to determine access). Optionally returns a key-value pair for use by the API deployment, for example as a context variable for use in an HTTP back end definition (see Adding Context Variables to Policies and HTTP Back End Definitions).
Create an authorizer function using request header authorization, implemented using a custom scheme which accepts string parameters from the API caller.

Managing Input Parameters
In our case we will need to manage quite a few static parameters in our code, for example the URLs of the secrets service endpoints, the username and other constant parameterised data. We can manage these either at Application or Function level (an OCI Function is packaged in an Application which can contain multiple Functions). In this case I will create function-level parameters.
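To make the contract concrete, here is a hedged sketch of the JSON decision such an authorizer function hands back to the gateway. The field names (active, principal, scope, expiresAt) follow the API Gateway authorizer documentation as best recalled; all values are purely illustrative:

```shell
# Hedged sketch: the response an authorizer function returns to API Gateway.
# Field names as recalled from the OCI docs; all values are illustrative.
cat > /tmp/authorizer-response.json <<'EOF'
{
  "active": true,
  "principal": "https://example-idp/users/jdoe",
  "scope": ["list.pets", "read.pets"],
  "expiresAt": "2027-05-30T10:15:30+01:00"
}
EOF
# Sanity-check that the sketch is well-formed JSON.
python3 -m json.tool < /tmp/authorizer-response.json > /dev/null && echo "valid JSON"
```

As I recall the docs, when active is false the gateway rejects the call, and the scope list is what route-level authorization is checked against.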
You can use the following command to create the parameters:

fn config function test idcs-assert idcsClientId aedc15531bc8xxxxxxxxxxbd8a193

References:
https://technology.amis.nl/2020/01/03/oracle-cloud-api-gateway-using-an-authorizer-function-for-client-secret-a
https://docs.cloud.oracle.com/en-us/iaas/Content/APIGateway/Tasks/apigatewayusingauthorizerfunction.htm
https://www.ateam-oracle.com/how-to-implement-an-oci-api-gateway-authorization-fn-in-nodejs-that-accesses-o

Q39. You are deploying an API via Oracle Cloud Infrastructure (OCI) API Gateway and you want to implement request policies to control access. Which is NOT available in OCI API Gateway?

Limiting the number of requests sent to backend services
Enabling CORS (Cross-Origin Resource Sharing) support
Providing authentication and authorization
Controlling access to OCI resources

Explanation
Adding Request Policies and Response Policies to API Deployment Specifications: You can control the behavior of an API deployment you create on an API gateway by adding request and response policies to the API deployment specification:
– a request policy describes actions to be performed on an incoming request from a caller before it is sent to a back end
– a response policy describes actions to be performed on a response returned from a back end before it is sent to a caller
You can use request policies to:
– limit the number of requests sent to back-end services
– enable CORS (Cross-Origin Resource Sharing) support
– provide authentication and authorization
You can add request and response policies that apply globally to all routes in an API deployment specification, and also (in some cases) request and response policies that apply only to particular routes. Note the following:
– No response policies are currently available.
– API Gateway request policies and response policies are different to IAM policies, which control access to Oracle Cloud Infrastructure resources.
You can add request and response policies to an API deployment specification
by using the Console, or by editing a JSON file.

References:
https://docs.cloud.oracle.com/en-us/iaas/Content/APIGateway/Tasks/apigatewayaddingrequestpolicies.htm

Q40. Which two are characteristics of microservices?

Microservices are hard to test in isolation.
Microservices can be independently deployed.
All microservices share a data store.
Microservices can be implemented in a limited number of programming languages.
Microservices communicate over lightweight APIs.

Explanation
Learn About the Microservices Architecture
If you want to design an application that is multilanguage, easily scalable, easy to maintain and deploy, highly available, and that minimizes failures, then use the microservices architecture to design and deploy a cloud application. In a microservices architecture, each microservice owns a simple task, and communicates with the clients or with other microservices by using lightweight communication mechanisms such as REST API requests. The following diagram shows the architecture of an application that consists of multiple microservices. Microservices enable you to design your application as a collection of loosely coupled services. Microservices follow the share-nothing model, and run as stateless processes. This approach makes it easier to scale and maintain the application. The API layer is the entry point for all the client requests to a microservice.
The API layer also enables the microservices to communicate with each other over HTTP, gRPC, and TCP/UDP. The logic layer focuses on a single business task, minimizing the dependencies on the other microservices. This layer can be written in a different language for each microservice. The data store layer provides a persistence mechanism, such as a database storage engine, log files, and so on. Consider using a separate persistent data store for each microservice. Typically, each microservice runs in a container that provides a lightweight runtime environment. Loosely coupled with other services – enables a team to work independently the majority of time on their service(s) without being impacted by changes to other services and without affecting other services.

References:
https://docs.oracle.com/en/solutions/learn-architect-microservice/index.html
https://microservices.io/patterns/microservices.html
https://www.techjini.com/blog/microservices/

Q41. You are developing a distributed application and you need a call to a path to always return a specific JSON content. You deploy an Oracle Cloud Infrastructure API Gateway with the below API deployment specification. What is the correct value for type?

STOCK_RESPONSE_BACKEND
CONSTANT_BACKEND
JSON_BACKEND
HTTP_BACKEND

Explanation
Adding Stock Responses as an API Gateway Back End: You'll often want to verify that an API has been successfully deployed on an API gateway without having to set up an actual back-end service. One approach is to define a route in the API deployment specification that has a path to a 'dummy' back end. On receiving a request to that path, the API gateway itself acts as the back end and returns a stock response you've specified. Equally, there are some situations in a production deployment where you'll want a particular path for a route to consistently return the same stock response without sending a request to a back end.
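As a hedged illustration of that idea, a route backed by a stock response might look like the following. The field names follow the API Gateway deployment specification as best recalled; the path, status, and body are invented for the example:

```shell
# Hedged sketch: a deployment-spec route whose back end is a stock response.
# Field names as recalled from the OCI docs; path/status/body are illustrative.
cat > /tmp/stock-route.json <<'EOF'
{
  "routes": [{
    "path": "/health",
    "methods": ["GET"],
    "backend": {
      "type": "STOCK_RESPONSE_BACKEND",
      "status": 200,
      "headers": [{"name": "Content-Type", "value": "application/json"}],
      "body": "{\"status\": \"UP\"}"
    }
  }]
}
EOF
# Sanity-check that the sketch is well-formed JSON.
python3 -m json.tool < /tmp/stock-route.json > /dev/null && echo "valid JSON"
```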
For example, when you want a call to a path to always return a specific HTTP status code in the response. Using the API Gateway service, you can define a path to a stock response back end that always returns the same:
– HTTP status code
– HTTP header fields (name-value pairs)
– content in the body of the response
"type": "STOCK_RESPONSE_BACKEND" indicates that the API gateway itself will act as the back end and return the stock response you define (the status code, the header fields and the body content).

References:
https://docs.cloud.oracle.com/en-us/iaas/Content/APIGateway/Tasks/apigatewayaddingstockresponses.htm

Q42. In order to effectively test your cloud-native applications, you might utilize separate environments (development, testing, staging, production, etc.). Which Oracle Cloud Infrastructure (OCI) service can you use to create and manage your infrastructure?

OCI Compute
OCI Container Engine for Kubernetes
OCI Resource Manager
OCI API Gateway

Explanation
Resource Manager is an Oracle Cloud Infrastructure service that allows you to automate the process of provisioning your Oracle Cloud Infrastructure resources. Using Terraform, Resource Manager helps you install, configure, and manage resources through the "infrastructure-as-code" model.

References:
https://docs.cloud.oracle.com/iaas/Content/ResourceManager/Concepts/resourcemanager.htm

Q43. Who is responsible for patching, upgrading and maintaining the worker nodes in Oracle Cloud Infrastructure Container Engine for Kubernetes (OKE)?

It is automated
Independent Software Vendors
Oracle Support
The user

Explanation
After a new version of Kubernetes has been released and when Container Engine for Kubernetes supports the new version, you can use Container Engine for Kubernetes to upgrade master nodes running older versions of Kubernetes.
Because Container Engine for Kubernetes distributes the Kubernetes Control Plane on multiple Oracle-managed master nodes (distributed across different availability domains in a region where supported) to ensure high availability, you're able to upgrade the Kubernetes version running on master nodes with zero downtime. Having upgraded master nodes to a new version of Kubernetes, you can subsequently create new node pools running the newer version. Alternatively, you can continue to create new node pools that will run older versions of Kubernetes (providing those older versions are compatible with the Kubernetes version running on the master nodes). Note that you upgrade master nodes by performing an 'in-place' upgrade, but you upgrade worker nodes by performing an 'out-of-place' upgrade. To upgrade the version of Kubernetes running on worker nodes in a node pool, you replace the original node pool with a new node pool that has new worker nodes running the appropriate Kubernetes version. Having 'drained' existing worker nodes in the original node pool to prevent new pods starting and to delete existing pods, you can then delete the original node pool.

Upgrading the Kubernetes Version on Worker Nodes in a Cluster:
You can upgrade the version of Kubernetes running on the worker nodes in a cluster in two ways:
(A) Perform an 'in-place' upgrade of a node pool in the cluster, by specifying a more recent Kubernetes version for new worker nodes starting in the existing node pool. First, you modify the existing node pool's properties to specify the more recent Kubernetes version. Then, you 'drain' existing worker nodes in the node pool to prevent new pods starting, and to delete existing pods. Finally, you terminate each of the worker nodes in turn. When new worker nodes are started in the existing node pool, they run the more recent Kubernetes version you specified. See Performing an In-Place Worker Node Upgrade by Updating an Existing Node Pool.
(B) Perform an 'out-of-place' upgrade of a node pool in the cluster, by replacing the original node pool with a new node pool. First, you create a new node pool with a more recent Kubernetes version. Then, you 'drain' existing worker nodes in the original node pool to prevent new pods starting, and to delete existing pods. Finally, you delete the original node pool. When new worker nodes are started in the new node pool, they run the more recent Kubernetes version you specified. See Performing an Out-of-Place Worker Node Upgrade by Replacing an Existing Node Pool with a New Node Pool.
Note that in both cases:
– The more recent Kubernetes version you specify for the worker nodes in the node pool must be compatible with the Kubernetes version running on the master nodes in the cluster (see Upgrading Clusters to Newer Kubernetes Versions).
– You must drain existing worker nodes in the original node pool.
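The drain step can be scripted. A minimal sketch that only prints the kubectl commands for review (node names are placeholders; running the printed commands requires a configured kubectl, which this sketch does not assume):

```shell
# Hedged sketch: emit the cordon/drain commands for each worker node in the
# original node pool. Prints the plan only; pipe to 'sh' after review.
drain_plan() {
  for node in "$@"; do
    echo "kubectl cordon ${node}"
    echo "kubectl drain ${node} --ignore-daemonsets --delete-emptydir-data"
  done
}
drain_plan oke-worker-0 oke-worker-1 | tee /tmp/drain-plan.txt
```

Cordoning first stops new pods being scheduled; draining then evicts the existing pods so the node can be terminated safely.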
If you don't drain the worker nodes, workloads running on the cluster are subject to disruption.

References:
https://docs.cloud.oracle.com/en-us/iaas/Content/ContEng/Tasks/contengupgradingk8sworkernode.htm

Q44. Which Oracle Cloud Infrastructure (OCI) load balancer shape is used by default in OCI Container Engine for Kubernetes?

400 Mbps
8000 Mbps
There is no default. The shape has to be specified.
100 Mbps

Explanation
Specifying Alternative Load Balancer Shapes
The shape of an Oracle Cloud Infrastructure load balancer specifies its maximum total bandwidth (that is, ingress plus egress). By default, load balancers are created with a shape of 100 Mbps. Other shapes are available, including 400 Mbps and 8000 Mbps.
SHAPE: A template that determines the load balancer's total pre-provisioned maximum capacity (bandwidth) for ingress plus egress traffic. Available shapes include 10 Mbps, 100 Mbps, 400 Mbps, and 8000 Mbps.

References:
https://docs.cloud.oracle.com/en-us/iaas/Content/ContEng/Tasks/contengcreatingloadbalancer.htm
https://docs.cloud.oracle.com/en-us/iaas/Content/Balance/Concepts/balanceoverview.htm

Q45. What is the open source engine for Oracle Functions?

Apache OpenWhisk
OpenFaaS
Fn Project
Knative

Explanation
https://www.oracle.com/webfolder/technetwork/tutorials/FAQs/oci/Functions-FAQ.pdf
Oracle Functions is a fully managed, multi-tenant, highly scalable, on-demand, Functions-as-a-Service platform. It is built on enterprise-grade Oracle Cloud Infrastructure and powered by the Fn Project open source engine. Use Oracle Functions (sometimes abbreviated to just Functions) when you want to focus on writing code to meet business needs.

Q46. Which one of the following is NOT a valid backend-type supported by Oracle Cloud Infrastructure (OCI) API Gateway?
STOCK_RESPONSE_BACKEND
ORACLE_FUNCTIONS_BACKEND
ORACLE_STREAMS_BACKEND
HTTP_BACKEND

Explanation
In the API Gateway service, a back end is the means by which a gateway routes requests to the back-end services that implement APIs. If you add a private endpoint back end to an API gateway, you give the API gateway access to the VCN associated with that private endpoint. You can also grant an API gateway access to other Oracle Cloud Infrastructure services as back ends. For example, you could grant an API gateway access to Oracle Functions, so you can create and deploy an API that is backed by a serverless function.
Using the API Gateway service to create an API gateway, you can create an API deployment to access HTTP and HTTPS URLs: https://docs.cloud.oracle.com/en-us/iaas/Content/APIGateway/Tasks/apigatewayusinghttpbackend.htm
Using the API Gateway service to create an API gateway, you can create an API deployment that invokes serverless functions defined in Oracle Functions: https://docs.cloud.oracle.com/en-us/iaas/Content/APIGateway/Tasks/apigatewayusingfunctionsbackend.htm
Using the API Gateway service, you can define a path to a stock response back end: https://docs.cloud.oracle.com/en-us/iaas/Content/APIGateway/Tasks/apigatewayaddingstockresponses.htm

Q47. As a cloud-native developer, you have written a web service for your company. You have used the Oracle Cloud Infrastructure (OCI) API Gateway service to expose the HTTP backend. However, your security team has suggested that your web service should handle Distributed Denial-of-Service (DDoS) attacks. You are time-constrained and you need to make sure that this is implemented as soon as possible. What should you do in this scenario?

Use OCI virtual cloud network (VCN) segregation to control DDoS.
Use a third-party service integration to implement a DDoS attack mitigation.
Use OCI API Gateway service and configure rate limiting.
Re-write your web service and implement rate limiting.
Explanation
Having created an API gateway and deployed one or more APIs on it, you'll typically want to limit the rate at which front-end clients can make requests to back-end services. For example, to:
– maintain high availability and fair use of resources by protecting back ends from being overwhelmed by too many requests
– prevent denial-of-service attacks
– constrain costs of resource consumption
– restrict usage of APIs by your customers' users in order to monetize APIs
You apply a rate limit globally to all routes in an API deployment specification. If a request is denied because the rate limit has been exceeded, the response header specifies when the request can be retried. You can add a rate-limiting request policy to an API deployment specification by using the Console, or by editing a JSON file:

{
  "requestPolicies": {
    "rateLimiting": {
      "rateKey": "CLIENT_IP",
      "rateInRequestsPerSecond": 10
    }
  },
  "routes": [{
    "path": "/hello",
    "methods": ["GET"],
    "backend": {
      "type": "ORACLE_FUNCTIONS_BACKEND",
      "functionId": "ocid1.fnfunc.oc1.phx.aaaaaaaaab______xmq"
    }
  }]
}

https://docs.cloud.oracle.com/en-us/iaas/Content/APIGateway/Tasks/apigatewaylimitingbackendaccess.htm

Q48. Which concept is NOT related to Oracle Cloud Infrastructure Resource Manager?

Job
Stack
Queue
Plan

Explanation
https://docs.cloud.oracle.com/en-us/iaas/Content/ResourceManager/Concepts/resourcemanager.htm
Following are brief descriptions of key concepts and the main components of Resource Manager.
CONFIGURATION: Information to codify your infrastructure. A Terraform configuration can be either a solution or a file that you write and upload.
JOB: Instructions to perform the actions defined in your configuration. Only one job at a time can run on a given stack; further, you can have only one set of Oracle Cloud Infrastructure resources on a given stack.
To provision a different set of resources, you must create a separate stack and use a different configuration. Resource Manager provides the following job types:
– Plan: Parses your Terraform configuration and creates an execution plan for the associated stack. The execution plan lists the sequence of specific actions planned to provision your Oracle Cloud Infrastructure resources. The execution plan is handed off to the apply job, which then executes the instructions.
– Apply: Applies the execution plan to the associated stack to create (or modify) your Oracle Cloud Infrastructure resources. Depending on the number and type of resources specified, a given apply job can take some time. You can check status while the job runs.
– Destroy: Releases resources associated with a stack. Released resources are not deleted. For example, a destroy job terminates a Compute instance controlled by a stack. The stack's job history and state remain after running a destroy job. You can monitor the status and review the results of a destroy job by inspecting the stack's log files.
– Import State: Sets the provided Terraform state file as the current state of the stack. Use this job to migrate local Terraform environments to Resource Manager.
STACK: The collection of Oracle Cloud Infrastructure resources corresponding to a given Terraform configuration. Each stack resides in the compartment you specify, in a single region; however, resources on a given stack can be deployed across multiple regions.
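For context, the 'configuration' a stack wraps is ordinary Terraform. A minimal sketch of the kind of file you would zip up and upload to a stack (the resource type and attributes follow the OCI Terraform provider as best recalled; the compartment OCID is a placeholder you would supply):

```shell
# Hedged sketch: write a minimal Terraform configuration suitable for a
# Resource Manager stack. 'oci_core_vcn' is an OCI-provider resource type;
# all values are illustrative placeholders.
cat > /tmp/main.tf <<'EOF'
variable "compartment_ocid" {}

resource "oci_core_vcn" "demo" {
  compartment_id = var.compartment_ocid
  cidr_block     = "10.0.0.0/16"
  display_name   = "demo-vcn"
}
EOF
grep -c 'oci_core_vcn' /tmp/main.tf
```

Running a plan job against a stack containing this file would list one VCN to create; the apply job would then provision it.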
An OCID is assigned to each stack.
The following steps reference Console instructions:
1. Create a Terraform configuration.
2. Create a stack.
3. Run a plan job, which produces an execution plan.
4. Review the execution plan.
5. If changes are needed in the execution plan, update the configuration and run a plan job again.
6. Run an apply job to provision resources.
7. Review state file and log files, as needed. You can optionally reapply your configuration, with or without making changes, by running an apply job again.
8. Optionally, to release the resources running on a stack, run a destroy job.

Q49. You are building a container image and pushing it to the Oracle Cloud Infrastructure Registry (OCIR). You need to make sure that these images get deleted from the repository. Which action should you take?

Create a group and assign a policy to perform lifecycle operations on images.
Set global policy of image retention to "Retain All Images".
In your compartment, write a policy to limit access to the specific repository.
Edit the tenancy global retention policy.

Explanation
Deleting an Image
When you no longer need an old image or you simply want to clean up the list of image tags in a repository, you can delete images from Oracle Cloud Infrastructure Registry. Your permissions control the images in Oracle Cloud Infrastructure Registry that you can delete. You can delete images from repositories you've created, and from repositories that the groups to which you belong have been granted access to by identity policies. If you belong to the Administrators group, you can delete images from any repository in the tenancy. Note that as well as deleting individual images, you can set up image retention policies to delete images automatically based on selection criteria you specify (see Retaining and Deleting Images Using Retention Policies).
Note: In each region in a tenancy, there's a global image retention policy.
The global image retention policy's default selection criteria retain all images so that no images are automatically deleted. However, you can change the global image retention policy so that images are deleted if they meet the criteria you specify. A region's global image retention policy applies to all repositories in the region, unless it is explicitly overridden by one or more custom image retention policies. You can set up custom image retention policies to override the global image retention policy with different criteria for specific repositories in a region. Having created a custom image retention policy, you apply the custom retention policy to a repository by adding the repository to the policy. The global image retention policy no longer applies to repositories that you add to a custom retention policy.
https://docs.cloud.oracle.com/en-us/iaas/Content/Registry/Tasks/registrymanagingimageretention.htm

Q50. You have created a repository in Oracle Cloud Infrastructure Registry in the us-ashburn-1 (iad) region in your tenancy, with a namespace called "heyoci". Which three are valid tags for an image named "myapp"?

iad.ocir.io/heyoci/myproject/myapp:0.0.1
us-ashburn-1.ocir.io/heyoci/myapp:0.0.2-beta
us-ashburn-1.ocir.io/heyoci/myproject/myapp:0.0.2-beta
us-ashburn-1.ocir.io/myproject/heyoci/myapp:latest
iad.ocir.io/myproject/heyoci/myapp:latest
iad.ocir.io/heyoci/myapp:0.0.2-beta
iad.ocir.io/heyoci/myapp:latest

Explanation
Give a tag to the image that you're going to push to Oracle Cloud Infrastructure Registry by entering:

docker tag <image-identifier> <target-tag>

where:
<image-identifier> uniquely identifies the image, either using the image's id (for example, 8e0506e14874), or the image's name and tag separated by a colon (for example, acme-web-app:latest).
<target-tag> is in the format <region-key>.ocir.io/<tenancy-namespace>/<repo-name>/<image-name>:<tag> where:
<region-key> is the key for the Oracle Cloud Infrastructure Registry region you're using.
For example, iad. See Availability by Region.
ocir.io is the Oracle Cloud Infrastructure Registry name.
<tenancy-namespace> is the auto-generated Object Storage namespace string of the tenancy that owns the repository to which you want to push the image (as shown on the Tenancy Information page). For example, the namespace of the acme-dev tenancy might be ansh81vru1zp. Note that for some older tenancies, the namespace string might be the same as the tenancy name in all lower-case letters (for example, acme-dev). Note also that your user must have access to the tenancy.
<repo-name> (if specified) is the name of a repository to which you want to push the image (for example, project01). Note that specifying a repository is optional (see About Repositories).
<image-name> is the name you want to give the image in Oracle Cloud Infrastructure Registry (for example, acme-web-app).
<tag> is an image tag you want to give the image in Oracle Cloud Infrastructure Registry (for example, version2.0.test).
For example, for convenience you might want to group together multiple versions of the acme-web-app image in the acme-dev tenancy in the Ashburn region into a repository called project01. You do this by including the name of the repository in the image name when you push the image, in the format <region-key>.ocir.io/<tenancy-namespace>/<repo-name>/<image-name>:<tag>. For example, iad.ocir.io/ansh81vru1zp/project01/acme-web-app:4.6.3. Subsequently, when you use the docker push command, the presence of the repository in the image's name ensures the image is pushed to the intended repository. If you push an image and include the name of a repository that doesn't already exist, a new private repository is created automatically.
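Putting the <region-key>.ocir.io/<tenancy-namespace>/<repo-name>/<image-name>:<tag> format together, a small sketch (values reuse the acme-dev examples above; the docker commands in the comments assume Docker and a prior docker login):

```shell
# Build an OCIR target tag from its components (values are illustrative,
# reusing the acme-dev examples above).
REGION_KEY="iad"
TENANCY_NAMESPACE="ansh81vru1zp"
REPO_NAME="project01"
IMAGE_NAME="acme-web-app"
TAG="4.6.3"
TARGET_TAG="${REGION_KEY}.ocir.io/${TENANCY_NAMESPACE}/${REPO_NAME}/${IMAGE_NAME}:${TAG}"
echo "${TARGET_TAG}" | tee /tmp/ocir-tag.txt
# prints: iad.ocir.io/ansh81vru1zp/project01/acme-web-app:4.6.3
# Then, with Docker installed and after 'docker login iad.ocir.io':
#   docker tag acme-web-app:latest "${TARGET_TAG}"
#   docker push "${TARGET_TAG}"
```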
For example, if you enter a command like docker push iad.ocir.io/ansh81vru1zp/project02/acme-web-app:7.5.2 and the project02 repository doesn't exist, a private repository called project02 is created automatically. If you push an image and don't include a repository name, the image's name is used as the name of the repository. For example, if you enter a command like docker push iad.ocir.io/ansh81vru1zp/acme-web-app:7.5.2 that doesn't contain a repository name, the image's name (acme-web-app) is used as the name of a private repository.
https://docs.cloud.oracle.com/en-us/iaas/Content/Registry/Concepts/registrywhatisarepository.htm

Q51. A programmer is developing a Node.js application which will run in a Linux server on their on-premises data center. This application will access various Oracle Cloud Infrastructure (OCI) services using OCI SDKs. What is the secure way to access OCI services with OCI Identity and Access Management (IAM)?

Create a new OCI IAM user associated with a dynamic group and a policy that grants the desired permissions to OCI services. Add the on-premises Linux server to the dynamic group.
Create an OCI IAM policy with the appropriate permissions to access the required OCI services and assign the policy to the on-premises Linux server.
Create a new OCI IAM user, add the user to a group associated with a policy that grants the desired permissions to OCI services. In the on-premises Linux server, generate the keypair used for signing API requests and upload the public key to the IAM user.
Create a new OCI IAM user, add the user to a group associated with a policy that grants the desired permissions to OCI services. In the on-premises Linux server, add the user name and password to a file used by Node.js authentication.
Explanation
Set up an Oracle Cloud Infrastructure API Signing Key for Use with Oracle Functions: Before using Oracle Functions, you have to set up an Oracle Cloud Infrastructure API signing key. The instructions in this topic assume you are using Linux. For more information and other options, see Required Keys and OCIDs. The instructions below describe how to create a new ~/.oci directory, how to generate a new private key file and public key file in that ~/.oci directory, how to upload the public key to Oracle Cloud Infrastructure to create a new API signing key, and how to obtain a fingerprint for the public API key. Be aware that instructions and examples elsewhere in this documentation assume the ~/.oci directory exists and contains the private and public key files.
To set up an API signing key:
1. Log in to your development environment as a functions developer.
2. In a terminal window, confirm that the ~/.oci directory does not already exist, for example by entering: ls ~/.oci
3. Assuming the ~/.oci directory does not already exist, create it, for example by entering: mkdir ~/.oci
4. Generate a private key encrypted with a passphrase that you provide by entering:
$ openssl genrsa -out ~/.oci/<private-key-file-name>.pem -aes128 2048
where <private-key-file-name> is a name of your choice for the private key file (for example, john_api_key_private.pem). For example:
$ openssl genrsa -out ~/.oci/john_api_key_private.pem -aes128 2048
Generating RSA private key, 2048 bit long modulus
++++++
e is 65537 (0x10001)
Enter pass phrase for /Users/johndoe/.oci/john_api_key_private.pem:

References:
https://docs.cloud.oracle.com/en-us/iaas/Content/Functions/Tasks/functionssetupapikey.htm

Q52. Which two are benefits of distributed systems?
Privacy
Security
Ease of testing
Scalability
Resiliency

Explanation
Distributed systems of the cloud-native kind, such as functions, have many benefits, including resiliency and availability. Resiliency and availability refer to the ability of a system to continue operating despite the failure or sub-optimal performance of some of its components. In the case of Oracle Functions:
– The control plane is a set of components that manages function definitions.
– The data plane is a set of components that executes functions in response to invocation requests.
For resiliency and high availability, both the control plane and data plane components are distributed across different availability domains and fault domains in a region. If one of the domains ceases to be available, the components in the remaining domains take over to ensure that function definition management and execution are not disrupted. When functions are invoked, they run in the subnets specified for the application to which the functions belong. For resiliency and high availability, best practice is to specify a regional subnet for an application (or alternatively, multiple AD-specific subnets in different availability domains). If an availability domain specified for an application ceases to be available, Oracle Functions runs functions in an alternative availability domain.
Concurrency and Scalability
Concurrency refers to the ability of a system to run multiple operations in parallel using shared resources. Scalability refers to the ability of the system to scale capacity (both up and down) to meet demand. In the case of Functions, when a function is invoked for the first time, the function's image is run as a container on an instance in a subnet associated with the application to which the function belongs.
When the function is executing inside the container, the function can read from and write to other shared resources and services running in the same subnet (for example, Database as a Service). The function can also read from and write to other shared resources (for example, Object Storage), and other Oracle Cloud Services. If Oracle Functions receives multiple calls to a function that is currently executing inside a running container, Oracle Functions automatically and seamlessly scales horizontally to serve all the incoming requests. Oracle Functions starts multiple Docker containers, up to the limit specified for your tenancy. The default limit is 30 GB of RAM reserved for function execution per availability domain, although you can request an increase to this limit. Provided the limit is not exceeded, there is no difference in response time (latency) between functions executing on the different containers.

Q53. Which is NOT a supported SDK on Oracle Cloud Infrastructure (OCI)?

Ruby SDK
Java SDK
Python SDK
Go SDK
.NET SDK

Explanation
https://docs.cloud.oracle.com/en-us/iaas/Content/API/Concepts/sdks.htm

Q54. You have deployed a Python application on Oracle Cloud Infrastructure Container Engine for Kubernetes. However, during testing you found a bug that you rectified and created a new Docker image. You need to make sure that if this new image doesn't work then you can roll back to the previous version. Using kubectl, which deployment strategy should you choose?

Rolling Update
Canary Deployment
Blue/Green Deployment
A/B Testing

Explanation
Using Blue-Green Deployment to Reduce Downtime and Risk:
> Blue-green deployment is a technique that reduces downtime and risk by running two identical production environments called Blue and Green. At any time, only one of the environments is live, with the live environment serving all production traffic. For this example, Blue is currently live and Green is idle. This technique can eliminate downtime due to app deployment.
In addition, blue-green deployment reduces risk: if something unexpected happens with your new version on Green, you can immediately roll back to the last version by switching back to Blue.
Canary deployments are a pattern for rolling out releases to a subset of users or servers. The idea is to first deploy the change to a small subset of servers, test it, and then roll the change out to the rest of the servers. The canary deployment serves as an early warning indicator with less impact on downtime: if the canary deployment fails, the rest of the servers aren't impacted.
A/B testing is a way to compare two versions of a single variable, typically by testing a subject's response to variant A against variant B, and determining which of the two variants is more effective.
Rolling update offers a way to deploy the new version of your application gradually across your cluster.
References: https://docs.cloudfoundry.org/devguide/deploy-apps/blue-green.html
Q55. A leading insurance firm is hosting its customer portal in Oracle Cloud Infrastructure (OCI) Container Engine for Kubernetes with an OCI Autonomous Database. Their support team discovered a lot of SQL injection attempts and cross-site scripting attacks to the portal, which is starting to affect the production environment. What should they implement to mitigate this attack?  Network Security Lists  Network Security Groups  Network Security Firewall  Web Application Firewall Explanation
Web Application Firewall (WAF):
Oracle Cloud Infrastructure Web Application Firewall (WAF) is a cloud-based, Payment Card Industry (PCI) compliant, global security service that protects applications from malicious and unwanted internet traffic. WAF can protect any internet-facing endpoint, providing consistent rule enforcement across a customer's applications. WAF provides you with the ability to create and manage rules for internet threats including Cross-Site Scripting (XSS), SQL Injection and other OWASP-defined vulnerabilities.
Unwanted bots can be mitigated while tactically allowing desirable bots to enter. Access rules can limit requests based on geography or the signature of the request. The global Security Operations Center (SOC) will continually monitor the internet threat landscape, acting as an extension of your IT infrastructure.
References: https://docs.cloud.oracle.com/en-us/iaas/Content/WAF/Concepts/overview.htm
Q56. Which two statements are true for serverless computing and serverless architectures?  Long running tasks are perfectly suited for serverless  Serverless function state should never be stored externally  Application DevOps team is responsible for scaling  Serverless function execution is fully managed by a third party  Applications running on a FaaS (Functions as a Service) platform Explanation
Oracle Functions is a fully managed, multi-tenant, highly scalable, on-demand, Functions-as-a-Service platform. It is built on enterprise-grade Oracle Cloud Infrastructure and powered by the Fn Project open source engine. Use Oracle Functions (sometimes abbreviated to just Functions) when you want to focus on writing code to meet business needs. The serverless and elastic architecture of Oracle Functions means there's no infrastructure administration or software administration for you to perform. You don't provision or maintain compute instances, and operating system software patches and upgrades are applied automatically. Oracle Functions simply ensures your app is highly available, scalable, secure, and monitored. Applications built with a serverless infrastructure will scale automatically as the user base grows or usage increases. If a function needs to be run in multiple instances, the vendor's servers will start up, run, and end them as they are needed. Oracle Functions is based on Fn Project. Fn Project is an open source, container native, serverless platform that can be run anywhere – any cloud or on-premises. Serverless architectures are not built for long-running processes.
This limits the kinds of applications that can cost-effectively run in a serverless architecture. Because serverless providers charge for the amount of time code is running, it may cost more to run an application with long-running processes in a serverless infrastructure compared to a traditional one.
https://docs.cloud.oracle.com/en-us/iaas/Content/Functions/Concepts/functionsconcepts.htm
https://www.cloudflare.com/learning/serverless/why-use-serverless/
Q57. You have written a Node.js function and deployed it to Oracle Functions. Next, you need to call this function from a microservice written in Java deployed on Oracle Cloud Infrastructure (OCI) Container Engine for Kubernetes (OKE). Which can help you to achieve this?  Use the OCI CLI with kubectl to invoke the function from the microservice.  Oracle Functions does not allow a microservice deployed on OKE to invoke a function.  OKE does not allow a microservice to invoke a function from Oracle Functions.  Use the OCI Java SDK to invoke the function from the microservice. Explanation
Invoking Functions
You can invoke a function that you've deployed to Oracle Functions in different ways:
1. Using the Fn Project CLI.
2. Using the Oracle Cloud Infrastructure CLI.
3. Using the Oracle Cloud Infrastructure SDKs.
4. Making a signed HTTP request to the function's invoke endpoint. Every function has an invoke endpoint.
Using the Fn Project CLI to Invoke Functions
To invoke a function deployed to Oracle Functions using the Fn Project CLI, log in to your development environment as a functions developer. In a terminal window, enter:
$ fn invoke <app-name> <function-name>
Using SDKs to Invoke Functions:
If you're writing a program to invoke a function in a language for which an Oracle Cloud Infrastructure SDK exists, Oracle recommends you use that SDK to send API requests to invoke the function.
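To make option 4 above concrete, the sketch below builds (but does not send) an HTTP request to a function's invoke endpoint using only the Python standard library. The endpoint URL, function OCID, and payload are illustrative placeholders, not values from the source; note that a real request must also carry an OCI request signature in the Authorization header, which the OCI SDKs add automatically — one reason the SDK route is the recommended answer.

```python
import json
import urllib.request

# Placeholder values: the real invoke endpoint and function OCID come from
# the OCI Console or the Fn CLI, not from this example.
invoke_endpoint = "https://abc123.us-phoenix-1.functions.oci.oraclecloud.com"
function_ocid = "ocid1.fnfunc.oc1.phx.exampleuniqueid"
url = f"{invoke_endpoint}/20181201/functions/{function_ocid}/actions/invoke"

# Functions receive the raw request body; here we POST a small JSON payload.
payload = json.dumps({"name": "traveller"}).encode("utf-8")
request = urllib.request.Request(
    url,
    data=payload,
    method="POST",
    headers={"Content-Type": "application/json"},
)

# NOTE: sending this request as-is would be rejected. OCI requires every API
# request to be signed; the OCI SDKs (Java, Python, Go, ...) compute that
# signature for you, so a microservice on OKE would normally use an SDK.
print(request.get_method(), request.full_url)
```

The Java microservice in the question would do the equivalent with the OCI Java SDK, which wraps both the endpoint handling and the request signing.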
Among other things, the SDK will facilitate Oracle Cloud Infrastructure authentication.
References: https://docs.cloud.oracle.com/en-us/iaas/Content/Functions/Tasks/functionsinvokingfunctions.htm
Q58. You are building a cloud native, serverless travel application with multiple Oracle Functions in Java, Python and Node.js. You need to build and deploy these functions to a single application named travel-app. Which command will help you complete this task successfully?  oci fn function deploy --app travel-app --all  fn -v deploy --app travel-app --all  oci fn application --application-name travel-app deploy --all  fn function deploy --all --application-name travel-app Explanation
To get started with Oracle Functions:
Creating, Deploying, and Invoking a Helloworld Function
Step 6 - Change directory to the newly created helloworld-func directory.
Step 7 - Enter the following single Fn Project command to build the function and its dependencies as a Docker image called helloworld-func, push the image to the specified Docker registry, and deploy the function to Oracle Functions in the helloworld-app:
$ fn -v deploy --app helloworld-app
The -v option simply shows more detail about what Fn Project commands are doing (see Using the Fn Project CLI with Oracle Functions).
References: https://docs.cloud.oracle.com/en-us/iaas/Content/Functions/Tasks/functionscreatingfirst.htm
Q59. In a Linux environment, what is the default location of the configuration file that the Oracle Cloud Infrastructure CLI uses for profile information?  /etc/.oci/config  /usr/local/bin/config  $HOME/.oci/config  /usr/bin/oci/config Explanation
By default, the Oracle Cloud Infrastructure CLI configuration file is located at ~/.oci/config. You might already have a configuration file as a result of installing the Oracle Cloud Infrastructure CLI.
Q60. Which pattern can help you minimize the probability of cascading failures in your system during partial loss of connectivity or a complete service failure?
Retry pattern  Anti-corruption layer pattern  Circuit breaker pattern  Compensating transaction pattern Explanation
A cascading failure is a failure that grows over time as a result of positive feedback. It can occur when a portion of an overall system fails, increasing the probability that other portions of the system fail. The circuit breaker pattern prevents the service from performing an operation that is likely to fail. For example, a client service can use a circuit breaker to prevent further remote calls over the network when a downstream service is not functioning properly. This can also prevent the network from becoming congested by a sudden spike in failed retries by one service to another, and it can prevent cascading failures. Self-healing circuit breakers check the downstream service at regular intervals and reset the circuit breaker when the downstream service starts functioning properly.
https://blogs.oracle.com/developers/getting-started-with-microservices-part-three
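The circuit breaker behaviour described above can be sketched in a few lines of Python. This is a minimal illustration, not code from the source or from any Oracle library: class and parameter names are invented for the example. The breaker trips open after a threshold of consecutive failures, rejects calls immediately while open (protecting the downstream service), and "half-opens" after a timeout to let a probe call through.

```python
import time


class CircuitBreaker:
    """Minimal circuit breaker: trips open after `failure_threshold`
    consecutive failures, fails fast while open, and half-opens after
    `reset_timeout` seconds to probe whether the downstream recovered."""

    def __init__(self, failure_threshold=3, reset_timeout=30.0):
        self.failure_threshold = failure_threshold
        self.reset_timeout = reset_timeout
        self.failures = 0
        self.opened_at = None  # None means the circuit is closed

    def call(self, func, *args, **kwargs):
        if self.opened_at is not None:
            if time.monotonic() - self.opened_at < self.reset_timeout:
                # Open: reject immediately instead of hammering the
                # failing downstream service with doomed calls.
                raise RuntimeError("circuit open: call rejected")
            # Timeout elapsed: half-open, allow this one probe call through.
        try:
            result = func(*args, **kwargs)
        except Exception:
            self.failures += 1
            if self.failures >= self.failure_threshold:
                self.opened_at = time.monotonic()  # trip the breaker
            raise
        # A successful call closes the circuit again.
        self.failures = 0
        self.opened_at = None
        return result
```

After the threshold is reached, callers get an immediate `RuntimeError` rather than waiting on network timeouts — this fast-fail behaviour is what stops one slow or dead service from dragging its callers (and their callers) down with it.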