At the very beginning, one computer could do only one particular task at a time. But running tasks in parallel was not enough for building a truly distributed system, because that requires a mechanism for the different computers (or the programs running on them) to communicate with each other. In the enterprise, these communication "languages" were the messaging protocols and message formats that different systems used. Once the number of services or systems increased, point-to-point connections between them were no longer scalable or maintainable. This led to the concept of a centralized "Service Bus", which interconnects all the different systems via a hub-type architecture.

Web hosting provides a fixed server, or a portion of a single server, whereas cloud computing offers the benefit of many servers all working together as one. A particular website or application may need only a small portion of a single server, so a dedicated server is not required. The computers are set up to work together so that it seems as if the applications were running on one particular machine. The services vary from business software, to documents accessed via the web, to off-site storage or computing resources, and the service provided should be fast and without interruption. Employees can sync their documents from various places at the same time, follow their co-workers, and receive updates in real time.

Even organizations that trust their storage provider still have security worries, and ensuring data portability is essential in cloud computing. A hybrid approach means using both private and public clouds, each according to its purpose. Serverless architecture goes one step further: your servers are managed by a third-party cloud provider such as Amazon (Lambda), Microsoft (Azure Functions), or Google (Cloud Functions). Source: https://redmondmag.com/articles/2017/08/01/container-orchestration-with-kubernetes.aspx.
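The serverless model can be made concrete with a minimal sketch. Under a function-as-a-service platform such as AWS Lambda, the developer ships only a handler function and the provider provisions, scales, and bills the underlying servers. The handler name and event shape below follow Lambda's Python handler convention but are illustrative, not taken from any real deployment.

```python
# Minimal sketch of a function-as-a-service handler (AWS-Lambda-style).
# The developer writes only this function; the cloud provider manages
# the servers. Handler name and event fields are illustrative.
import json

def handler(event, context):
    # `event` carries the request payload; `context` carries runtime metadata.
    name = event.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }

# Local invocation for illustration (no cloud account required):
print(handler({"name": "cloud"}, None))
```

Locally the function is just ordinary Python; the platform only supplies the invocation wiring around it.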
Distributed systems (to be exact, distributed computer systems) have come a long way from where they started. At first, computers ran as independent machines, with no way to connect them. With the Windows, Unix, and Linux operating systems, it became possible to run multiple tasks on the same computer. Even though this was a good enough idea, it was not the best option when it comes to resource utilization of the host computer. The distributed file system is the evolved version of the file system, capable of handling information spread across many clusters; it is no surprise that the prominent technology in this domain came from Google, given their scale.

Cloud computing is dynamically scalable, since users consume only the amount of online computing resources they actually need. As long as an electronic device has web access, the information can be accessed. Cloud applications are collaborative and allow multiple users to share documents at the same time. The common example of SaaS is Salesforce.com, which offers a customer relationship management (CRM) system accessible via the Internet; in this model, resources are retrieved over the Internet through web applications rather than from a direct server. By contrast, the best type of traditional web hosting is dedicated hosting. Using the cloud results in at least 30% less energy consumption and carbon emissions than using on-site servers. Nevertheless, large companies are cautious about cloud computing: they generally work under strict data-security guidelines to prevent hacking and invest heavily in upgraded software and hardware. Larger companies go for a private cloud, while smaller companies may use a public cloud for hosting; with an on-premises private cloud, the organization retains physical control over the infrastructure.
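The multitasking capability mentioned above, several tasks time-shared on one machine, can be sketched with ordinary threads; the task function and its work below are purely illustrative.

```python
# Sketch of time-shared multitasking on a single machine: the OS
# interleaves several tasks instead of dedicating the computer to one.
# The task (squaring numbers) is illustrative.
import threading

results = []
lock = threading.Lock()

def task(n):
    # Each thread computes independently; the lock serializes the append.
    with lock:
        results.append(n * n)

threads = [threading.Thread(target=task, args=(i,)) for i in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(sorted(results))  # prints [0, 1, 4, 9]
```

The threads may be scheduled in any order, which is why the result is sorted before printing.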
If we need multiple tasks to be done in parallel, we need multiple computers running in parallel. A few other mechanisms, such as file sharing and database sharing, also came into the picture. With the reduction in the price of computing power and storage, organizations all over the world started using distributed systems and SOA-based enterprise IT systems. Service interfaces were properly defined through a WSDL (for SOAP) or a WADL (for REST), and service consumers used those interfaces for their client-side implementations. Due to the simplicity of the REST model, features like security (authentication and authorization), caching, throttling, and monitoring needed to be implemented on top of the standard REST API implementations. Instead of implementing these capabilities at each and every API separately, the requirement arose for a common component to apply these features on top of the APIs.

These types of requirements made the technology focus shift back towards the place where it all began. This concept had been available in the Linux operating system for some time, but it became more popular and improved a lot with the introduction of container-based application deployment. Kubernetes and Docker made life easier for application programmers. But still, with containers and orchestration frameworks, there must be a team that manages these servers.

One advantage of cloud computing is that when a company's information is stored in "the cloud", overall expenses are drastically reduced. Software is easily kept up to date for customers, since updates and maintenance are done by the cloud service providers. A major advantage of cloud computing is its flexibility. However, concerns regarding data portability can seriously threaten a smooth transition to the cloud.
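As a sketch of why such a common component helps, the snippet below wraps a REST-style handler with a throttling decorator, so rate limiting is written once rather than re-coded inside every API. The names `throttle` and `get_orders` are hypothetical, and a real gateway would apply the same idea to authentication, caching, and monitoring as well.

```python
# Cross-cutting API feature (throttling) applied as a reusable wrapper
# instead of being re-implemented in each API. Names are illustrative.
import time
from functools import wraps

def throttle(max_calls, per_seconds):
    """Reject calls beyond max_calls within a sliding time window."""
    def decorator(fn):
        calls = []  # timestamps of recent calls
        @wraps(fn)
        def wrapper(*args, **kwargs):
            now = time.monotonic()
            # Keep only timestamps still inside the window.
            calls[:] = [t for t in calls if now - t < per_seconds]
            if len(calls) >= max_calls:
                raise RuntimeError("429: rate limit exceeded")
            calls.append(now)
            return fn(*args, **kwargs)
        return wrapper
    return decorator

@throttle(max_calls=2, per_seconds=60)
def get_orders():
    # Stand-in for a REST handler returning a resource list.
    return ["order-1", "order-2"]

print(get_orders())  # first call within the limit succeeds
```

A third call inside the same 60-second window would raise the rate-limit error, which is exactly the behavior one would otherwise have to duplicate in every service.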
This leads to the idea of virtual machines, where the same computer can act as multiple computers and run them all in parallel. Running multiple operating systems, however, required additional resources that are not needed when running within a single operating system. Containers removed much of that overhead, and life became easier for the developer, since the container image he builds will run almost identically in all environments, given that all dependencies are packaged into it. Google built the container orchestration platform called "Kubernetes" (a.k.a. k8s), and it became the de facto standard for large-scale container orchestration requirements.

The Hadoop Distributed File System (HDFS) is one of the best-known implementations of a DFS, although there are many other implementations, such as Ceph and GlusterFS. Distributed computing can also be described as using a distributed system to solve a single large problem by breaking it up into tasks, each of which is computed on a separate computer of the distributed system.

As long as employees have internet access, they can work from anywhere around the globe, and this flexibility gives cloud computing the upper hand. On the other hand, users are not able to access the servers that hold their stored data. Source: https://www.akuaroworld.com/telecom-oss-new-network-architecture/soa-model/.
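The divide-and-combine idea behind that definition of distributed computing can be sketched as follows: one large problem (summing a big range) is split into independent tasks, each of which could run on a separate computer of the distributed system. Here a local thread pool stands in for the cluster, and all names are illustrative.

```python
# Divide-and-combine sketch of distributed computing: split one large
# problem into independent tasks, compute them in parallel, combine the
# results. A local pool stands in for separate machines.
from concurrent.futures import ThreadPoolExecutor

def partial_sum(bounds):
    lo, hi = bounds
    return sum(range(lo, hi))

def distributed_sum(n, workers=4):
    # Split [0, n) into `workers` roughly equal chunks (the "tasks").
    step = n // workers
    chunks = [(i * step, (i + 1) * step if i < workers - 1 else n)
              for i in range(workers)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        # Each chunk is computed independently, then combined.
        return sum(pool.map(partial_sum, chunks))

print(distributed_sum(1_000_000))  # same answer as sum(range(1_000_000))
```

In a real distributed system the chunks would be shipped to other machines (for example via a job scheduler or a map-reduce framework) rather than to local threads, but the split/compute/combine structure is the same.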