NETWORK COMMUNICATIONS SOFTWARE

Communications software provides many functions in a network. These functions include error checking, message formatting, communications logs (listings of all jobs and communications in a specified period of time), data security and privacy, and translation capabilities. These functions are performed by various parts of network communications software, which includes network operating systems, network management software, and protocols.

Network Operating Systems

A network operating system (NOS) is systems software that controls the hardware devices, software, and communications media and channels across a network. The NOS enables various devices to communicate with each other. NetWare by Novell and Windows NT from Microsoft are popular network operating systems for LANs.

Network Management Software

Network management software has many functions in operating a network. These functions reduce time spent on routine tasks, such as remote, electronic installation of new software on many devices across a network. They also provide faster response to network problems, greater control over the network, and remote diagnosing of problems in devices connected to the network. In short, network management software performs functions that decrease the human resources needed to manage the network.

Protocols

Computing devices that are connected to the network (often referred to as “nodes” of the network) access and share the network to transmit and receive data. These components work together by adhering to a common set of rules that enable them to communicate with each other. This set of rules and procedures governing transmission across a network is a protocol.

The principal functions of protocols in a network are line access and collision avoidance. Line access concerns how the sending device gains access to the network to send a message. Collision avoidance refers to managing message transmission so that two messages do not collide with each other on the network. Other functions of protocols are to identify each device in the communication path, to secure the attention of the other device, to verify correct receipt of the transmitted message, to detect that a message must be retransmitted because it could not be correctly interpreted, and to perform recovery when errors occur.
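To make the receipt-verification and retransmission functions concrete, here is a minimal stop-and-wait sketch in Python. The lossy link, the toy checksum, and the five-try limit are illustrative assumptions, not features of any particular protocol named here:

    import random

    def checksum(data):
        # Toy checksum: sum of bytes modulo 256 (real protocols use CRCs).
        return sum(data) % 256

    def unreliable_send(frame):
        # Simulate a link that drops or corrupts some frames.
        roll = random.random()
        if roll < 0.2:
            return None                                  # frame lost
        if roll < 0.35:
            return dict(frame, payload=b"garbled" + frame["payload"])
        return frame

    def stop_and_wait(payload, max_tries=5):
        # Retransmit until the receiver confirms an uncorrupted frame.
        frame = {"payload": payload, "check": checksum(payload)}
        for attempt in range(1, max_tries + 1):
            received = unreliable_send(frame)
            if received and checksum(received["payload"]) == received["check"]:
                print("attempt", attempt, "- ACK, correct receipt verified")
                return True
            print("attempt", attempt, "- lost or corrupted, retransmitting")
        return False

    stop_and_wait(b"hello, network")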

Ethernet. The most common protocol is Ethernet 10BaseT. Over three-fourths of all networks use the Ethernet protocol. The 10BaseT means that the network has a speed of 10 Mbps. Fast Ethernet is 100BaseT, meaning that the network has a speed of 100 Mbps. The most common protocol in large corporations is Gigabit Ethernet; that is, the network provides data transmission speeds of one billion bits per second (roughly 650 times faster than a 1.544-Mbps T1 line). However, ten-gigabit Ethernet (ten billion bits per second) is becoming the standard.
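The speed comparison is simple arithmetic. A quick sketch, assuming the standard 1.544-Mbps T1 rate and a hypothetical 100-MB file:

    # Link rates in bits per second; the 100-MB file is an assumption.
    rates = {
        "T1 line":          1.544e6,
        "Ethernet 10BaseT": 10e6,
        "Fast Ethernet":    100e6,
        "Gigabit Ethernet": 1e9,
        "10-Gigabit":       10e9,
    }
    file_bits = 100 * 8e6                    # 100 MB expressed in bits
    for name, bps in rates.items():
        print("%-17s %8.2f s  (%4.0fx T1)"
              % (name, file_bits / bps, bps / rates["T1 line"]))
    # Gigabit Ethernet comes out at ~648x a T1 line - the "roughly 650x" above.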

TCP/IP. The Transmission Control Protocol/Internet Protocol (TCP/IP) is a suite of protocols that can send large files of information across sometimes-unreliable networks with assurance that the data will arrive in uncorrupted form. TCP/IP allows efficient and reasonably error-free transmission between different systems and is the protocol of the Internet. As we will see in Chapter 7, TCP/IP is becoming very popular with business organizations due to its reliability and the ease with which it can support intranets and related functions.

Communication between protocols. Network devices from different vendors must communicate with each other by following the same protocols. Unfortunately, commercially available data communication devices follow a number of different protocols, causing substantial problems with data communications networks.

Attempts at standardizing data communications have been somewhat successful, but standardization in the United States has lagged behind that in other countries, where the communications industry is more closely regulated. Various organizations, including the Electronic Industries Association (EIA), the Consultative Committee for International Telegraph and Telephone (CCITT), and the International Organization for Standardization (ISO), have developed electronic interfacing protocols that are widely used within the industry.

Typically, the protocols required to achieve communication on behalf of an application are actually multiple protocols existing at different levels or layers. Each layer defines a set of functions that are provided as services to the layers above, and each layer relies on services provided by the layers below. At each layer, one or more protocols define precisely how software on different systems interacts to accomplish the functions for that layer. This layering notion has been formalized in several architectures, the most widely known being the ISO Open Systems Interconnection (OSI) reference model, which has seven layers. See the book's Web site for a table that details the seven layers of the OSI model.
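The layering idea can be sketched in a few lines of Python: each layer wraps the data handed down from above with its own header, and the receiver peels the headers off in reverse order. The layer names follow the OSI model; the header contents are invented for illustration:

    LAYERS = ["application", "transport", "network", "data-link"]

    def encapsulate(message):
        # Each layer adds its own header in front of the payload it is given.
        for layer in LAYERS:
            message = "[%s-hdr]" % layer + message
        return message

    def decapsulate(frame):
        # The receiving stack strips headers in the reverse order.
        for layer in reversed(LAYERS):
            header = "[%s-hdr]" % layer
            assert frame.startswith(header), "malformed frame at " + layer
            frame = frame[len(header):]
        return frame

    wire = encapsulate("GET /index.html")
    print(wire)               # [data-link-hdr][network-hdr]...GET /index.html
    print(decapsulate(wire))  # GET /index.html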

 


Lecture №8.

Subject: CYBER SECURITY

 

1. WHAT IS CYBER SECURITY?

2. WHY IS CYBER SECURITY IMPORTANT?

Network outages, data compromised by hackers, computer viruses and other incidents affect our lives in ways that range from inconvenient to life-threatening. As the number of mobile users, digital applications and data networks increase, so do the opportunities for exploitation.

WHAT IS CYBER SECURITY?

Cyber security, also referred to as information technology security, focuses on protecting computers, networks, programs and data from unintended or unauthorized access, change or destruction.

WHY IS CYBER SECURITY IMPORTANT?

Governments, military, corporations, financial institutions, hospitals and other businesses collect, process and store a great deal of confidential information on computers and transmit that data across networks to other computers. With the growing volume and sophistication of cyber attacks, ongoing attention is required to protect sensitive business and personal information, as well as safeguard national security.

During a Senate hearing in March 2013, the nation's top intelligence officials warned that cyber attacks and digital spying are the top threat to national security, eclipsing terrorism.

What are the possible relations between these two fields of security?

- Cyber security is a subset of information security.

- Information security is a subset of cyber security.

- They are slightly different, with a common intersection.

Let's start with the definitions and then discuss.

Information Security (INFOSEC)

ISO/IEC 27000:2009 definition 2.33

Information security - preservation of confidentiality, integrity and availability of information. Note 1 to entry: In addition, other properties, such as authenticity, accountability, non-repudiation and reliability can also be involved.

CNSS definition

The protection of information and information systems from unauthorized access, use, disclosure, disruption, modification, or destruction in order to provide confidentiality, integrity, and availability.

Cyber security (CYBERSEC)

Definition of cybersecurity, referring to ITU-T X.1205

Cybersecurity is the collection of tools, policies, security concepts, security safeguards, guidelines, risk management approaches, actions, training, best practices, assurance and technologies that can be used to protect the cyber environment and organization and user’s assets.

Organization and user’s assets include connected computing devices, personnel, infrastructure, applications, services, telecommunications systems, and the totality of transmitted and/or stored information in the cyber environment.

Cybersecurity strives to ensure the attainment and maintenance of the security properties of the organization and user’s assets against relevant security risks in the cyber environment. The general security objectives comprise the following:

• Availability

• Integrity, which may include authenticity and non-repudiation

• Confidentiality

Simple cyber security definition

When preparing the cybersecurity law in Slovakia, we proposed the following definition: “Set of measures, activities, tools and things ensuring protection of cyberspace against cyber threats and cyberspace vulnerabilities.”

This means that cyber security deals only with threats coming via cyberspace, not threats to cyberspace, such as physical disasters striking a datacentre, non-availability of electric power, direct sabotage, theft of a tablet or smartphone …

Cyberspace

We also need some definition of the cyber environment, or cyberspace. Have a look at the NATO Cyber Definitions collection, which gathers a bulk of definitions from member states and international organizations.

A simple and useful one is the International Organization for Standardization definition: “The complex environment resulting from the interaction of people, software and services on the Internet by means of technology devices and networks connected to it, which does not exist in any physical form.”

Differences: INFOSEC vs. CYBERSEC

• INFOSEC also deals with information in paper form; CYBERSEC does not.

• INFOSEC typically does not deal with:

o Cyber-warfare.

o Information warfare.

o Negative social impacts of the interaction of people, software and services on the Internet, such as:

- Sexual abuse of children over the Internet.

- On-line radicalisation.

- Cyber stalking.

o Critical infrastructure protection (control systems).

o The part of IoT security where no processors are used (simple controllers, passive RFID and the like).

• CYBERSEC does not deal with preserving the confidentiality, integrity and availability of information by means of physical, administrative or personnel security.

But what could be a common term that includes all of CYBERSEC and the entire non-paper part of INFOSEC? It could be “cyberspace protection”: protecting cyberspace against all threats and vulnerabilities, whether from the virtual or the real, physical world.

Imagine a Venn diagram in which Information Security and Cyber Security have a large overlap. Cyber Security concerns itself with security in the “cyber” realm or dimension and will include, for example, the security of your company's personnel on social media websites, the propensity of certain attackers to attack your assets or brand, and the exposure of your SCADA infrastructure (which may include, for example, your data center's HVAC systems) to attacks. Information Security, by contrast, mostly concerns itself with your digital assets and their confidentiality, integrity and availability.

At the operational level, Information Security will usually start out by asking "what are my valuable digital assets" and look to holistically protect them. Cyber Security will usually start out with "who wants to harm what" and look to defend against them. While more often than not the conclusions of both will converge, the different points of view will often lead to differing prioritization of resources in dealing with the issues.

Most Information Security professionals will claim to have been doing Cyber security the whole time, and many will be correct, if they've been assessing threats and threat actors as part of their methodologies. But as the Cyber realm grows in both sheer size and (more importantly) in our social and economic dependence on it, a holistic view that includes more than just assessing our own assets is necessary for true security.

Imagine if you will an attack on your CEO through their personal Facebook page, so as to extort money or favors from them directly related to their work. Is this an Information Security issue? Are your digital assets at risk? Is this a Cyber issue (hint: yes)? Should the Information Security officer be involved or should this be just a law-enforcement issue? These are issues faced by companies today as the perimeters between work and personal in Cyberspace disappear.

CYBER SECURITY GLOSSARY OF TERMS

Learn cyber speak by familiarizing yourself with cyber security terminology.

Access − The ability and means to communicate with or otherwise interact with a system, to use system resources to handle information, to gain knowledge of the information the system contains or to control system components and functions.

Active Attack − An actual assault perpetrated by an intentional threat source that attempts to alter a system, its resources, its data or its operations.

Blacklist − A list of entities that are blocked or denied privileges or access.

Bot − A computer connected to the Internet that has been surreptitiously (secretly) compromised with malicious logic to perform activities under the command and control of a remote administrator.

Cloud Computing − A model for enabling on-demand network access to a shared pool of configurable computing capabilities or resources (e.g., networks, servers, storage, applications and services) that can be rapidly provisioned and released with minimal management effort or service provider interaction.

Critical Infrastructure − The systems and assets, whether physical or virtual, so vital to society that the incapacity or destruction of such may have a debilitating impact on the security, economy, public health or safety, environment or any combination of these matters.

Cryptography − The use of mathematical techniques to provide security services, such as confidentiality, data integrity, entity authentication and data origin authentication.

Cyberspace − The interdependent network of information technology infrastructures that includes the Internet, telecommunications networks, computer systems and embedded processors and controllers.

Data Breach − The unauthorized movement or disclosure of sensitive information to a party, usually outside the organization, that is not authorized to have or see the information.

Digital Forensics − The processes and specialized techniques for gathering, retaining and analyzing system-related data (digital evidence) for investigative purposes.

Enterprise Risk Management − A comprehensive approach to risk management that engages people, processes and systems across an organization to improve the quality of decision making for managing risks that may hinder an organization's ability to achieve its objectives.

Information Assurance − The measures that protect and defend information and information systems by ensuring their availability, integrity and confidentiality.

Intrusion Detection − The process and methods for analyzing information from networks and information systems to determine if a security breach or security violation has occurred.

Key − The numerical value used to control cryptographic operations, such as decryption, encryption, signature generation or signature verification.

Malware − Software that compromises the operation of a system by performing an unauthorized function or process.

Passive Attack − An actual assault perpetrated by an intentional threat source that attempts to learn or make use of information from a system but does not attempt to alter the system, its resources, its data or its operations.

Penetration Testing − An evaluation methodology whereby assessors search for vulnerabilities and attempt to circumvent the security features of a network and/or information system.

Phishing − A digital form of social engineering to deceive individuals into providing sensitive information.

Rootkit − A set of software tools with administrator-level access privileges installed on an information system and designed to hide the presence of the tools, maintain the access privileges and conceal the activities conducted by the tools.

Software Assurance − The level of confidence that software is free from vulnerabilities, either intentionally designed into the software or accidentally inserted at any time during its lifecycle, and that the software functions in the intended manner.

Virus − A computer program that can replicate itself, infect a computer without the permission or knowledge of the user and then spread or propagate to another computer.

Whitelist − A list of entities that are considered trustworthy and are granted access or privileges.

 

Lecture №9

Subject: Internet technologies.

Content: Basic concepts of the Internet. The Uniform Resource Identifier (URI), its purpose and parts. The DNS service. Web technologies: HTTP, DHTML, CSS, and JavaScript. E-mail. Message format. The SMTP, POP3, and IMAP protocols.

 

The Internet is the global system of interconnected computer networks that use the Internet protocol suite (TCP/IP) to link billions of devices worldwide. It is a network of networks that consists of millions of private, public, academic, business, and government networks of local to global scope, linked by a broad array of electronic, wireless, and optical networking technologies. The Internet carries an extensive range of information resources and services, such as the inter-linked hypertext documents and applications of the World Wide Web (WWW), electronic mail, voice over IP telephony, and peer-to-peer networks for file sharing.

The WWW is a collection of Internet sites that can be accessed by using a hypertext interface. Hypertext documents on the web contain links to other documents located anywhere on the web. By clicking on a link, you are immediately taken to another file or site to access relevant materials. The interesting thing about Hypertext links is that the links might take you to related material on another computer located anywhere in the world, rather than just to a file on your local hard drive.

Basic WWW Concepts:

1) BROWSER - A WWW browser is software on your computer that allows you to access the World Wide Web. Examples include Netscape Navigator and Microsoft Internet Explorer. Please know that a browser can’t work its magic unless you are somehow connected to the Internet. At home, that is normally accomplished by using a modem that is attached to your computer and your phone line and allows you to connect to, or dial-up, an Internet Service Provider (ISP). At work, it may be accomplished by connecting your workplace’s local area network to the Internet by using a router and a high speed data line.

2) HYPERTEXT AND HYPERMEDIA - Hypertext is text that contains electronic links to other text. In other words, if you click on hypertext it will take you to other related material. In addition, most WWW documents contain more than just text. They may include pictures, sounds, animations, and movies. Documents with links that contain more than just text are called hypermedia.

3) HTML (HYPERTEXT MARKUP LANGUAGE) - HTML is a set of commands used to create World Wide Web documents. The commands allow the document creator to define the parts of the document. For example, you may have text marked as headings, paragraphs, bulleted text, footers, etc. There are also commands that let you import images, sounds, animations, and movies as well as commands that let you specify links to other documents.

4) URL (UNIFORM RESOURCE LOCATOR) - Links between documents are achieved by using an addressing scheme. That is, in order to link to another document or item (sound, picture, movie), it must have an address. That address is called its URL. The URL identifies the host computer name, directory path, and file name of the item. It also identifies the protocol used to locate the item, such as HTTP, Gopher, FTP, telnet or news (see the URL-parsing sketch after this list).

5) HTTP (HYPERTEXT TRANSFER PROTOCOL) - HTTP is the protocol used to transfer hypertext or hypermedia documents.

6) HOME PAGE - A home page is usually the starting point for locating information at a WWW site.

7) CLIENTS AND SERVERS - If a computer has a web browser installed, it is known as a client. A host computer that is capable of providing information to others is called a server. A server requires special software in order to provide web documents to others.
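As an illustration of item 4, Python's standard urllib.parse module splits a URL into exactly the parts described there; the URL itself is invented:

    from urllib.parse import urlparse

    url = "http://www.example.edu/courses/ict/lecture9.html"
    parts = urlparse(url)
    print(parts.scheme)   # 'http'            - the protocol
    print(parts.netloc)   # 'www.example.edu' - the host computer name
    print(parts.path)     # '/courses/ict/lecture9.html' - directory path + file name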

IP Addresses.

In order to identify all the computers and other devices (printers and other networked peripherals) on the Internet, each connected machine has a unique number, called an "IP address". IP stands for "Internet Protocol," the common language used by machines on the Internet to share information.

An IP address is written as a set of 4 numbers, separated by periods, as in: 203.183.184.10

This representation is sometimes referred to as dotted-octet representation of an IP address.

One single network, for example, might typically have all the IP addresses starting with 203.183.184 (203.183.184.0 through 203.183.184.255). This has been a customary way of distributing IP numbers - in chunks of 256 addresses. This is referred to as a "Class C" network. If numbers are distributed only as Class C networks, it means there can be at most 256 to the 3rd power different networks - just under 17 million - in the whole world.
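In modern notation a Class C chunk is a /24 network. A short check with Python's ipaddress module confirms the numbers quoted above:

    import ipaddress

    net = ipaddress.ip_network("203.183.184.0/24")   # a "Class C" chunk
    print(net.num_addresses)                         # 256
    print(net[0], "-", net[255])   # 203.183.184.0 - 203.183.184.255
    print(256 ** 3)                # 16777216 - the "just under 17 million"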

Basically you should understand that you need a fixed, predefined IP address assigned to each machine that acts as a server to the outside world. You can get an IP address assignment from your network administrator, who receives them in turn from your network's Internet provider.

Domains & Sub-domains.

Each machine having its own unique IP address is great for machines communicating with each other, but quite difficult for humans to remember.

For example, the mail server at Web Crossing Harbor has an IP address 210.226.166.200. You could send email to doug@210.226.166.200, but it isn't very convenient for a lot of reasons. For one thing, it is difficult to remember. For another, we might need to move the mail server to a different machine (with a different IP address) someday. Then we would have to tell everybody our new IP address in order to receive mail.

To solve this problem a system of giving easy-to-remember names to IP addresses was created. This system is called the domain system. There are several top-level domains, and all other names fall under that in a hierarchy of sub-domains.

There are two basic kinds of top-level domains - those based on type of activity and those based on geographical location:

 

 

Some Activity Based Domains
.com - Perhaps the most well-known top-level domain. Originally it was designated for use by companies and commercial activities. Now it can be used by anybody for any purpose.
.org - Originally designated for use by nonprofit organizations and individuals; now it can be used for any purpose.
.net - Originally designated for use by network organizations (such as Internet providers). Now it can be used for any purpose.
.gov - For governmental organizations in the United States.
.mil - For military organizations in the United States.
.edu - For four-year degree-granting colleges and universities only.

 

Servers

A server is just a host that serves something. Some examples are:

- web servers - computers that serve web pages. People connect to web servers using browsers, such as Netscape Navigator or Internet Explorer.

- FTP servers - People connect to them for file transfer, using a browser or a specialized FTP program, such as Fetch (on a Mac) or FTP Explorer (on Windows).

- mail servers - People connect to them to send and receive mail, using such programs as Eudora, Netscape Mail, Claris Mail and Microsoft Outlook Express.

- Web Crossing - a server that lets users create and use online communities, including forums and chat and other services.

DNS.

The Domain Name System (DNS) is a hierarchical decentralized naming system for computers, services, or any resource connected to the Internet or a private network. It associates various information with domain names assigned to each of the participating entities. Most prominently, it translates more readily memorized domain names to the numerical IP addresses needed for the purpose of locating and identifying computer services and devices with the underlying network protocols. By providing a worldwide, distributed directory service, the Domain Name System is an essential component of the functionality of the Internet.

The Domain Name System delegates the responsibility of assigning domain names and mapping those names to Internet resources by designating authoritative name servers for each domain. Network administrators may delegate authority over sub-domains of their allocated name space to other name servers. This mechanism provides distributed and fault tolerant service and was designed to avoid a single large central database.

The Domain Name System also specifies the technical functionality of the database service which is at its core. It defines the DNS protocol, a detailed specification of the data structures and data communication exchanges used in the DNS, as part of the Internet Protocol Suite. Historically, other directory services preceding DNS were not scalable to large or global directories as they were originally based on text files, prominently the HOSTS.TXT resolver. The Domain Name System has been in use since the 1980s.

The Internet maintains two principal namespaces, the domain name hierarchy and the Internet Protocol (IP) address spaces. The Domain Name System maintains the domain name hierarchy and provides translation services between it and the address spaces. Internet name servers and a communication protocol implement the Domain Name System. A DNS name server is a server that stores the DNS records for a domain; a DNS name server responds with answers to queries against its database.

The most common types of records stored in the DNS database are for Start of Authority (SOA), IP addresses (A and AAAA), SMTP mail exchangers (MX), name servers (NS), pointers for reverse DNS lookups (PTR), and domain name aliases (CNAME). Although not intended to be a general purpose database, DNS can store records for other types of data for either automatic lookups, such as DNSSEC records, or for human queries such as responsible person (RP) records. As a general purpose database, the DNS has also been used in combating unsolicited email (spam) by storing a real-time blackhole list. The DNS database is traditionally stored in a structured zone file.
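A forward (A/AAAA) and a reverse (PTR) lookup can be tried directly from Python's standard library; the names below are examples, and the output depends on your resolver:

    import socket

    # Forward lookup: resolve a host name to its A/AAAA records.
    for info in socket.getaddrinfo("www.iana.org", 80, proto=socket.IPPROTO_TCP):
        family, socktype, proto, canonname, sockaddr = info
        print(sockaddr[0])

    # Reverse lookup (PTR record): map an address back to a host name.
    print(socket.gethostbyaddr("8.8.8.8")[0])        # e.g. 'dns.google'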

HTTP.

The Hypertext Transfer Protocol (HTTP) is an application protocol for distributed, collaborative, hypermedia information systems. HTTP is the foundation of data communication for the World Wide Web.

Hypertext is structured text that uses logical links (hyperlinks) between nodes containing text. HTTP is the protocol to exchange or transfer hypertext.

Development of HTTP was initiated by Tim Berners-Lee at CERN in 1989. Standards development of HTTP was coordinated by the Internet Engineering Task Force (IETF) and the World Wide Web Consortium (W3C), culminating in the publication of a series of Requests for Comments (RFCs). The first definition of HTTP/1.1, the version of HTTP in common use, occurred in RFC 2068 in 1997, although this was obsoleted by RFC 2616 in 1999.

A later version, the successor HTTP/2, was standardized in 2015, and is now supported by major web servers.
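A minimal HTTP/1.1 exchange can be sketched with Python's http.client, here against example.org, a domain reserved for documentation:

    import http.client

    conn = http.client.HTTPConnection("example.org", 80, timeout=10)
    conn.request("GET", "/")                      # Host header is added for us
    resp = conn.getresponse()
    print(resp.status, resp.reason)               # e.g. 200 OK
    print(resp.getheader("Content-Type"))         # e.g. text/html; charset=UTF-8
    print(len(resp.read()), "bytes of hypertext received")
    conn.close()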

JavaScript

JavaScript is a high-level, dynamic, untyped, and interpreted programming language. It has been standardized in the ECMAScript language specification. Alongside HTML and CSS, it is one of the three core technologies of World Wide Web content production; the majority of websites employ it and it is supported by all modern Web browsers without plug-ins. JavaScript is prototype-based with first-class functions, making it a multi-paradigm language, supporting object-oriented, imperative, and functional programming styles. It has an API for working with text, arrays, dates and regular expressions, but does not include any I/O, such as networking, storage, or graphics facilities, relying for these upon the host environment in which it is embedded.

Although there are strong outward similarities between JavaScript and Java, including language name, syntax, and respective standard libraries, the two are distinct languages and differ greatly in their design.

JavaScript is also used in environments that are not Web-based, such as PDF documents, site-specific browsers, and desktop widgets. Newer and faster JavaScript virtual machines (VMs) and platforms built upon them have also increased the popularity of JavaScript for server-side Web applications. On the client side, JavaScript has been traditionally implemented as an interpreted language, but more recent browsers perform just-in-time compilation. It is also used in game development, the creation of desktop and mobile applications, and server-side network programming.

E-mail.

Electronic mail is a method of exchanging digital messages between computer users. Email first entered substantial use in the 1960s and by the 1970s had taken the form now recognised as email. Email operates across computer networks, which in the 2010s means primarily the Internet. Some early email systems required the author and the recipient to both be online at the same time, in common with instant messaging. Today's email systems are based on a store-and-forward model: email servers accept, forward, deliver, and store messages. Neither the users nor their computers are required to be online simultaneously; they need to connect only briefly, typically to a mail server, for as long as it takes to send or receive messages.

POP3.

In computing, the Post Office Protocol (POP) is an application-layer Internet standard protocol used by local e-mail clients to retrieve e-mail from a remote server over a TCP/IP connection. POP has been developed through several versions, with version 3 (POP3) being the last standard in common use before largely being made obsolete by the more advanced IMAP. In POP3, e-mails are downloaded from the server's inbox to your computer. E-mails are available when you are not connected.
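A minimal POP3 session, sketched with Python's poplib; the server name and credentials are placeholders:

    import poplib

    pop = poplib.POP3_SSL("pop.example.com", 995)   # plain POP3 uses port 110
    pop.user("alice@example.com")
    pop.pass_("app-password")
    count, size = pop.stat()
    print(count, "messages,", size, "bytes waiting in the inbox")
    if count:
        # retr downloads message 1 to this computer - the POP3 model:
        # once fetched (and optionally deleted), the mail lives locally.
        response, lines, octets = pop.retr(1)
        print(b"\r\n".join(lines)[:200])
    pop.quit()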

SMTP

Simple Mail Transfer Protocol (SMTP) is an Internet standard for electronic mail (email) transmission. First defined by RFC 821 in 1982, it was last updated in 2008 with the Extended SMTP additions by RFC 5321—which is the protocol in widespread use today.

SMTP by default uses TCP port 25. The protocol for mail submission is the same, but uses port 587. SMTP connections secured by SSL, known as SMTPS, default to port 465 (nonstandard, but sometimes used for legacy reasons).

Although electronic mail servers and other mail transfer agents use SMTP to send and receive mail messages, user-level client mail applications typically use SMTP only for sending messages to a mail server for relaying. For retrieving messages, client applications usually use either POP3 or IMAP.

Although proprietary systems (such as Microsoft Exchange and IBM Notes) and webmail systems (such as Outlook.com, Gmail and Yahoo! Mail) use their own non-standard protocols to access mail box accounts on their own mail servers, all use SMTP when sending or receiving email from outside their own systems.
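A minimal message-submission sketch with Python's smtplib, assuming a placeholder server that accepts STARTTLS on port 587 and placeholder credentials:

    import smtplib
    from email.message import EmailMessage

    msg = EmailMessage()
    msg["From"] = "alice@example.com"
    msg["To"] = "bob@example.org"
    msg["Subject"] = "SMTP test"
    msg.set_content("Handed to the mail server for relaying, per SMTP.")

    with smtplib.SMTP("mail.example.com", 587) as smtp:   # submission port
        smtp.starttls()                    # upgrade the session to TLS
        smtp.login("alice@example.com", "app-password")
        smtp.send_message(msg)             # envelope taken from the headers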

IMAP.

In computing, the Internet Message Access Protocol (IMAP) is an Internet standard protocol used by e-mail clients to retrieve e-mail messages from a mail server over a TCP/IP connection. IMAP is defined by RFC 3501.

IMAP was designed with the goal of permitting complete management of an email box by multiple email clients, therefore clients generally leave messages on the server until the user explicitly deletes them. An IMAP server typically listens on port number 143. IMAP over SSL (IMAPS) is assigned the port number 993.

Virtually all modern e-mail clients and servers support IMAP. IMAP and the earlier POP3 (Post Office Protocol) are the two most prevalent standard protocols for email retrieval, with many webmail service providers such as Gmail, Outlook.com and Yahoo! Mail also providing support for either IMAP or POP3.
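A minimal IMAP session, sketched with Python's imaplib; again the server and credentials are placeholders:

    import imaplib

    imap = imaplib.IMAP4_SSL("imap.example.com", 993)   # IMAPS port
    imap.login("alice@example.com", "app-password")
    status, data = imap.select("INBOX")                 # open the mailbox
    print(status, data[0], b"messages on the server")
    status, data = imap.search(None, "UNSEEN")          # server-side search
    for msg_id in data[0].split()[:3]:
        status, parts = imap.fetch(msg_id, "(RFC822.HEADER)")
        print(parts[0][1].splitlines()[0])              # first header line
    imap.logout()   # messages stay on the server, unlike POP3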

 

Lecture №10

Subject: Cloud and Mobile technologies.

Content: Data centers. Trends in the development of modern infrastructure solutions. The principles of cloud computing. Virtualization technologies. Cloud Web services. Basic terms and concepts of mobile technology. Mobile services. Mobile technology standards.

 

Cloud technologies. Today, cloud technologies are taking the lead in information technology. They are not only transforming the face of information and communication technologies but are also radically changing the activities of society.

The idea was formulated in the 1960s by J. McCarthy, but only some forty years later was utility computing taken up by companies such as Amazon, Apple, Google, HP, IBM, Microsoft and Oracle. The cloud is essentially a model that enables users to gain access to computing resources at any time convenient to them. For example, since 2011 Apple has offered its users the ability to store music, video, movies, and personal information in a network cloud.

The specific feature of these technologies is that data processing takes place not on the personal computer but on special servers on the Internet. Developers in the field of computer simulation can publish their software complexes on network resources. Where the user previously had to buy a product, he now becomes a tenant of various services.

As with any new technology, this new way of working brings new risks and challenges, especially concerning the security and privacy of the information stored and processed within the cloud.

One of the risks of cloud technologies is that users, who are the owners of the information, lose control over their data when they hand it to a cloud for processing. With this, the risk of data disclosure grows, and so does the problem of trust. In fact, nobody today can guarantee to users that their data will not be inspected and analyzed by the company that renders the cloud services. Cloud services are a way to get access to information resources of any level and any power using only Internet access and a Web browser. There are three service models of the cloud - SaaS (software as a service), IaaS (infrastructure as a service), and DaaS (data as a service) - used in business, education, medicine, science, leisure, etc.

Cloud computing. Cloud computing is a type of Internet-based computing that provides shared computer processing resources and data to computers and other devices on demand. It is a model for enabling ubiquitous, on-demand access to a shared pool of configurable computing resources (e.g., computer networks, servers, storage, applications and services), which can be rapidly provisioned and released with minimal management effort. Cloud computing and storage solutions provide users and enterprises with various capabilities to store and process their data in third-party data centers that may be located far from the user, ranging in distance from across a city to across the world. Cloud computing relies on sharing of resources to achieve coherence and economies of scale, similar to a utility (like the electricity grid).

Advocates claim that cloud computing allows companies to avoid upfront infrastructure costs (e.g., purchasing servers). As well, it enables organizations to focus on their core businesses instead of spending time and money on computer infrastructure. Proponents also claim that cloud computing allows enterprises to get their applications up and running faster, with improved manageability and less maintenance, and enables Information Technology (IT) teams to more rapidly adjust resources to meet fluctuating and unpredictable business demand. Cloud providers typically use a "pay as you go" model, which can lead to unexpectedly high charges if administrators do not adapt to the cloud pricing model.
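As a back-of-envelope illustration of how "pay as you go" charges scale with usage, here is a sketch in which every price is an invented placeholder, not any real provider's rate:

    HOURS_PER_MONTH = 730
    VM_PER_HOUR = 0.10        # assumed price of one server instance, $/hour
    STORAGE_PER_GB = 0.02     # assumed price of storage, $/GB-month

    def monthly_cost(instances, storage_gb):
        return (instances * HOURS_PER_MONTH * VM_PER_HOUR
                + storage_gb * STORAGE_PER_GB)

    print(monthly_cost(instances=2, storage_gb=500))    # 156.0
    print(monthly_cost(instances=20, storage_gb=500))   # 1470.0 after scaling up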

In 2009, the availability of high-capacity networks, low-cost computers and storage devices, as well as the widespread adoption of hardware virtualization, service-oriented architecture, and autonomic and utility computing, led to a growth in cloud computing. Companies can scale up as computing needs increase and then scale down again as demands decrease. In 2013, it was reported that cloud computing had become a highly demanded service or utility due to its high computing power, cheap cost of services, high performance, scalability, accessibility and availability. Some cloud vendors are experiencing growth rates of 50% per year, but cloud computing is still in its infancy and has pitfalls that need to be addressed to make its services more reliable and user-friendly.

Mobile technology. Mobile technology is a collective term used to describe the various types of cellular communication technology. Mobile CDMA technology has evolved quite rapidly over the past few years. Since the beginning of this millennium, a standard mobile device has gone from being no more than a simple two-way pager to being a cellular phone, a GPS navigation system, an embedded web browser and instant-messaging client, and a hand-held video gaming system. Many experts argue that the future of computer technology rests in mobile/wireless computing.

Standards of mobile technology:

1) GSM (Global System for Mobile Communications, originally Groupe Spécial Mobile) is a standard developed by the European Telecommunications Standards Institute (ETSI) to describe the protocols for second-generation (2G) digital cellular networks used by mobile phones, first deployed in Finland in July 1991. As of 2014 it had become the default global standard for mobile communications, with over 90% market share, operating in over 219 countries and territories.

2G networks developed as a replacement for first generation (1G) analog cellular networks, and the GSM standard originally described a digital, circuit-switched network optimized for full duplex voice telephony. This expanded over time to include data communications, first by circuit-switched transport, then by packet data transport via GPRS (General Packet Radio Services) and EDGE (Enhanced Data rates for GSM Evolution or EGPRS).

Subsequently, the 3GPP developed third-generation (3G) UMTS standards followed by fourth-generation (4G) LTE Advanced standards, which do not form part of the ETSI GSM standard.

2) General Packet Radio Service (GPRS) is a packet oriented mobile data service on the 2G and 3G cellular communication system's global system for mobile communications (GSM). GPRS was originally standardized by European Telecommunications Standards Institute (ETSI) in response to the earlier CDPD and i-mode packet-switched cellular technologies. It is now maintained by the 3rd Generation Partnership Project (3GPP).

GPRS usage is typically charged based on volume of data transferred, contrasting with circuit switched data, which is usually billed per minute of connection time. Usage above the bundle cap is charged per megabyte, speed limited, or disallowed.

GPRS is a best-effort service, implying variable throughput and latency that depend on the number of other users sharing the service concurrently, as opposed to circuit switching, where a certain quality of service (QoS) is guaranteed during the connection. In 2G systems, GPRS provides data rates of 56–114 kbit/second. 2G cellular technology combined with GPRS is sometimes described as 2.5G, that is, a technology between the second (2G) and third (3G) generations of mobile telephony. It provides moderate-speed data transfer, by using unused time division multiple access (TDMA) channels in, for example, the GSM system. GPRS is integrated into GSM Release 97 and newer releases.

3) Enhanced Data rates for GSM Evolution (EDGE) (also known as Enhanced GPRS (EGPRS), or IMT Single Carrier (IMT-SC), or Enhanced Data rates for Global Evolution) is a digital mobile phone technology that allows improved data transmission rates as a backward-compatible extension of GSM. EDGE is considered a pre-3G radio technology and is part of ITU's 3G definition. EDGE was deployed on GSM networks beginning in 2003 – initially by Cingular (now AT&T) in the United States.

EDGE is standardized also by 3GPP as part of the GSM family. A variant, so called Compact-EDGE, was developed for use in a portion of Digital AMPS network spectrum.

Through the introduction of sophisticated methods of coding and transmitting data, EDGE delivers higher bit-rates per radio channel, resulting in a threefold increase in capacity and performance compared with an ordinary GSM/GPRS connection.

EDGE can be used for any packet switched application, such as an Internet connection.

Evolved EDGE continues in Release 7 of the 3GPP standard, providing reduced latency and more than doubled performance, e.g. to complement High-Speed Packet Access (HSPA). Peak bit-rates of up to 1 Mbit/s and typical bit-rates of 400 kbit/s can be expected.

 

Lecture №11.

Subject: Multimedia Technologies







