Networking Basics

I need a one-page term paper on the VPC information below. I have provided all the notes on the subject, including the components of the Amazon VPC.
As you learned this week, a virtual private cloud (VPC) enables you to launch resources into a virtual network that you've configured and gives you full control over those resources. That provides a level of security beyond a public cloud when you need to control your resources and sensitive data on a virtual infrastructure. But VPCs also have drawbacks, including increased configuration complexity.
• Question 1: What are the components of an Amazon VPC?
• Question 2: What is the default VPC and what are its advantages?


A computer network is two or more computers connected together.
IPv4 (32-bit) address
IPv6 (128-bit) address, written as 8 groups
CIDR (Classless Inter-Domain Routing) – notation giving the first address of the network plus a prefix length
OSI model:
Application 7 Means for an application to access a computer network
Presentation 6 Ensures the application layer can read the data; encryption
Session 5 Enables orderly exchange of data
Transport 4 Provides protocols to support host-to-host communication
Network 3 Routing and packet forwarding (routers)
Data Link 2 Transfers data within the same LAN (hubs and switches)
Physical 1 Transmission and reception of raw bit streams over a physical medium; signals (1s and 0s)

Amazon VPC
Enables you to provision a logically isolated section of the AWS Cloud where you can launch AWS resources in a virtual network that you define.
Gives you control over your virtual networking resources, including:
a. Selection of IP address range
b. Creation of subnets
c. Configuration of route tables and network gateways
Enables you to customize the network configuration for your VPC
Enables you to use multiple layers of security.
Can use both IPv4 and IPv6
VPCs:
Are logically isolated from other VPCs
Are dedicated to your AWS account
Belong to a single AWS Region and can span multiple Availability Zones

Subnets:
Are ranges of IP addresses that divide a VPC
Belong to a single Availability Zone
Are classified as public or private.

IP addressing
When you create a VPC, you assign it an IPv4 CIDR block (a range of private IPv4 addresses).
You cannot change the address range after you create the VPC.
The largest IPv4 CIDR block size is /16.
The smallest IPv4 CIDR block size is /28.
IPv6 is also supported (with a different block size limit).
CIDR blocks of subnets cannot overlap.
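These size limits are easy to check mechanically. Here is a minimal sketch using Python's standard `ipaddress` module; the function name and sample CIDRs are illustrative, not AWS APIs:

```python
import ipaddress

def validate_vpc_cidr(cidr: str) -> bool:
    """Check that a block falls within the /16 (largest) to /28 (smallest)
    range allowed for a VPC's IPv4 CIDR block."""
    net = ipaddress.ip_network(cidr)
    return 16 <= net.prefixlen <= 28

print(validate_vpc_cidr("10.0.0.0/16"))  # True
print(validate_vpc_cidr("10.0.0.0/8"))   # False: larger than /16
```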

AWS reserves 5 IP addresses in each subnet's CIDR block:

  1. Network address
  2. Internal communication
  3. Domain Name System (DNS) resolution
  4. Future use
  5. Network broadcast address
    Public IPv4 addresses can be:
    a. Manually assigned through an Elastic IP address
    b. Automatically assigned through the auto-assign public IP address setting at the subnet level.
    Elastic IP addresses are:
    a. Associated with an AWS account
    b. Able to be allocated and remapped anytime
    c. Subject to additional cost in some cases
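Because AWS reserves those five addresses in every subnet, usable capacity is always the subnet size minus five. A quick sketch with Python's `ipaddress` module (the helper name is illustrative):

```python
import ipaddress

def usable_addresses(cidr: str) -> int:
    """Addresses in the subnet minus the 5 AWS reserves (network address,
    internal communication/router, DNS, future use, broadcast)."""
    return ipaddress.ip_network(cidr).num_addresses - 5

print(usable_addresses("10.0.0.0/28"))  # 11 (16 total - 5 reserved)
print(usable_addresses("10.0.0.0/24"))  # 251 (256 total - 5 reserved)
```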

An elastic network interface is a virtual network interface that you can:
a. Attach to an instance
b. Detach from the instance and attach to another instance to redirect network traffic
c. Its attributes follow it when it is reattached to a new instance.
d. Each instance in your VPC has a default network interface that is assigned a private IPv4 address from the IPv4 address range of your VPC.

Route tables and routes
a. A route table contains a set of rules (or routes) that you can configure to direct network traffic from your subnet.
b. Each route specifies a destination and a target.
c. By default, every route table contains a local route for communication within the VPC
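The destination/target matching a route table performs can be sketched as a longest-prefix match. This is a toy model, and the route targets are hypothetical names rather than real AWS identifiers; it shows why the local route handles in-VPC traffic while everything else falls through to a broader route:

```python
import ipaddress

# Toy route table: destination CIDR -> target. Every VPC route table
# includes a local route for communication within the VPC.
ROUTES = {
    "10.0.0.0/16": "local",        # built-in local route
    "0.0.0.0/0": "igw-example",    # hypothetical internet gateway target
}

def lookup(ip: str) -> str:
    """Return the target of the most specific (longest-prefix) matching route."""
    addr = ipaddress.ip_address(ip)
    matches = [(ipaddress.ip_network(dest), target)
               for dest, target in ROUTES.items()
               if addr in ipaddress.ip_network(dest)]
    return max(matches, key=lambda m: m[0].prefixlen)[1]

print(lookup("10.0.1.5"))       # "local": stays inside the VPC
print(lookup("93.184.216.34"))  # "igw-example": routed out
```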

VPC sharing enables customers to share subnets with other AWS accounts in the same organization. VPC sharing enables multiple AWS accounts to create their application resources, such as Amazon EC2 instances, Amazon Relational Database Service (Amazon RDS) databases, Amazon Redshift clusters, and AWS Lambda functions, in shared, centrally managed VPCs. In this model, the account that owns the VPC shares one or more subnets with other accounts, called participants, that belong to the same organization. After a subnet is shared, participants can view, create, modify, and delete their application resources in the subnets that are shared with them.
VPC peering – A VPC peering connection enables you to privately route traffic between two VPCs. Instances in either VPC can communicate with each other as if they were within the same network. You can create a VPC peering connection between your own VPCs, with a VPC in another AWS account, or with a VPC in a different AWS Region.
VPC peering has some restrictions:
a. IP address ranges cannot overlap.
b. You can only have one peering connection between the same two VPCs.
c. Transitive peering is not supported. This means the traffic from a VPC goes to the second VPC and it stops there. It cannot go to a third VPC.
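The no-overlap restriction can be checked before you request a peering connection. A small sketch with Python's `ipaddress` module (the function name is illustrative):

```python
import ipaddress

def can_peer(cidr_a: str, cidr_b: str) -> bool:
    """Peering requires that the two VPCs' IP address ranges do not overlap."""
    return not ipaddress.ip_network(cidr_a).overlaps(ipaddress.ip_network(cidr_b))

print(can_peer("10.0.0.0/16", "10.1.0.0/16"))  # True: disjoint ranges
print(can_peer("10.0.0.0/16", "10.0.1.0/24"))  # False: one range contains the other
```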
By default, instances that you launch into an Amazon VPC can't communicate with your own (remote) network. You can enable access to your remote network from your VPC by attaching a virtual private gateway to the VPC, creating a custom route table, updating your security group rules, creating an AWS Site-to-Site VPN connection, and configuring routing to pass traffic through the connection.
Network address translation (NAT) gateway – A network address translation (or NAT) gateway enables instances in a private subnet to connect to the internet or other AWS services, but prevents the public internet from initiating a connection with those instances.
AWS Transit Gateway – We have mentioned the idea of VPC peering to connect two VPCs. Consider how you would connect hundreds of VPCs together: each VPC pair would require a dedicated VPC peering connection, and the complexity of connectivity can become a heavy burden. A transit gateway is a network transit hub that you can use to interconnect your virtual private clouds (VPCs) and on-premises networks. You can attach VPCs, AWS Direct Connect gateways, and VPN connections. The topology becomes a hub and spoke, which reduces the number of connections required and the complexity of implementing and maintaining it.

VPC security groups act at the instance level. A security group acts as a virtual firewall that controls inbound and outbound traffic to and from your instances. Security groups act at the instance level, and you can assign each instance in your VPC subnets to a different set of security groups.
a. Security groups have rules to manage instance traffic.
b. Default security groups are sealed shut to inbound traffic; you need to define rules to allow it.
c. Security groups are stateful. The outbound traffic is always allowed.
d. Security groups are the equivalent of firewalls for your EC2 instances.
The second firewall option is the network access control list (network ACL). Network ACLs work at the subnet level and control traffic in and out of the subnet.
a. A network ACL has separate inbound and outbound rules, and each rule can either allow or deny traffic.
b. Default network ACLs allow all inbound and outbound IPv4 traffic.
c. Network ACLs are stateless
Each subnet in your VPC must be associated with a network ACL. If you don't explicitly associate a subnet with a network ACL, the default network ACL is used. You can associate a network ACL with multiple subnets; however, a subnet can be associated with only one network ACL. A network ACL is stateless. It has separate inbound and outbound rules that require configuration.
The table shows a default network ACL. It is wide open, in order to keep your focus on security groups as firewall protection. You can define a custom network ACL; this requires that you define rules in numerical order and define both the inbound and outbound traffic to be allowed. Network ACLs are stateless, which means that no information about a request is maintained after the request is processed.
Security groups versus network ACLs
Attribute       | Security Groups                                                          | Network ACLs
Scope           | Instance level                                                           | Subnet level
Supported rules | Allow rules only                                                         | Allow and deny rules
State           | Stateful (return traffic is automatically allowed, regardless of rules)  | Stateless (return traffic must be explicitly allowed by rules)
Order of rules  | All rules are evaluated before the decision to allow traffic             | Rules are evaluated in numerical order before the decision to allow traffic
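The ordered, first-match evaluation of network ACL rules (versus the evaluate-all-rules behavior of security groups) can be sketched as follows. The rule numbers, ports, and CIDRs are made up for illustration:

```python
import ipaddress

# Toy inbound network ACL: (rule number, source CIDR, port, action).
# Rules are evaluated in ascending number order; the first match decides,
# and anything unmatched hits the implicit deny at the end.
RULES = [
    (100, "0.0.0.0/0", 443, "allow"),  # allow HTTPS from anywhere
    (200, "0.0.0.0/0", 22,  "deny"),   # explicitly deny SSH
]

def evaluate(src_ip: str, port: int) -> str:
    for number, cidr, rule_port, action in sorted(RULES):
        if port == rule_port and ipaddress.ip_address(src_ip) in ipaddress.ip_network(cidr):
            return action
    return "deny"  # implicit deny-all

print(evaluate("203.0.113.10", 443))  # allow
print(evaluate("203.0.113.10", 22))   # deny (explicit rule)
print(evaluate("203.0.113.10", 80))   # deny (implicit catch-all)
```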

Amazon Route 53 DNS resolution
DNS resolution is the process of translating an internet name to the corresponding IP address. DNS stands for Domain Name System, and it functions like a phone book where internet names are mapped to the IP address of the corresponding machine.
Amazon Route 53 gives you the ability to register domain names and have the service handle the names and hosts related to the domain. Route 53 is highly available, scalable, and fully compliant with IPv4 and IPv6.
Amazon Route 53 supported routing – Amazon Route 53 supports several types of routing policies, which determine how it responds to name resolution queries:
Simple routing lets you configure standard DNS records. With simple routing you typically route traffic to a single resource, for example to a web server for your website.
With weighted routing you assign weights to resource record sets to specify the frequency with which different responses are served. You might want to use this capability to do A/B testing, also known as blue/green deployment. With a blue/green deployment you send a small portion of traffic to a server where you made a software change in order to verify it’s all working. For example, you might send 99% of the traffic to system A and 1% of the traffic to system B.
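The 99/1 weighted split described above can be simulated with weighted random selection. This sketch uses Python's `random.choices` with a fixed seed so the result is reproducible; the record names are hypothetical:

```python
import random

# Resource record sets with weights: 99% to the current stack, 1% to the new one.
RECORDS = ["system-a", "system-b"]
WEIGHTS = [99, 1]

rng = random.Random(0)  # seeded for a reproducible sketch
responses = rng.choices(RECORDS, weights=WEIGHTS, k=1000)

print(responses.count("system-a"))  # roughly 990 of 1000 responses
print(responses.count("system-b"))  # roughly 10
```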
You can use latency routing when you want the response to arrive in the fastest way possible. Route 53 determines the fastest way to deliver a response. This does not always mean the shortest path will be used, especially if the shortest path is saturated and slow.
Geolocation routing lets you choose the resources that serve your traffic based on the geographic location of your users.
Geolocation routing determines where the request originated geographically and responds with the address of the closest point of access to your service. When you use geolocation routing, you can localize your content and present some or all of your website in the language of your users. You can also use geolocation routing to restrict the distribution of content to only the locations where you have distribution rights.
Failover routing (or DNS failover) can be used to help detect an outage of your website and redirect your users to alternate locations where your application is operating properly.
Failover routing requires a health check to be configured. If the health check fails the secondary address becomes the target.
You can also combine any of the other routing options into a multivalue answer response.
Use case: Multi-region deployment
Multi-Region deployment is an example use case for Amazon Route 53. With Amazon Route 53, the user is automatically directed to the Elastic Load Balancing load balancer that's closest to the user. Multi-Region deployment with Route 53 enables:
Latency-based routing to the Region
Load-balanced routing to the Availability Zone
Amazon Route 53 DNS failover
Amazon Route 53 DNS failover enables you to improve the availability of your applications that run on AWS by:
Configuring backup and failover scenarios for your applications,
Enabling highly available multi-Region architectures on AWS, and
Creating health checks to monitor the health and performance of your web applications, web servers, and other resources. Each health check that you create can monitor the health of a specified resource (such as a web server), the status of other health checks, or the status of an Amazon CloudWatch alarm.
DNS failover for multi-tiered web application
Here you see how DNS failover works in a typical architecture for a multi-tiered web application. Route 53 passes traffic to a load balancer, which then distributes the traffic to a fleet of EC2 instances.
To ensure high availability, you can create two DNS records for the canonical name (CNAME) www with a Failover routing policy. The first record is the primary route policy, which points to the load balancer for your web application. The second record is the secondary route policy, which points to your static Amazon S3 website. You can use a Route 53 health check to make sure that the primary route is available. If it is, all traffic defaults to your web application stack. Failover to the static backup site would be triggered if either the web server or the database instance went down.
Amazon Route 53 supported routing:
a. Simple routing – Use in single-server environments
b. Weighted routing – Assign weights to resource record sets to specify the frequency with which different responses are served.
c. Latency routing – Help improve your global applications
d. Geolocation routing – Route traffic based on location of your users
e. Geoproximity routing – Route traffic based on location of your resources.
f. Failover routing – Fail over to a backup site if your primary site becomes unreachable
g. Multivalue answer routing – Respond to DNS queries with up to eight healthy records selected at random

Amazon CloudFront
Amazon CloudFront is a fast content delivery network (CDN) service that securely delivers data, videos, applications, and application programming interfaces (APIs) to customers globally with low latency and high transfer speeds. It also provides a developer-friendly environment. Amazon CloudFront delivers files to users over a global network of edge locations and Regional edge caches. Amazon CloudFront is different from traditional content delivery solutions because you can take advantage of high-performance content delivery without negotiated contracts, high prices, or minimum fees. Like other AWS services, Amazon CloudFront is a self-service offering with pay-as-you-go pricing.
Amazon CloudFront infrastructure
Amazon CloudFront relies on Route 53's geolocation routing. Basically, a customer makes a request; Route 53 finds out where the customer is located in the world and responds with the IP address of the edge location closest to that customer. CloudFront then obtains the data from where it normally lives (the origin) and copies it to the edge location. Then the user experience begins. As data becomes stale, it is removed from the cache at the edge location to make room for new content. You can define the expiration of data in the cache using a time-to-live (TTL) value, which defines the amount of time for which the cached data remains valid.
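The TTL behavior described above can be modeled with a tiny cache class. This is a toy model with a controllable clock, not the CloudFront implementation; the class name, 60-second TTL, and paths are illustrative:

```python
import time

class EdgeCache:
    """Toy model of an edge cache: entries expire after a time-to-live (TTL)."""
    def __init__(self, ttl_seconds: float, clock=time.monotonic):
        self.ttl = ttl_seconds
        self.clock = clock
        self.store = {}  # key -> (value, time cached)

    def get(self, key, fetch_from_origin):
        entry = self.store.get(key)
        now = self.clock()
        if entry and now - entry[1] < self.ttl:
            return entry[0]                  # cache hit: still fresh
        value = fetch_from_origin(key)       # stale or missing: go to origin
        self.store[key] = (value, now)
        return value

# A controllable fake clock so the sketch is deterministic.
t = [0.0]
cache = EdgeCache(ttl_seconds=60, clock=lambda: t[0])
origin_calls = []
fetch = lambda k: origin_calls.append(k) or f"content-for-{k}"

cache.get("/video.mp4", fetch)   # miss: fetched from the origin
cache.get("/video.mp4", fetch)   # hit: served from the edge
t[0] = 61.0                      # TTL expired
cache.get("/video.mp4", fetch)   # stale: fetched from the origin again
print(len(origin_calls))         # 2
```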
Amazon CloudFront

  1. Fast, global, and secure CDN service
  2. Global network of edge locations and Regional edge caches
  3. Self-service model
  4. Pay-as-you-go pricing

Amazon CloudFront Infrastructure
• Edge locations
• Multiple edge locations
• Regional edge caches
Edge locations – The network of data centers that CloudFront uses to serve popular content quickly to customers.
Regional edge caches – CloudFront locations that cache content that is not popular enough to stay at an edge location. A Regional edge cache is located between the origin server and the global edge locations.
Which AWS networking service enables a company to create a virtual network within AWS?
A. AWS Config
B. Amazon Route 53
C. AWS Direct Connect
D. Amazon VPC
Implementing a Cloud Deployment
Development – develops and tests new services and apps
Production – manages live applications in use
Quality Assurance (QA) – maintains networks that test systems and services
The four main cloud deployment models are:
a. Public – available to the public, but can be owned privately.
b. Private – the network is private.
c. Hybrid – a mix of the two.
d. Community – a group based on a common interest.

Analyze Sizing, Subnetting, and Basic Routing of a Deployment

Network components
Network components include network hubs, routers, switches, network cables, wireless access points, network servers, and Network Interface Cards (NIC).

Applicable port and protocol considerations when extending to the cloud
Most networks nowadays are based on TCP/IP. Numerous protocols and applications are transmitted over these networks. However, a few protocols are commonly used for access to email, sending files, some background functions, and a web server.
These common protocols are Hypertext Transfer Protocol (HTTP), File Transfer Protocol (FTP), Hypertext Transfer Protocol Secure (HTTPS), File Transfer Protocol Secure (FTPS), Secure File Transfer Protocol (SFTP), Secure Shell (SSH), DNS, Dynamic Host Configuration Protocol (DHCP), and Simple Mail Transfer Protocol (SMTP).
On TCP/IP-based networks, quite a few applications have specific port numbers assigned to them. The port number of the destination port appears on the TCP header when an application requests access to a service on a remote server. The IP frame will then be transmitted to the remote server, which will look at the value of the destination port and send the data to the requested application.
These are the most common port numbers:
• Port 80 – HTTP
• Port 21 – FTP
• Port 22 – SSH, SCP, and SFTP
• Port 25 – SMTP
• Port 53 – DNS
• Port 443 – HTTPS
• Ports 67 and 68 – DHCP

Determine configuration for the applicable platform as it applies to the network
Though the cloud service provider is the owner of the network in its data center, many cloud deployments allow customers to define their own configurations for their virtual private clouds on the provider’s network. The user is normally given access to dashboard controls via a web browser, a command-line interface, or APIs.
Network services that can be configured include access control, firewall rules pertaining to movement of traffic within and traffic coming into or out of your cloud deployment, security groups, maintenance of routing tables, load balancing, DNS services, caching systems, and data delivery.
The purpose of a Virtual Private Network (VPN) is to provide a secure encrypted connection over an unsecured public network. VPNs enable secure access to cloud services from a remote site. Businesses also use VPN to connect with each other over a public network instead of spending money on a private dedicated circuit.
Intrusion Detection Systems (IDS) and Intrusion Prevention Systems (IPS) are solutions designed to monitor network traffic to identify any suspicious behavior in real time. These systems are programmed to recognize signatures that indicate an intrusion. It’s the responsibility of IDS/IPS vendors to ensure that the predefined rule sets, which identify unusual activity, are up-to-date. Users can configure an IDS to send email or text alerts or an alarm to their management systems whenever a network attack is detected. An IDS is only designed to monitor and report, not to neutralize the threat. An IPS, on the other hand, can prevent or defuse an attack by using methods such as configuring firewalls and routers to mitigate the incident.
A Demilitarized Zone (DMZ), or network security zone, refers to that segment of the network that hosts servers and other compute systems that users from the outside world need to gain access to via an internal network or the Internet. However, applications and web servers on the DMZ are protected by specific security measures and are not totally nor directly exposed to the Internet.
Specific firewall rules prevail in the DMZ to prevent unauthorized access to the internal network in the event that DMZ servers are infected. DMZ-specific firewall policies are also implemented to ensure that users don’t have full access to DMZ servers and can only access them for a specific application or system.
Traditional VLANs support a limited number of total VLANs: a maximum of 4094. Locally-scoped VLANs are not mobile outside their zone. Given the scale of cloud computing, cloud service providers serve thousands of customers. With VLANs, cloud providers wouldn’t be able to extend their network.
A Virtual Extensible LAN (VxLAN) has the capability to overcome these limitations. It works by encapsulation. It is designed to encapsulate an Ethernet frame in an IP packet and transport it using UDP. This has made it possible for VLANs to move across the network in a different way. This technology is also called MAC-in-IP encapsulation because the layer two frame isn’t touched.
The VxLAN Network Identifier (VNI) is capable of scaling to over 16 million segments. The VxLAN makes use of a VxLAN tunnel endpoint (VTEP), which is any endpoint that can handle VxLAN packets—including encapsulating them or removing encapsulation if it’s destined for a local host. After removing the encapsulation, VTEP switches the original frame normally.
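The scale difference comes straight from the header sizes: a VLAN ID is 12 bits while a VNI is 24 bits.

```python
# VLAN IDs are 12 bits (2^12 = 4096 values, of which 4094 are usable);
# the 24-bit VxLAN Network Identifier (VNI) allows over 16 million segments.
usable_vlans = 2**12 - 2
vni_segments = 2**24

print(usable_vlans)                  # 4094
print(vni_segments)                  # 16777216
print(vni_segments // usable_vlans)  # 4098: roughly 4,000x more segments
```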
Address space required
It’s important to have a precise network addressing plan when you undertake a cloud migration. Network addressing refers to the practice of segmenting TCP/IP networks on the basis of your existing and future needs.
A cloud service provider owns a block of IP addresses accessible to the public, which can be reached via the Internet. Your cloud provider will assign a certain number of IP addresses to your Internet-facing systems.
You have the option of using private IP address blocks, which are reserved in RFC 1918 and are not routable over the Internet for some of your cloud operations. Private addressing is useful for assigning addresses to endpoints that are not connected to the Internet. Depending on the options available from your cloud provider and your agreement with them, the provider may assign address blocks for your use, or they may let you select an address block that suits you.
Network segmentation and micro-segmentation
Normally, cloud service providers assign a large block of IP addresses to a customer. The customer has the flexibility to divide these into smaller subnetworks. Users will likely find it helpful to create as many subnetworks as they need and to assign a group of network segments or applications to each subnet. Creating several subnets gives users the benefit of being able to use security groups, firewalls, access control lists, and other network security methods to regulate traffic flowing in and out of each subnet.
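Carving a provider-assigned block into per-application subnets is straightforward with Python's `ipaddress` module; the /16 block and /24 subnet size here are just examples:

```python
import ipaddress

block = ipaddress.ip_network("10.0.0.0/16")   # example provider-assigned block
subnets = list(block.subnets(new_prefix=24))  # one /24 per segment or application

print(len(subnets))            # 256 subnets
print(subnets[0], subnets[1])  # 10.0.0.0/24 10.0.1.0/24
```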

Determine if cloud resources are consistent with the SLA and/or change management requirements
Management solutions are sometimes part of a cloud service provider’s offerings. When management solutions are provided, they are described in detail in the Service Level Agreement (SLA). Specific performance metrics and minimum availability level, as well as penalties for not meeting the defined metrics are outlined in the SLA. The SLA also details who owns the content, and who has what rights and which responsibilities. Many SLAs come with a severability clause, which includes different penalties including termination of the contract.
Cloud consumers need to understand that they are primarily responsible for the general management of their deployment. It is common for enterprises to opt for a shared management model with the cloud provider in charge of basic maintenance in the data center, and consumers assuming the responsibility of managing their network and applications.
It’s important for consumers to measure critical performance metrics before and after migration and on an ongoing basis. Performance can be measured against the benchmarks defined for several objects. Benchmark testing will help users identify potential problems and troubleshoot accordingly.

CPU and Memory Sizing
Available vs. proposed resources
Cloud providers have an extensive array of VM configurations to offer. Users can choose from a range of options including general compute, CPU, memory, database applications with high I/O requirements, and graphics. These configurations, or instances as they’re commonly termed, are usually available in prepackaged formats.
VMs are powered by physical servers. Hence, physical servers need more than adequate processing power to serve several VMs. Today, CPUs run on multicore processors and have very high processing capabilities.
When configuring a physical server, it’s essential to calculate the processing needs of all the VMs running on the server and to install enough CPUs to support all VMs being hosted. Since a server’s motherboard has several slots for multicore CPUs, it’s possible for a single-server platform to host numerous VMs.
VMs are designed to use the RAM on the host server. The amount of RAM you will need to deploy depends on the number of VMs and their individual configurations. It’s the cloud provider’s responsibility to ensure that the server has enough memory to serve the number of VMs hosted on that server. The RAM is located on the server’s motherboard.
It’s necessary to install more memory than what is currently required to provide for future expansion as well as to meet the requirements of the hypervisor. Nowadays, servers have the capacity to support very high memory density. When installing RAM, one must also take into account its error rectification capabilities and access speeds.

Memory technologies
Bursting and ballooning
Memory ballooning is a method that enables a hypervisor to retrieve a VM’s unused memory and reallocate the same for use elsewhere. By utilizing idle memory, the hypervisor can make optimum use of RAM.
As discussed earlier, a hypervisor is software positioned between the physical server and the VMs. The hypervisor is capable of limiting each VM’s access to hardware. This enables it to ensure balanced distribution of resources among VMs.
Overcommitment ratio
In a cloud server virtualization environment, hypervisors have the capability of overcommitting RAM. Overcommitting makes it possible for a VM running on the hypervisor to use memory in excess of what is installed on the motherboard of the server.
Normally, not all VMs use the memory allocated to them. The hypervisor reallocates unused memory to VMs that need more RAM.
Overcommitment can help you make optimum use of physical resources and lower operating costs.

CPU technologies
Hyper-threading refers to the process that enables a single microprocessor core to function like two distinct CPUs. Hyper-threading makes use of Intel’s multithreading technology.
This enables each virtual processor to function and be controlled independently.
It’s necessary for the operating system or hypervisor to have the capacity to support symmetrical multiprocessing to be capable of handling hyper-threading.
Intel Virtualization Technology (VT-x) is an extension that is used for hardware virtualization. VT-x extensions add powerful capabilities such as priority, memory handling, and migration to Intel processors.
It is necessary to enable VT-x in the system BIOS to enhance server performance.
Over-commitment ratio
CPU resources can also be overcommitted. This is known as the virtual CPU (vCPU) to physical CPU (pCPU) ratio. The hypervisor overcommits according to the processing requirements of the applications running on each VM. In the case of applications that are not CPU-intensive, a higher overcommitment ratio is set. For applications that require high processing power, a lower overcommitment ratio may be appropriate.
Since a hypervisor supports multiple VMs, there may be instances when VMs may have to wait until physical CPU resources are available. The hypervisor can pause a VM to enable other VMs to access CPU resources. There are monitoring tools available that can record and display CPU wait data.
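The vCPU:pCPU ratio described above is simple arithmetic: total vCPUs allocated across VMs divided by the host's physical cores. The numbers below are hypothetical:

```python
# Hypothetical per-VM vCPU allocations on one host
vm_vcpus = [4, 4, 2, 2, 8]
physical_cores = 8  # hypothetical host core count

ratio = sum(vm_vcpus) / physical_cores
print(f"{ratio:.1f}:1 vCPU:pCPU overcommitment")  # 2.5:1
```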

Effect to HA/DR
Data centers utilize redundant systems to maintain High Availability (HA). These redundant systems are configured in such a way that one or more may be active, and another may be kept ready in reserve to be deployed immediately in case of failure. HA is advisable in the case of critical applications so that a failure does not cause a massive outage.
The cloud service provider needs to implement HA systems to fulfill customer expectations and honor SLA commitments.
In cloud computing, a range of Disaster Recovery (DR) models and technologies are available internally and externally to back up an enterprise data center if required.
Traditional DR architectures used by large organizations include hot, warm, and cold sites. These economical backup measures are now implemented in cloud data centers.
Cloud service providers implement networks with built-in fault tolerance and DR.

Performance considerations
It is necessary to collect and record performance statistics for RAM, CPU, storage, I/O operations and any other important operations and to identify deviations from benchmarks. Keeping track of performance statistics also draws attention to any capacity shortfall. Many cloud providers’ service offerings come with cloud monitoring and management apps.

Cost considerations
Cloud computing helps businesses save on capital expenditure on hardware. Consumers can pay as they go, and they don’t need to purchase additional capacity in advance. Cloud technology makes it possible to provide additional capacity in a matter of minutes.
However, you as the consumer must manage your cloud resources efficiently. Some cloud service models are priced by hours of usage. Leaving multiple servers running when they’re not required or forgetting to reduce the scale of automation when workload decreases, can increase your costs significantly.

Energy savings
Energy-efficient data centers can be cost-effective for customers. Energy-saving methods enable these centers to save on power and incur lower operating costs. Hence, they can pass on the benefit of lower service charges to their customers.
Enterprise data centers often have idle servers running, unnecessarily consuming power. In cloud data centers, servers and other hardware are turned off by automated management systems when not in use.

Dedicated compute environment vs. shared compute environment
Most cloud consumers use the shared model because shared virtualized resources are cost-effective. Many cloud service providers also have dedicated servers offered at a much higher cost.
Customers usually opt for dedicated hardware resources because they either have special hardware needs, security rules or application restrictions.
