Fully Homomorphic Encryption and cryptography

Introduction

These five labels describe how the devices in a network are interconnected rather than their physical arrangement. For example, having a star topology does not mean that all of the computers in the network must be placed physically around a hub in a star shape. A consideration when choosing a topology is the relative status of the devices to be linked. Two relationships are possible: peer-to-peer, where the devices share the link equally, and primary-secondary, where one device controls traffic and the others must transmit through it. Ring and mesh topologies are more convenient for peer-to-peer transmission, while star and tree are more convenient for primary-secondary; a bus topology is equally convenient for either.

Mesh

In a mesh topology, every device has a dedicated point-to-point link to every other device. The term dedicated means that the link carries traffic only between the two devices it connects. A fully connected mesh network therefore has n(n – 1)/2 physical channels to link n devices. To accommodate that many links, every device on the network must have n – 1 input/output (I/O) ports.
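To make the growth in cabling concrete, here is a small Python sketch (an illustration added here, not part of the original text) that computes the number of dedicated links and the ports needed per device for a fully connected mesh of n devices:

def mesh_requirements(n: int) -> tuple[int, int]:
    """Return (number_of_links, ports_per_device) for a fully connected mesh of n devices."""
    links = n * (n - 1) // 2   # each pair of devices needs its own dedicated link
    ports = n - 1              # each device connects directly to every other device
    return links, ports

for n in (4, 8, 16):
    links, ports = mesh_requirements(n)
    print(f"n={n:2d}: {links:3d} links, {ports:2d} ports per device")

Even a modest network of 16 devices already needs 120 separate links, which is why full meshes are usually limited to small backbones.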


Figure (9) – Fully Connected Mesh Topology

A mesh offers several advantages over other network topologies. First, the use of dedicated links guarantees that each connection can carry its own data load, thus eliminating the traffic problems that can occur when links must be shared by multiple devices.

Second, a mesh topology is robust. If one link becomes unusable, it does not incapacitate the entire system.

Another advantage is privacy or security. When every message sent travels along a dedicated line, only the intended recipient sees it. Physical boundaries prevent other users from gaining access to messages.

Finally, point-to-point links make fault identification and fault isolation easy. Traffic can be routed to avoid links with suspected problems. This facility enables the network manager to discover the precise location of the fault and aids in finding its cause and solution.

The main disadvantages of a mesh are related to the amount of cabling and the number of I/O ports required. First, because every device must be connected to every other device, installation and reconfiguration are difficult. Second, the sheer bulk of the wiring can be greater than the available space (in walls, ceilings, or floors) can accommodate. Finally, the hardware required to connect each link (I/O ports and cable) can be prohibitively expensive. For these reasons a mesh topology is usually implemented in a limited fashion, for example as a backbone connecting the main computers of a hybrid network that can include several other topologies.

Star

In a star topology, each device has a dedicated point-to-point link only to a central controller, usually called a hub. The devices are not directly linked to each other. Unlike a mesh topology, a star topology does not allow direct traffic between devices. The controller acts as an exchange. If one device wants to send data to another, it sends the data to the controller, which then relays the data to the other connected device.


Figure (10) – Star topology

A star topology is less expensive than a mesh topology. In a star, each device needs only one link and one I/O port to connect it to any number of others. This factor also makes it easy to install and reconfigure. Far less cabling needs to be housed, and additions, moves, and deletions involve only one connection: between that device and the hub.

Other advantages include robustness. If one link fails, only that link is affected. All other links remain active. This factor also lends itself to easy fault identification and fault isolation. As long as the hub is working, it can be used to monitor link problems and bypass defective links.

However, although a star requires far less cable than a mesh, each node must be linked to a central hub. For this reason more cabling is required in a star than in some other topologies (such as tree, ring, or bus).

Tree

A tree topology is a variation of a star. As in a star, nodes in a tree are linked to a central hub that controls the traffic to the network. However, not every device plugs directly into the central hub. The majority of devices connect to a secondary hub that in turn is connected to the central hub.

The central hub in the tree is an active hub. An active hub contains a repeater, which is a hardware device that regenerates the received bit patterns before sending them out. Repeating strengthens transmissions and increases the distance a signal can travel.


Figure (11) – Tree Topology

The secondary hubs may be active or passive hubs. A passive hub provides a simple physical connection between the attached devices.

The advantages and disadvantages of a tree topology are generally the same as those of a star. The addition of secondary hubs, however, brings two further advantages. First, it allows more devices to be attached to a single central hub and can therefore increase the distance a signal can travel between devices. Second, it allows the network to isolate and prioritize communications from different computers. For example, the computers attached to one secondary hub can be given priority over computers attached to another secondary hub. In this way, the network designers and operator can guarantee that time-sensitive data will not have to wait for access to the network.

A good example of tree topology can be seen in cable TV technology, where the main cable from the main office is divided into main branches, and each branch is divided into smaller branches, and so on. Hubs are used at the points where the cable is divided.

Bus

The preceding examples all describe point-to-point configurations. A bus topology, on the other hand, is multipoint. One long cable acts as a backbone to link all the devices in the network.

Nodes are connected to the bus cable by drop lines and taps. A drop line is a connection running between the device and the main cable. A tap is a connector that either splices into the main cable or punctures the sheathing of a cable to create a contact with the metallic core. As a signal travels along the backbone, some of its energy is transformed into heat. Therefore, it becomes weaker and weaker the farther it has to travel. For this reason there is a limit on the number of taps a bus can support and on the distance between those taps.


Advantages of a bus topology include ease of installation. Backbone cable can be laid along the most efficient path, then connected to the nodes by drop lines of various lengths. In this way, a bus uses less cabling than mesh, star, or tree topologies. In a star, for example, four network devices in the same room require four lengths of cable reaching all the way to the hub. In a bus, this redundancy is eliminated. Only the backbone cable stretches through the entire facility. Each drop line has to reach only as far as the nearest point on the backbone.


Figure (12) – Bus Topology

Disadvantages include difficult reconfiguration and fault isolation. A bus is usually designed to be optimally efficient at installation. It can therefore be difficult to add new devices. As mentioned above, signal reflection at the taps can cause degradation in quality. This degradation can be controlled by limiting the number and spacing of devices connected to a given length of cable. Adding new devices may therefore require modification or replacement of the backbone.

In addition, a fault or break in the bus cable stops all transmission, even between devices on the same side of the problem. The damaged area reflects signals back in the direction of origin, creating noise in both directions.

Ring

In a ring topology, each device has a dedicated point-to-point line configuration only with the two devices on either side of it. A signal is passed along the ring in one direction, from device to device, until it reaches its destination. Each device in the ring incorporates a repeater. When a device receives a signal intended for another device, its repeater regenerates the bits and passes them along.

A ring is relatively easy to install and reconfigure. Each device is linked only to its immediate neighbors (either physically or logically). To add or delete a device requires moving only two connections. The only constraints are media and traffic considerations (maximum ring length and number of devices). In addition, fault isolation is simplified. Generally in a ring, a signal is circulating at all times. If one device does not receive a signal within a specified period, it can issue an alarm. The alarm alerts the network operator to the problem and its location.

However, unidirectional traffic can be a disadvantage. In a simple ring, a break in the ring (such as a disabled station) can disable the entire network. This weakness can be solved by using a dual ring or a switch capable of closing off the break.


Figure (13) – Ring Topology

OSI Model

This model is based on a proposal developed by the International Standards Organization (ISO) as a first step toward international standardization of the protocols used in the various layers. The model is called the ISO-OSI (Open Systems Interconnection) Reference Model because it deals with connecting open systems—that is, systems that are open for communication with other systems. We will usually just call it the OSI model for short.

The OSI model has seven layers. The principles that were applied to arrive at the seven layers are as follows

1. A layer should be created where a different level of abstraction is needed.

2. Each layer should perform a well-defined function.

3. The function of each layer should be chosen with an eye toward defining internationally standardized protocols.

4. The layer boundaries should be chosen to minimize the information flow across the interfaces.

5. The number of layers should be large enough that distinct functions need not be thrown together in the same layer out of necessity, and small enough that the architecture does not become unwieldy.

Below we will discuss each layer of the model in turn, starting at the bottom layer. Note that the OSI model itself is not a network architecture because it does not specify the exact services and protocols to be used in each layer. It just tells what each layer should do. However, ISO has also produced standards for all the layers, although these are not part of the reference model itself. Each one has been published as a separate international standard.


Figure (16) – The OSI Reference Model

The Physical Layer

The physical layer is concerned with transmitting raw bits over a communication channel. The design issues have to do with making sure that when one side sends a 1 bit, it is received by the other side as a 1 bit, not as a 0 bit. Typical questions here are how many volts should be used to represent a 1 and how many for a 0, how many microseconds a bit lasts, whether transmission may proceed simultaneously in both directions, how the initial connection is established and how it is torn down when both sides are finished, and how many pins the network connector has and what each pin is used for. The design issues here largely deal with mechanical, electrical, and procedural interfaces, and the physical transmission medium, which lies below the physical layer.

The Data Link Layer

The main task of the data link layer is to take a raw transmission facility and transform it into a line that appears free of undetected transmission errors to the network layer. It accomplishes this task by having the sender break the input data up into data frames (typically a few hundred or a few thousand bytes), transmit the frames sequentially, and process the acknowledgement frames sent back by the receiver. Since the physical layer merely accepts and transmits a stream of bits without any regard to meaning or structure, it is up to the data link layer to create and recognize frame boundaries. This can be accomplished by attaching special bit patterns to the beginning and end of the frame. If these bit patterns can accidentally occur in the data, special care must be taken to make sure these patterns are not incorrectly interpreted as frame delimiters.
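To make the idea of frame delimiters concrete, the sketch below (added here for illustration) shows one common delimiting technique, byte stuffing with a flag byte and an escape byte. The specific byte values and function names are assumptions chosen for the example, not something specified in the text above:

FLAG = 0x7E  # marks the start and end of a frame (illustrative value)
ESC = 0x7D   # escape byte inserted before any payload byte that looks like a delimiter

def frame(payload: bytes) -> bytes:
    """Wrap a payload in flag bytes, escaping any bytes that could be mistaken for delimiters."""
    body = bytearray([FLAG])
    for b in payload:
        if b in (FLAG, ESC):
            body.append(ESC)        # special care so data bytes are not read as frame boundaries
        body.append(b)
    body.append(FLAG)
    return bytes(body)

def deframe(stream: bytes) -> bytes:
    """Recover the payload from a single framed byte sequence."""
    assert stream[0] == FLAG and stream[-1] == FLAG
    payload = bytearray()
    escaped = False
    for b in stream[1:-1]:
        if escaped:
            payload.append(b)
            escaped = False
        elif b == ESC:
            escaped = True
        else:
            payload.append(b)
    return bytes(payload)

data = bytes([0x01, 0x7E, 0x02, 0x7D, 0x03])   # payload that happens to contain FLAG and ESC
assert deframe(frame(data)) == data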


A noise burst on the line can destroy a frame completely. In this case, the data link layer software on the source machine can retransmit the frame. However, multiple transmissions of the same frame introduce the possibility of duplicate frames. A duplicate frame could be sent if the acknowledgement frame from the receiver back to the sender were lost. It is up to this layer to solve the problems caused by damaged, lost, and duplicate frames. The data link layer may offer several different service classes to the network layer, each of a different quality and with a different price.

Another issue that arises in the data link layer (and in most of the higher layers as well) is how to keep a fast transmitter from drowning a slow receiver in data. Some traffic regulation mechanism must be employed to let the transmitter know how much buffer space the receiver has at the moment. Frequently, this flow regulation and the error handling are integrated.
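A minimal sketch of this kind of traffic regulation follows, assuming a simple credit scheme in which the receiver advertises its free buffer slots; the class and method names are invented for illustration:

class Receiver:
    """Receiver that advertises how much buffer space it currently has (a simple credit scheme)."""
    def __init__(self, buffer_size: int):
        self.buffer = []
        self.buffer_size = buffer_size

    def credits(self) -> int:
        return self.buffer_size - len(self.buffer)  # free slots the transmitter may still use

    def accept(self, frame: str) -> None:
        assert self.credits() > 0, "transmitter ignored the advertised window"
        self.buffer.append(frame)

    def consume_one(self) -> str:
        return self.buffer.pop(0)  # the application drains the buffer, freeing one credit

rx = Receiver(buffer_size=2)
outgoing = [f"frame-{i}" for i in range(5)]
while outgoing:
    if rx.credits() > 0:            # the fast sender waits until the slow receiver has room
        rx.accept(outgoing.pop(0))
    else:
        rx.consume_one()            # stand-in for the receiver processing a frame
print("all frames delivered without overrunning the receiver")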

If the line can be used to transmit data in both directions, this introduces a new complication that the data link layer software must deal with. The problem is that the acknowledgement frames for A to B traffic compete for the use of the line with data frames for the B to A traffic.

Broadcast networks have an additional issue in the data link layer: how to control access to the shared channel. A special sublayer of the data link layer, the medium access sublayer, deals with this problem.

The Network Layer

The network layer is concerned with controlling the operation of the subnet. A key design issue is determining how packets are routed from source to destination. Routes can be based on static tables that are “wired into” the network and rarely changed. They can also be determined at the start of each conversation, for example a terminal session. Finally, they can be highly dynamic, being determined anew for each packet, to reflect the current network load.
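The difference between static and dynamic routing can be sketched in a few lines; the table entries, next-hop names, and load figures below are invented for illustration and are not from the text:

# Hypothetical static routing table: destination network -> next hop.
STATIC_ROUTES = {
    "10.1.0.0/16": "router-A",
    "10.2.0.0/16": "router-B",
}

# Hypothetical per-link load figures a dynamic router might consult for each packet.
LINK_LOAD = {"router-A": 0.9, "router-B": 0.2}

def route_static(dest_net: str) -> str:
    """Static routing: the table is 'wired into' the network and rarely changes."""
    return STATIC_ROUTES[dest_net]

def route_dynamic(candidates: list[str]) -> str:
    """Dynamic routing: pick the next hop with the lightest current load, anew for each packet."""
    return min(candidates, key=lambda hop: LINK_LOAD[hop])

print(route_static("10.1.0.0/16"))              # always router-A
print(route_dynamic(["router-A", "router-B"]))  # router-B while router-A is congested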

If too many packets are present in the subnet at the same time, they will get in each other’s way, forming bottlenecks. The control of such congestion also belongs to the network layer.

Since the operators of the subnet may well expect remuneration for their efforts, there is often some accounting function built into the network layer. At the very least, the software must count how many packets, characters, or bits each customer sends, to produce billing information. When a packet crosses a national border, with different rates on each side, the accounting can become complicated.

When a packet has to travel from one network to another to get to its destination, many problems can arise. The addressing used by the second network may be different from the first one. The second one may not accept the packet at all because it is too large. The protocols may differ, and so on. It is up to the network layer to overcome all these problems to allow heterogeneous networks to be interconnected.

In broadcast networks, the routing problem is simple, so the network layer is often thin or even nonexistent.

The Transport Layer

The basic function of the transport layer is to accept data from the session layer, split it up into smaller units if need be, pass these to the network layer, and ensure that the pieces all arrive correctly at the other end. Furthermore, all this must be done efficiently, and in a way that isolates the upper layers from the inevitable changes in the hardware technology.

Under normal conditions, the transport layer creates a distinct network connection for each transport connection required by the session layer. If the transport connection requires a high throughput, however, the transport layer might create multiple network connections, dividing the data among the network connections to improve throughput. On the other hand, if creating or maintaining a network connection is expensive, the transport layer might multiplex several transport connections onto the same network connection to reduce the cost. In all cases, the transport layer is required to make the multiplexing transparent to the session layer.

The transport layer also determines what type of service to provide the session layer, and ultimately, the users of the network. The most popular type of transport connection is an error-free point-to-point channel that delivers messages or bytes in the order in which they were sent. However, other possible kinds of transport service are transport of isolated messages with no guarantee about the order of delivery, and broadcasting of messages to multiple destinations. The type of service is determined when the connection is established.

The transport layer is a true end-to-end layer, from source to destination. In other words, a program on the source machine carries on a conversation with a similar program on the destination machine, using the message headers and control messages. In the lower layers, the protocols are between each machine and its immediate neighbors, not between the ultimate source and destination machines, which may be separated by many routers. There is a difference between layers 1 through 3, which are chained, and layers 4 through 7, which are end-to-end. Many hosts are multiprogrammed, which implies that multiple connections will be entering and leaving each host. There needs to be some way to tell which message belongs to which connection. The transport header is one place this information can be put.
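As a sketch of how a header field lets a multiprogrammed host demultiplex incoming messages to the right connection, consider the following; the Segment class and connection identifiers are hypothetical illustrations (in TCP/UDP, port numbers play this role):

from dataclasses import dataclass

@dataclass
class Segment:
    connection_id: int   # carried in the transport header
    payload: str

# One buffer per open connection on a multiprogrammed host.
connections: dict[int, list[str]] = {1: [], 2: []}

def deliver(segment: Segment) -> None:
    """Demultiplex an incoming segment to the right connection using its header field."""
    connections[segment.connection_id].append(segment.payload)

for seg in (Segment(1, "hello"), Segment(2, "status?"), Segment(1, "world")):
    deliver(seg)

print(connections)  # {1: ['hello', 'world'], 2: ['status?']}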

In addition to multiplexing several message streams onto one channel, the transport layer must take care of establishing and deleting connections across the network. This requires some kind of naming mechanism, so that a process on one machine has a way of describing with whom it wishes to converse. There must also be a mechanism to regulate the flow of information, so that a fast host cannot overrun a slow one. Such a mechanism is called flow control and plays a key role in the transport layer (also in other layers). Flow control between hosts is distinct from flow control between routers, although we will later see that similar principles apply to both.

The Session Layer

The session layer allows users on different machines to establish sessions between them. A session allows ordinary data transport, as does the transport layer, but it also provides enhanced services useful in some applications. A session might be used to allow a user to log into a remote timesharing system or to transfer a file between two machines.

One of the services of the session layer is to manage dialogue control. Sessions can allow traffic to go in both directions at the same time, or in only one direction at a time. If traffic can only go one way at a time (analogous to a single railroad track), the session layer can help keep track of whose turn it is.

A related session service is token management. For some protocols, it is essential that both sides do not attempt the same operation at the same time. To manage these activities, the session layer provides tokens that can be exchanged. Only the side holding the token may perform the critical operation.

Another session service is synchronization. Consider the problems that might occur when trying to do a 2-hour file transfer between two machines with a 1-hour mean time between crashes. After each transfer was aborted, the whole transfer would have to start over again and would probably fail again the next time as well. To eliminate this problem, the session layer provides a way to insert checkpoints into the data stream, so that after a crash, only the data transferred after the last checkpoint have to be repeated.
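The checkpointing idea can be sketched as follows, assuming a simulated crash probability; the numbers are arbitrary and only illustrate that a crash forces retransmission back to the last checkpoint rather than to the beginning:

import random

random.seed(7)

def transfer_with_checkpoints(blocks: list[str], checkpoint_every: int) -> int:
    """Send blocks, recording a checkpoint every few blocks; on a crash, resume from the last checkpoint.
    Returns how many block transmissions were needed in total."""
    checkpoint = 0          # index of the last confirmed checkpoint
    sent = 0
    i = 0
    while i < len(blocks):
        if random.random() < 0.1:     # simulated crash in mid-transfer
            i = checkpoint            # only data after the last checkpoint is repeated
            continue
        sent += 1
        i += 1
        if i % checkpoint_every == 0:
            checkpoint = i            # the session layer inserts a checkpoint into the stream
    return sent

blocks = [f"block-{i}" for i in range(100)]
print("transmissions needed:", transfer_with_checkpoints(blocks, checkpoint_every=10))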

The Presentation Layer

The presentation layer performs certain functions that are requested sufficiently often to warrant finding a general solution for them, rather than letting each user solve the problems. In particular, unlike all the lower layers, which are just interested in moving bits reliably from here to there, the presentation layer is concerned with the syntax and semantics of the information transmitted.

A typical example of a presentation service is encoding data in a standard agreed upon way. Most user programs do not exchange random binary bit strings. They exchange things such as people’s names, dates, amounts of money, and invoices. These items are represented as character strings, integers, floating-point numbers, and data structures composed of several simpler items. Different computers have different codes for representing character strings, integers, and so on. In order to make it possible for computers with different representations to communicate, the data structures to be exchanged can be defined in an abstract way, along with a standard encoding to be used “on the wire.” The presentation layer manages these abstract data structures and converts from the representation used inside the computer to the network standard representation and back.
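As an illustration of converting between an internal representation and an agreed wire encoding, here is a sketch using a fixed, big-endian record layout; the "invoice" fields and the chosen format string are assumptions for the example, not a real standard:

import struct

# Abstractly, an invoice here is (customer_id: uint32, amount_in_cents: uint64, currency: 3 ASCII chars).
WIRE_FORMAT = ">IQ3s"   # assumed "network standard" encoding: big-endian, fixed-width fields

def to_wire(customer_id: int, amount_in_cents: int, currency: str) -> bytes:
    """Convert from the internal Python representation to the agreed wire encoding."""
    return struct.pack(WIRE_FORMAT, customer_id, amount_in_cents, currency.encode("ascii"))

def from_wire(data: bytes) -> tuple[int, int, str]:
    """Convert back from the wire encoding to the internal representation."""
    customer_id, amount, currency = struct.unpack(WIRE_FORMAT, data)
    return customer_id, amount, currency.decode("ascii")

wire = to_wire(42, 199_99, "USD")
print(wire.hex(), from_wire(wire))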

The Application Layer

The application layer contains a variety of protocols that are commonly needed. For example, there are hundreds of incompatible terminal types in the world. Consider the plight of a full-screen editor that is supposed to work over a network with many different terminal types, each with different screen layouts, escape sequences for inserting and deleting text, moving the cursor, and so on.

One way to solve this problem is to define an abstract network virtual terminal that editors and other programs can be written to deal with.

Network architecture is the design of a communications network. It is a framework for the specification of a network's physical components, their functional organization and configuration, its operational principles and procedures, and the data formats used in its operation. In telecommunication, a network architecture may also include a detailed description of the products and services delivered via the communications network, as well as the rate and billing structures under which those services are paid for.

Reference

http://www.wikipedia.com/architecture

Network architecture diagram

Figure-network architecture

Full explanation of network diagram

We designed a basic network architecture for this company, following the milestones set for the network architecture, and included all the necessary information. The components of this architecture are described below.

Workstation

A workstation is a computer designed for professional work in an office. The company is an energy company whose customers upload meter readings and make payments on the company website; in the past, these payment reports and uploaded files were attacked. In the new network architecture the security is very strong, so all workstation tasks can be carried out with confidence and are protected from attack.

Reference

Own opinion

Router

Routers provide connectivity to one or more computers and help to form a network. For home users, they are mostly useful for taking a single broadband internet account and sharing it among two or more computers. Standard routers require an internet connection from a standalone modem, but modem-routers, which can be plugged into any broadband-enabled phone line, are growing in popularity, reducing cable clutter and taking up only one power socket.


The rules for handling traffic are an essential component of internet security. A home or office router may have rules restricting how computers outside the network can connect to computers inside the network, as well as preventing private network traffic from spilling into the outside world. Many home routers include additional security features: they scan and filter all traffic that passes through them, frequently through a firewall integrated in the hardware. Some may carry out other useful roles, such as acting as a print server.

Reference

http://www.misco.com/router

Switches

A switch is sometimes called an ‘intelligent hub’. A switch does the same job as a hub, in that it connects devices to allow them to act as a single segment. However, it does not automatically send traffic to every other port. Each time a frame of data comes into the switch, it saves the source physical address (MAC address) and the port it came from in its MAC address table. It then looks up the destination MAC address in the table; if it recognizes it, it sends the frame to the appropriate port. If the address is not in the table, or it is a broadcast address, the switch does the same as a hub and sends the frame out through every port except the originating port.
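The learning and forwarding behavior described above can be sketched as follows (a simplified illustration added here; the port count and MAC addresses are invented):

BROADCAST = "ff:ff:ff:ff:ff:ff"

class Switch:
    def __init__(self, ports: int):
        self.ports = ports
        self.mac_table: dict[str, int] = {}   # MAC address -> port it was last seen on

    def receive(self, in_port: int, src_mac: str, dst_mac: str) -> list[int]:
        """Learn the source address, then return the port(s) the frame is forwarded out of."""
        self.mac_table[src_mac] = in_port                     # learn where the sender lives
        if dst_mac != BROADCAST and dst_mac in self.mac_table:
            return [self.mac_table[dst_mac]]                  # known unicast: one port only
        # unknown destination or broadcast: flood like a hub, except the originating port
        return [p for p in range(self.ports) if p != in_port]

sw = Switch(ports=4)
print(sw.receive(0, "aa:aa", "bb:bb"))   # bb:bb not learned yet -> flooded to ports 1, 2, 3
print(sw.receive(1, "bb:bb", "aa:aa"))   # aa:aa already learned -> sent only to port 0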

Reference

http://www.misco.com/switches

Hubs

A hub is a device for connecting multiple Ethernet devices, typically PCs, to form a single segment – a portion of a network that is separated from other parts of the network. It has multiple ports through which devices are linked, and when it receives data it sends the data out again through every port except the one it came in through.

A hub replaces the cable, makes sure that traffic is seen by every computer on the network, and enables the network to be connected in the form of a star rather than a bus, using the familiar twisted-pair Ethernet cable.

Reference

http://www.misco.com/hubs

Firewall

A firewall is a part of a computer system or network that is designed to block unauthorized access while permitting authorized communications. It is a device or set of devices configured to permit or deny network transmissions based upon a set of rules and other criteria.

Firewalls can be implemented in hardware or software, or a combination of both. They are frequently used to prevent unauthorized Internet users from accessing private networks connected to the Internet, in particular intranets. All messages entering or leaving the intranet pass through the firewall, which inspects each message and blocks those that do not meet the specified security criteria.
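A minimal sketch of rule-based filtering as described above, assuming ordered permit/deny rules with a default deny; the addresses, ports, and string-prefix matching are simplified illustrations, not any real firewall's syntax:

from dataclasses import dataclass
from typing import Optional

@dataclass
class Rule:
    action: str                      # "permit" or "deny"
    src_prefix: str                  # source address prefix, e.g. "10.0." (string matching only, for brevity)
    dst_port: Optional[int] = None   # None matches any destination port

RULES = [
    Rule("permit", "10.0.", 443),    # internal clients may reach HTTPS services
    Rule("deny",   "10.0.", 23),     # but telnet from the inside is blocked
    Rule("deny",   ""),              # catch-all rule: deny everything else
]

def allowed(src_ip: str, dst_port: int) -> bool:
    """Evaluate the rules in order; the first matching rule decides."""
    for rule in RULES:
        if src_ip.startswith(rule.src_prefix) and rule.dst_port in (None, dst_port):
            return rule.action == "permit"
    return False   # unreachable here because of the catch-all rule

print(allowed("10.0.3.7", 443))      # True
print(allowed("10.0.3.7", 23))       # False
print(allowed("203.0.113.9", 443))   # False (default deny)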

Reference

http://www.wikipedia.com/firewall

Demilitarized zone (DMZ)

In computer security, a DMZ, or demilitarized zone, is a physical or logical subnetwork that contains and exposes an organization's external services to a larger untrusted network, typically the Internet. The term is usually shortened to DMZ by information technology professionals; it is sometimes referred to as a perimeter network. The function of a DMZ is to add a further layer of security to an organization's local area network (LAN): an external attacker has access only to equipment in the DMZ, rather than to any other part of the network.

Diagram of a typical network employing a DMZ using a three-legged firewall

Reference

http://www.wikipedia.com/DMZ

Honey pot

In computer terminology, a honeypot is a trap set to detect, deflect, or in some manner counteract attempts at unauthorized use of information systems. Usually it consists of a computer, data, or a network site that appears to be part of a network but is actually isolated, (un)protected, and monitored, and which seems to contain information or a resource of value to attackers.

Reference

http://www.wikipedia.com/honeypot

Virtual private network (VPN)

A virtual private network (VPN) is a computer network that uses a public telecommunication infrastructure such as the Internet to provide remote offices or individual users with secure access to their organization's network. It aims to avoid an expensive system of owned or leased lines that can be used by only one organization.

It encapsulates the data transferred between two or more networked devices that are not on the same private network, so as to keep the transferred data private from other devices on the intervening local or wide area networks. There are many different classifications, implementations, and uses for VPNs.

Reference

http://www.wikipedia.com/VPN

HIDS agent installed

A host-based IDS (HIDS) deployment uses a central server and multiple agents, which together provide protection for both public and private network hosts. Its advantage is local installation on every host. The HIDS server performs all log analysis for the agents connected to it. Active responses are initiated from the server, but can be executed on a single agent or on all agents simultaneously.

Reference

Own opinion

Internal NIDS sensor

An internal (inline) NIDS sensor is inserted into a network segment so that the traffic it monitors must pass through the sensor. One way to achieve an internal NIDS sensor is to combine the NIDS sensor logic with another network device, such as a firewall or a LAN switch. This approach has the advantage that no additional separate hardware device is needed; all that is required is the NIDS sensor software. An alternative is a stand-alone inline NIDS sensor. The primary motivation for the use of inline sensors is to enable them to block an attack when one is detected. In this case the device performs both intrusion detection and intrusion prevention functions.

Reference

http://www.blunet.net.cn.com

External NIDS sensor

An external (passive) NIDS sensor monitors a copy of the network traffic; from the point of view of traffic flow, the real traffic does not pass through the device. The sensor connects to the network transmission medium, such as a fiber optic cable, by a direct physical tap. The tap provides the sensor with a copy of all network traffic being carried by the medium. The network interface card (NIC) for this tap usually does not have an IP address configured for it. All traffic into this NIC is simply collected, with no protocol interaction with the network.

Reference

http://www.blunet.net.cn.com

Server and database server

The most important part of the network architecture is the server used by this company. All of the company's important internal and external information is stored on the server, the server responds to every client request, and the details of all the employees at the company's workstations are handled in this architecture by the server.

The database server is also very important for this company, because the company lets its customers upload meter readings and make payments; customer details are saved on the database server for future use.

Reference

Own opinion

Front Office Is The Nerve Center Information Technology Essay

The front office is the nerve center of the hotel and, as such, is an excellent place in which to gain a detailed understanding of how a modern lodging establishment operates. A position in the front office is an ideal launching pad for future advancement in the hotel industry. Many executive directors, sales executives, banquet managers, and other hotel executives began their careers in the front office.

The front office is responsible for greeting guests, managing rooms, and handling complaints. Many members of the hotel staff work behind the scenes and rarely, if ever, have any personal contact with guests. In contrast, the front office staff performs its job before the public, like actors on a stage. Clients form their first and, sometimes, most-lasting impressions of the hotel based on their experience with the men and women of the front office.

Answer

The front office staff is the public’s main contact with the hotel. The staff members handle reservations, greet guests on arrival, register new guests, dispense keys, handle incoming and outgoing mail, take messages for guests, provide information, listen to complaints, and handle check-out procedures when guests depart. The personnel who may be employed in the front office are the front office manager, assistant front office manager, front desk representatives, night auditor, cashiers, reservationists, and telephone operators.

The Front Office Manager

The front office manager has a wide range of responsibilities. A front office manager must maintain a high level of efficiency among the front office staff, make effective decisions regarding reservation policies and room assignments, and handle guest problems and complaints with courtesy and tact. Besides, the manager must maintain an open communication channel with all the other departments of the hotel.

The front office manager assigns duties to staff members, prepares weekly work schedules and shift assignments, and holds regularly scheduled staff meetings to ensure that staff members understand and adhere to hotel policies and operating procedures. Moreover, the manager may also be responsible for hiring and training new employees, and for periodically reviewing the performance of each staff member.

Furthermore, it is the duty of a typical front office manager to define reservation policies and set quotas, with the goal of maintaining maximum room occupancy. The manager must continually monitor arrivals, departures, and cancellations and be responsible for setting policies regarding no-shows, early arrivals, and overbookings.

The front office manager is usually responsible for dealing with clients and taking corrective action when special guest needs, problems, or complaints arise. Other guest communications duties may include providing information on hotel policies, facilities, and services, and welcoming important guests.

In addition, the front office manager confers regularly with the sales and marketing department for updates on special group reservations, billing arrangements, potential peak periods, and general forecasts. Besides, the manager must also maintain close communication with the housekeeping department about room status and check regularly with the accounting department for information about special billing requirements or problems. The front office manager must prepare regular written reports on the activities and progress of the front office for review by the executive director or assistant director.

The Assistant Front Office Manager

The assistant front office manager is responsible for coordinating front desk operations. He or she may train new front desk personnel, monitor guest accounts and payments and authorize checks and special procedures. Besides, the assistant front office manager may assist the front desk staff during periods of peak activity.

Other duties of an assistant manager include reviewing reservations for the current day and preparing daily room occupancy forecasts.

Front Desk Representatives

The front desk representatives convey the personality of the hotel to guests more than any other staff members in the hotel. Front desk representatives are responsible for making guests feel welcome and for effecting an immediate response to problems or complaints. In addition to working directly with clients of the hotel, front desk representatives have an important role in assigning rooms and maintaining maximum occupancy.

Besides, the front desk representative is responsible for verifying reservation information, checking credit identification and authorization, assigning rooms and dispensing room keys when guests arrive at the hotel. It is also the responsibility of the front desk representative to notify the bell captain or summon a bell attendant to transport guest luggage.

Other duties include providing information about facilities and policies and handling special guest requests, such as photocopies, gift purchases and so forth. They may also be required to handle telephone calls and reservation requests, or to direct calls to the reservations department or switchboard. Guest communications duties include stamping and sorting guest mail, taking messages for guests and sending fax or telex documents.

When a guest is ready to depart, the front desk representative summons a bell attendant to transport guest luggage to the lobby and prepares, verifies, and arranges the guest check. Other duties include checking room status at the beginning of the shift, reviewing reservations for the current date, and communicating with the housekeeping department regarding occupancy forecasts and room needs.

The Night Auditor

A night auditor has a dual role. First, he or she must perform the duties of a front desk representative at night. Second, the night auditor has an important bookkeeping function to perform, which is preparing the machine balance report. Typically, a night auditor’s shift runs from 11 p.m. to 7 a.m. When the front desk activity slackens, the night auditor begins to audit, or trace the posting of, the previous day’s transactions to verify their accuracy.

The night auditor calculates the total charges owed to the hotel and incurred by guests during the previous business day. Total payments received from guests during the day are subtracted from the total charges to determine the daily balance. The balance represents the amount that is still owed to the hotel for the previous day’s transactions.
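As a small illustration of this arithmetic, with invented figures (a sketch added here, not taken from the text):

# Hypothetical figures for one business day.
charges = {"room": 4200.00, "restaurant": 612.50, "telephone": 38.75}
payments_received = 3100.00

total_charges = sum(charges.values())
daily_balance = total_charges - payments_received   # amount still owed to the hotel

print(f"Total charges: {total_charges:8.2f}")
print(f"Payments:      {payments_received:8.2f}")
print(f"Daily balance: {daily_balance:8.2f}")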

Cashiers

A front desk cashier is responsible for checking out departing clients, posting charges, verifying the guest check and handling payments. Cashiers may also be required to assist other members of the front desk staff in the performance of their duties.

At some properties, front desk representatives handle cashier functions as well as guest registration. But at other properties, the front office staff may be more highly specialized. For example, front desk receptionists may be responsible for greeting arriving guests, checking reservations and registering guests while cashiers are responsible for receiving payments on check-out.

Besides, the cashier may be responsible for calculating the charges and presenting the guest check.

Front desk receptionists are supervised by the front office manager or assistant front office manager. Although cashiers work at the front desk, they are normally considered to be members of the accounting department and therefore, work under the supervision of the accounting manager.

In addition, the cashier is responsible to perform routine front office duties such as sorting mail, handling guest communications and coordinating room status reports.

Front Office Organizational Chart
Front Office organization chart adapted from a small-sized hotel: Front Office Manager, Night Auditor, Guest Service, Reservation, Front Desk.

Front Office organization chart adapted from a medium-sized hotel: Front Office Manager, Front Desk Representative, Reservation Manager, Night Auditor, Concierge, Telephone Operator, Cashier, Elevator Operator, Bell Staff, Room Key Clerk.

Front Office organization chart adapted from a large-sized hotel: Front Office Manager, Front Office Manager Secretary, Duty Manager, Night Manager, Reservation Manager, Telephone Operator Supervisor, Telephone Operators, GSO Supervisor, Group Coordinator, Guest Service Officer, Bell Captain, Airport Representative.

Conclusion

Interdepartmental cooperation must be stressed during the introduction to the front office. This is an ideal time to establish the importance of harmony among the housekeeping, maintenance, marketing and sales, food and beverage, and front office departments. The front office must take the lead in establishing good communications among departments. Because the front office is the initial point of contact for guests, obtaining status reports, maintaining communications, and knowing the functions being hosted each day are the responsibilities of the front office staff. Overlooking trivial misunderstandings with other departments sometimes takes colossal effort, but the front office must keep the communication lines open. This is because guests benefit from and appreciate the work of a well-informed front office.

Question 2

Front office staff must have certain skills to win guests over at the first impression. Write about front office staff skills in guest relations.

Introduction

In the client’s mind, the character and competence of the entire hotel are reflected in the personality of the front office staff. The people of the front office may be the client’s first and last contact with the hotel. For arriving guests, their behavior sets the tone for the entire stay. For departing clients, their final words create lasting impressions.

It is the staff’s responsibility to create a sense of belonging. Clients must be made to feel as if they are part of a family. The front desk staff must convey the impression that it is not there just to sell rooms, but rather to make guest’s stay at the hotel as enjoyable as possible.

A skilled front desk representative shows personal respect for every client and genuine concern for his or her needs. Clients must be made to believe in the staff’s reliability and willingness to serve. An open mind and a friendly attitude are indispensable traits of a front desk representative.

Answer

Communication

First impressions are only part of creating a positive relationship between hotel and client. No matter how favorably someone responds to strangers, communication and understanding must also take place.

The way that a front desk representative addresses guests creates the impression of respect and concern that the hotel has for its clients. It is important to address clients as they wish to be addressed. For example, a client named William Jones might be addressed by his friends as Will, Willie, Bill, or Billy, but the hotel staff should always address him as Mr. Jones.

When a client first arrives, the front desk representative should seek to establish a “comfort zone” in which the client feels at ease. A simple question like “How was your flight?” or a comment like “What a beautiful dress!” can establish instant rapport. A courteous staff member is a good listener as well as an efficient communicator. Asking questions indicates that the representative is interested in the client. Listening to the answers indicates personal respect and attention to the guest’s needs.

First impressions count more than anything that follows. Front office workers, including receptionists, telephone operators, and sales assistants, are the first people that a customer or visitor sees or speaks to, and they form the impression that anyone gets of the company; that is why front office workers must display professional and ethical standards in their duties.

All front office workers should be welcoming and approachable. They should be unfailingly polite and courteous to all.

Telephone operators have only their voices to project their personality, and hence it is more difficult for them to sound welcoming, but smiling as they answer helps. Telephone operators also need to remember that callers can hear background conversation and whispers. They should not leave callers hanging on a line but should check in with them every 20 seconds and, if the wait is too long, ask if they can connect the caller to someone else.

Besides that, front office staff need to know that clients are individuals and can be angry, irritable, rude, or upset, and that their problem may not be with the worker at all but with the situation in which they find themselves. Thus, receptionists, telephone operators, and sales assistants need to know how to calm and comfort their clients.

Front office staff must know that it is not good ethics to ignore a customer and leave him waiting unattended. Customers should be welcomed on the premises as soon as possible; it is especially irritating to have to wait and listen to front office staff chatting about their personal business among themselves or on the telephone while the client is left waiting.

A company’s front office workers are its public face, and they need particular qualities. Telephone operators, sales assistants, and receptionists need to be professional and to demonstrate the highest ethical standards in their work. They need good manners, discretion, tact, and the ability to like and understand people and their feelings. Front office workers are special people with very special qualities.

Behavior

Behavior describes how a person acts towards another person. For example, a receptionist must speak correctly and promptly, in a tone appropriate to the situation. A receptionist must also aim to earn the guest's satisfaction.

Communicating with guests with warmth and a welcoming smile, and providing efficient service, are very useful in achieving guest satisfaction with the lodging establishment. In addition, receptionists should always show respect and appreciation whenever they are dealing with guests; even when they are busy with other tasks, they should set those tasks aside and attend to guests' needs and wants. As a professional receptionist, formal and neat attire, clean nails, and a neat hairstyle must be maintained. This is how behavior plays a role for front office staff, and how they make a good impression on guests and contribute to guest satisfaction.

Self-presentation

This covers the interpersonal presentation of a receptionist, such as dressing and grooming. Dressing and grooming play an important role in being a professional staff member, because they convey the receptionist's personality to guests, help create the best possible impression, and allow guests to recognize staff by their uniform for quick problem solving.

Position

Where front office staff stand is important, not only in relation to equipment such as the desk, but also in relation to the people they are dealing with. Each person has their own area of ‘personal space’ or privacy, and any invasion of it by a stranger makes people feel uncomfortable.

Slapping the registration form on the table and leaning forward to watch as the guest is filling in the form is one bad example of personal space invasion. This kind of invasion makes guests feel uncomfortable and should therefore be avoided.

Posture

Posture covers how receptionists stand or sit in relation to guests. Facing somebody usually indicates interest, and leaning forward shows even greater interest, unless it involves an invasion of personal space.

The receptionist should never keep her arms folded defensively, because this posture conveys the impression that she is not very eager to help or assist the guests.

Gesture

Gesture is closely linked with posture and it covers the way people send signals by moving parts of their bodies, mainly hands, arms and shoulders.

A shrug should not be used by front office staff, because it can be very irritating: it suggests that the person is unconsciously trying to avoid the guest's problems or complaints.

Therefore, front office staff should always welcome their guests with open palms or a handshake as a sign of friendship and honesty. They should try not to make too many gestures, as some gestures will create a negative or bad impression of the front office staff and of the hotel itself. For example, hand-to-face gestures like touching one's mouth or nose indicate a degree of deceit or a negative feeling such as doubt, apprehension, or uncertainty.

Expression

Every guest has the ability to read whether the front office staff is friendly or not by looking at her expression.

Smiles are an important facial expression. They show interest, excitement, empathy, concern; they create an upbeat, positive environment.

Thus, it is a must for front office staff to show a positive expression to guests. For example, front office staff must serve guests with bright smiles to show friendliness, warmth, and respect.

Eye contact

The front office staff should always maintain positive eye contact with guests to show sincerity and commitment of a front office staff towards their job.

Besides that, guests tend to trust and show more confidence towards the front office staff who displays her sincere and positive eye contact.

In addition, not only does focused eye contact display the confidence of the front office staff, it also helps the guest to understand that the staff member is really paying attention to the conversation.

Dealing with complaints

No matter how well employees of the hotel perform their jobs, the front office staff will inevitably have to deal with clients who have complaints. Unfortunately, people who are displeased with a product are more vocal than customers who are satisfied. Clients communicate their complaints not only to the front desk staff but also to coworkers, business associates, and other guests. The inability to handle complaints effectively can be a public relations disaster for a hotel.

A positive attitude makes it easier to deal with guests who have complaints. Problems should be viewed as opportunities rather than causes for panic. By resolving a problem, the staff can earn the client’s short-term respect and long-term business.

The front office staff should never be defensive when dealing with clients who have complaints. The front office staff must always remember that they are probably not the people responsible for creating the problem. Because the initial reaction is so important, front office staff need to allow themselves to see and experience the client's situation from his or her point of view.

Apart from that, the front office staff must be in full possession of all the facts. They need to ask the guest to describe the problem with as much detail as is necessary, because asking questions indicates their interest in guests and concern for their welfare.

The front office staff must never dismiss the complaint or take it lightly, but should validate the client's feelings by responding with a statement that reinforces rather than intimidates.

Besides that, the front office staff needs to listen carefully to the client’s description of the problem, and then paraphrase it to indicate that they understand. The front office staff also must promise to take action. They need to check into the circumstances, notify the appropriate department, and promptly report back to their client. Above all, they must never drop the issue and hope that the client forgets about it.

The front office staff spends a great deal of its time on the telephone, communicating with people both inside and outside the hotel. Despite the element of convenience, the telephone strips them of the advantage of using facial expressions, gestures, and eye contact. Instead of relying on their personal appearance and facial expressions, they must rely on their voice to convey subtle, as well as apparent, messages.

The following are the guidelines to develop an efficient and courteous telephone manner.

Firstly, the front office staff must always be prepared and answer promptly by the third ring if possible.

Secondly, the front office staff needs to use proper identification when answering the phone. Moreover, it saves time if the front office staff gets straight to the purpose of the call.

Next, the front office staff must be able to speak directly into the telephone. They must never chew gum, smoke or do anything else that interferes with clear speech.

Besides that, the front office staff must be active listeners and limit their own talking. They should be able to mentally shut out all distractions and focus on what the caller is saying.

In addition, if it is necessary to place a guest on hold, the front office staff needs to check back every minute or so to reassure the guest that he or she has not been forgotten. They should thank the caller for waiting and explain the reason for the delay when they return to the line.

Lastly, they should end the call courteously by wishing the caller a pleasant day.

Conclusion

A client’s first impression of an establishment is made upon entering the company’s premises. Front office operations, together with the appearance of the office, are a direct reflection of the establishment’s practices; therefore, staff should be well able to display certain ethics in their duties.

Front office staff does not only refer to the receptionists; the front office managers, clerks, and telephone operators who are stationed at the front office are also a part of the front office workforce. If a client or a prospective one is greeted by a courteous and attentive staff, the first impression he will get is that the company is a smooth-running business with well-trained staff. This is much more likely to draw the client to the company and ensure that he will want to have business dealings with the company.

Besides that, the front office staff is often confronted with irritated or angry customers. They must be trained on how to deal with them and on how to calm them down. If an angry customer is served by a cheeky employee, it might well mean the end of business with this client and he may even go as far as giving the company a bad reputation.

In conclusion, the front office staff must have a certain standard o

Fraud And Forensic Auditing Information Technology Essay

This paper discusses fraud and forensic auditing, and in particular, how it affects information technology. The three main elements necessary in order to create fraud are pressure, rationalization, and opportunity. Financial fraud is a dynamic, ever changing market that changes every day with increases in new technologies, resulting in the need for computer forensics to reconstruct events and completely analyze all electronic evidence to provide accurate documentation and preserve the integrity of the data. Advanced education and training are expanding as the field of Forensic Auditing and Accounting continues to grow.

Introduction

Fraud and forensic auditing is becoming more relevant in business today. Fraud has always been a business concern, but as information technology becomes more prevalent, so will the need for more advanced fraud and forensic auditing techniques. Formal fraud and forensic auditing emerged during the 1970s and 1980s with the onset of more computerized business technologies. Early techniques began with an administrator watching information systems for ‘red flags’ and have evolved into complex, computerized software. This software has created the idea of computer forensics, which is the combination of computer science and law to accurately analyze information and generate evidence that can be used to accuse or defend. As fraud and forensic auditing is expanding, the need for teaching and training in higher education accounting programs is increasing. Not only are new programs developing for forensic auditing, but also financial auditing programs are expanding their material to cover fraud and forensic auditing techniques.


Overview of Fraud

The Fraud Triangle

Fraud has become an important topic in today’s business environment, especially in the light of scandals such as Enron and WorldCom. While many think of top corporate executives committing fraudulent acts, especially those considering financial reporting, it is important to note that lower level employees also add to the risk of fraud within a company. The fraud triangle shows the three main elements necessary in order to create fraud: pressure, rationalization, and opportunity [1].

Financial pressure is often the first reason someone within a corporation would want to commit fraud. This could take the form of a lower level employee who finds himself in a difficult personal situation and believes that he would only benefit from stealing from the company [2]. Alternatively, an employee may commit fraud because they believe that the company or their job could be in jeopardy if they do not meet their stated financial goals for the period. In either situation, the employee could be a normally moral person, but felt pressure to meet personal or company expectations [3].

The second factor necessary for fraud is rationalization. This is where the employee justifies to himself that committing the fraud is not as bad as it seems. Often an employee will rationalize by saying that the money will be paid back if it is stolen, or that the company could make up the losses later, if the issue involves financial reporting [2].

Finally, there must be an opportunity for the perpetrator to commit the fraud. This is usually made possible by weak internal controls in the company and weak tone at the top regarding the ethical responsibility of employees. This is the most important aspect of the fraud triangle because without opportunity, the employee would have no way of actually carrying out the fraudulent activity [3].

There are three main ways that companies can mitigate the risks associated with fraud. First, establish a firm tone at the top, ensuring that all employees understand and devote time and effort to their ethical responsibilities. Management’s view of ethics is of the utmost importance. Secondly, auditors should view companies with a healthy sense of skepticism in all work performed, keeping in mind that anyone can commit fraud. Finally, the company should make sure that there is plenty of communication at all levels of the supply chain [4].

Information Technology’s Impact on Fraud

Information technology can be both a cause of fraud and a solution to it. Auditors can use information technology to assist in an audit, while a company can use computer software to make records easier to collect.

Information technology is becoming an increasingly important part of a company's business strategy. While many companies count on information technology to curb fraud, it also increases some risks. The use of information technology can lead to unauthorized access to important company data and information. This could allow someone to record nonexistent transactions or change information that has already been entered in the system. Additionally, because information technology in accounting systems may require technical expertise, it is easier for those who do know the systems to change the controls or programs. These increased risks can increase fraud in the financial statements. With the use of information technology, auditors must be more alert to its implications for the risk of fraud associated with the audit [5].

Additionally, cybercrime is of particular concern to companies that use the internet for any part of their operations. Many cybercriminals are able to combine their computer skills with social engineering in order to access critical company information and personal customer data. Hacking techniques such as phishing are becoming more of a problem. Hackers are keeping pace with cyber security, and organizations must ensure that they are aware of what is going on with their computer systems. In a business environment that relies increasingly on information technology, it is becoming more important for management and auditors to be aware of any technological changes made to systems in order to keep track of any issues that could result in fraudulent financial reporting [6].

Responsibility for Fraud

According to auditing standards, auditors are not responsible for making assertions on fraud, but rather for determining whether or not the financial statements are free of material misstatement. Therefore, the responsibility for fraud lies in the hands of a company's management [5].

The Public Company Accounting Reform and Investor Protection Act of 2002 (better known as the Sarbanes-Oxley Act) was put into effect partially to restore investor confidence in financial statements after a series of fraudulent financial reporting incidents [4]. This Act made management much more responsible for fraud in financial statements than ever before. Section 302 of the Act requires the Chief Financial Officer and Chief Executive Officer of the company to sign off on the final statements and certify that they are valid. Ultimately, management is responsible for fraud, but there is always the argument that, if fraud is found, someone will seek compensation from the auditors [7].

Why Fraud and Forensic Auditing?

History of Fraud and Forensic Auditing

Now that we have a basic understanding of what fraud and forensic auditing is, it is important to examine the history of the field. Fraud and forensic auditing emerged during the 1970s and 1980s with the explosion of technology-based business functions. As we know all too well, technology can increase efficiency while simultaneously increasing the risks of security breaches and fraud. Also during this time, concerns about fraud, government waste, and crime (white-collar and blue-collar) were being plastered across the news. It quickly became apparent that businesses needed some form of intrusion detection system to manage the risks of inappropriate activities, leading to the discipline of fraud and forensic auditing [7].

This new form of auditing goes beyond government regulations and is designed to be used in litigation for claims of insurance, bankruptcy, embezzlement, computer fraud, and other related crimes. Computer crimes and financial fraud are carefully calculated, intuitive attacks by criminals. Therefore, fraud and forensic auditing requires more than just a basic set of standards; it requires intuition. Because fraud is often detected by accident, fraud auditors have developed a set of “scenarios” to learn to be proactive and think like a criminal. Jack Bologna, president of Computer Protection Systems, Inc. in 1984, stated that the best training for fraud auditors was on-the-job training. Bologna went on to say that because of the great degree of variability in fraud there is no clear way to learn everything in the classroom, although fraud auditors must have a basic understanding of accounting and auditing. Thus, the best experience comes from working in the field [7].

Fraud and forensic auditing is a dynamic and rapidly changing discipline. The first fraud and forensic auditing tools (referred to as intrusion detection systems) involved systems administrators watching a computer console to monitor users' actions. The goal of these intrusion detection systems was to detect unauthorized or illegal use of the systems. Systems administrators looked for 'red flags' on the system, such as vacationing employees remotely logging in or a seldom-used computer component suddenly being turned on for no apparent reason. The results of these early intrusion detection devices were logged on sheets of folded computer paper that were stacked several feet high by the end of each week. The systems administrators were then faced with the daunting task of filtering through these stacks of information to find potential fraud. Although the goal of this system was to detect fraud and improper or illegal use of the systems, it was more reactive than proactive. The approach was slow and complex, with the detection system logs run at night and not examined until the next day. Therefore, most intrusions were not detected until after they had already occurred. However, in the 1990s, real-time intrusion detection scanners were introduced, allowing systems administrators to review systems information as it was produced and to respond in real time. This much more proactive approach increased the effectiveness of the intrusion detection systems and, in some cases, allowed administrators to preempt attacks [8].

However, as intrusion detection systems have evolved, so have the types of fraud. Currently, the Securities and Exchange Commission hears over 100 financial fraud and accounting cases per year, a stark increase over the period before the explosion of technology in business in the 1970s. In some cases, big-name companies such as Bausch and Lomb, Sunbeam, and Knowledgeware have had to restate financial reports due to fraud. This in turn affects stock prices and often leads to bankruptcy, changes in ownership, and layoffs, among other problems. Of financial fraud cases, however, only about 2% make it to trial and 20% are dismissed; the remainder are settled out of court. Prosecution is costly both to the government and to investors and company employees. Nevertheless, as economic times worsen, as we have seen in recent years, the number and variety of fraud cases increase. Financial fraud is a dynamic field that changes every day as new technologies emerge [9].

In order to keep pace with the demand for fraud detection, fraud and forensic auditors are being held responsible for detecting more fraud. However, as Jack Bologna discussed, most fraud detection cannot be learned in a classroom, but rather must be learned on the job [7]. Consistent with this view, most universities today still lack curriculum in financial fraud detection, even though the demand for auditors trained in fraud detection is increasing rapidly as the incidence and variety of fraud grow. In such a dynamic fraud environment, accountants and auditors alike must stay up to date on fraud detection so that auditing programs are adequately designed to meet the changing needs of forensic auditing. Therefore, as most would agree, auditors must balance education and training to provide the best defense against financial fraud [9].

How is Fraud and Forensic Auditing Different from a Traditional Audit?

With the passage of the Sarbanes-Oxley Act of 2002, the auditing and accounting world was turned on its head. The Sarbanes-Oxley Act was a game-changer in fraud detection. Prior to the Act, auditing firms were primarily self-regulated, which proved to be problematic [10]. Firms such as Arthur Andersen showed a lack of integrity and conspired to commit fraud right along with the fraudulent companies. Therefore, Sarbanes-Oxley created the Public Company Accounting Oversight Board (PCAOB) to provide more oversight and regulation of the accounting profession. In 2004, fraud cost the United States economy $684 billion, twenty times the cost of standard street crime, further illustrating the importance of a strong fraud detection system [11].

Although it may seem that a fraud or forensic audit is virtually the same as a regular audit, there are some differences. Both fraud and forensic audits and regular financial audits share the goal of detecting material misrepresentation of the financial statements; however, fraud and forensic auditing takes auditing a step further. Fraud and forensic audits are subject to stricter guidelines and rules and are primarily concerned with internal controls. They examine audit trails for variances or deviations from strong internal control. Fraud and forensic auditors are often described as one part accountant, one part lawyer, one part detective, and entirely professional. These auditors must be able to prove all their findings. They rely on methodology tables to show flows of transactions and examine deviations. This level of detail is necessary because they carry the burden of proof when presenting evidence to juries of non-accountants. Therefore, the evidence must be outlined in lay terms and must establish the case beyond a reasonable doubt [11].

Even though there are differences between a traditional audit and a fraud or forensic audit, the fraud and forensic auditor's work can greatly help financial accountants and auditors with their tasks. Sarbanes-Oxley Section 404 requires top management to sign off on and be responsible for all financial information, including internal control, for their company. To the benefit of traditional auditors, fraud and forensic audits reinforce the application of Section 404. Because fraud and forensic auditors work at such a level of detail on internal controls, financial auditors can more easily understand the entity's internal control structure and better design audit procedures to detect the risk of material misstatement in the financial statements. This greatly decreases the amount of time spent planning the audit and allows financial auditors more time to design further audit procedures that are more responsive to the assessed risk of material misstatement [11].


Computers and Forensic auditing

Role of Computer Forensics

Due to the increase in potential fraud, especially with computers being used by individuals and in every company on a day-to-day basis, forensic auditing and accounting has become an important aspect of addressing these challenges. One way of quickly and easily handling fraud and abuse cases is through computer matching and various other computer technologies and techniques. And, considering that computers and online activity contribute in some way to almost every kind of criminal activity existing today, the information found is the key to identifying the criminals behind these fraudulent activities [12].

Computer forensics is the main means of examining evidence during investigations, because anything done on a network can be tracked and vital information can be captured. It involves reconstructing events and completely analyzing all electronic evidence to provide accurate documentation and preserve the integrity of the data, in order to effectively accuse or defend in a court of law. If computer forensics is not applied correctly, any information found may not be admissible in court. This means that law enforcement officers must have a general understanding of computer forensics in order to properly utilize evidence and to recognize and handle information a computer could potentially hold to aid criminal investigations [12, 13].

Two typical aspects of computer forensics are understanding the potential evidence being sought and selecting the appropriate tools. Crimes involving a computer can range from identity theft to destruction of intellectual property, so it is important to know what kind of evidence to look for in the investigation. To prevent any further damage to the files, it is also important to know how to recover information that may have been deleted or tampered with by a criminal [13].

A forensic auditor's toolkit consists of a variety of tools and programs necessary for recovering data, disassembling a computer case, or taking images. It includes physical tools such as a screwdriver and pliers, archive media, and a digital camera, as well as software and applications for disk wiping, disk imaging, hash calculations, search utilities, file and data recovery, file viewing, and password cracking [2].

A screwdriver and pliers are used when disassembling the computer case to access the hard drive. Archive media, such as a recordable DVD or CD-ROM, is used to copy and store the contents of the hard drive, and a digital camera is needed to record the physical structure of the computer and anything else that may need to be captured. Among the applications and software, disk wiping ensures the working hard drives are cleaned and overwritten with binary information, while disk imaging creates a bit-stream backup that preserves the hard drive's contents. Hash calculations are used to verify that the source and destination files have the same hash value. Auditors then search for text strings and use tools such as EnCase to recover and view files and data [2].
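To make the hash-verification step concrete, the following is a minimal Python sketch, not a forensic product: it computes and compares file digests with the standard hashlib module. The file names are hypothetical, and SHA-256 is used simply because it is widely available; real toolkits may record MD5, SHA-1, or other values alongside it.

# Minimal sketch of forensic hash verification; file names below are hypothetical.
import hashlib

def file_hash(path, algorithm="sha256", chunk_size=1 << 20):
    h = hashlib.new(algorithm)
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)  # hash in chunks so large drive images fit in memory
    return h.hexdigest()

if __name__ == "__main__":
    # Demo with two small sample files standing in for a source drive and its image.
    with open("original.img", "wb") as f:
        f.write(b"example evidence bytes")
    with open("working_copy.img", "wb") as f:
        f.write(b"example evidence bytes")
    source, image = file_hash("original.img"), file_hash("working_copy.img")
    print("hashes match" if source == image else "WARNING: image differs from source")

Matching digests give the examiner a documented basis for claiming that the working copy is a faithful image of the original.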

Two applications, digital analysis and data query models, have the specific purpose of detecting fraud. Digital analysis uses Benford's Law, a logarithmic distribution of the first digits of naturally occurring numbers that do not follow a set pattern. Phone numbers and zip codes do follow a pattern and therefore cannot be used; however, invoice amounts and compound interest do not have recurring patterns and can be used. Benford's Law helps IT auditors detect fraud by comparing the expected frequency distribution of leading digits with the observed frequency distribution. Data query models compare computer-assisted audit technique results with other evidence obtained during the audit, making sure that the evidence makes sense and supports the assertions made [2].
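To make the digital-analysis idea concrete, here is a small Python sketch, not any particular audit package, that compares observed leading-digit frequencies against the Benford expectation log10(1 + 1/d). The invoice amounts are invented for illustration; a real test would use a much larger sample and a formal goodness-of-fit statistic such as chi-square.

# Illustrative Benford's-Law check on hypothetical invoice amounts.
import math
from collections import Counter

def benford_expected(digit):
    # Expected proportion of leading digit d under Benford's Law: log10(1 + 1/d)
    return math.log10(1 + 1 / digit)

def leading_digit_distribution(amounts):
    digits = [int(str(abs(a)).lstrip("0.")[0]) for a in amounts if a]
    counts = Counter(digits)
    total = len(digits)
    return {d: counts.get(d, 0) / total for d in range(1, 10)}

if __name__ == "__main__":
    invoices = [1243.10, 1780.00, 243.99, 1104.50, 932.00, 118.25, 4410.00, 1290.75]
    observed = leading_digit_distribution(invoices)
    for d in range(1, 10):
        print(f"digit {d}: expected {benford_expected(d):.3f}, observed {observed[d]:.3f}")

Large, persistent gaps between the expected and observed columns would be a red flag prompting closer examination of the underlying transactions.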

Not only can computer forensics be used to accuse criminals, it can also be used to uncover evidence believed to have been deleted, as in the Enron scandal. Despite the efforts of employees and several financial institutions to mislead investors, internal e-mails thought to have been deleted raised suspicions about loans and therefore prompted investigations into the real numbers Enron should have been reporting. Anything saved, opened, or viewed on a computer is recorded somewhere; unless it is properly overwritten, it is capable of being found and restored. When McKesson, Inc. acquired HBO & Company, company auditors found irregularities in the accounting documents. An in-depth audit using computer forensic tools recovered several deleted e-mails and removed files that were part of an effort to hide HBO's falsification of its books [14, 15].

Future of Computer Forensic Technology

Cyber-forensics is becoming increasingly important and will be even more so in the near future, because computers and the web are the fastest growing technology tools used by criminals. Cybercrimes and white-collar crimes have become popular among criminals because of the high profit yields and low risk of conviction and sentencing if caught. Computer forensics will soon be as essential as an officer's handcuffs or radio. So many forms of communication, banking, shopping, and social networking take place online that, naturally, it has become the perfect place for criminals to operate [13].

Another upcoming use of information technology is the application of business intelligence with computers. Business intelligence is a way of extracting information and analyzing it through various tools. The information that is analyzed helps detect fraud through the use of patterns and acts as a guide to investigations [16].

Investigations are still led by police officers and investigators, but the use of computers and computer technologies aids procedures, allows for more in-depth searches and the analysis of relevant information, and provides the capability of tracing or retrieving documents from computer networks, considering that much of the fraud committed today stems from online activity [16].

Future of Forensic Auditing

Trends in Forensic Auditing

Forensic auditing increased its presence in the auditing environment mostly due to the fraud scandals of companies like Enron and WorldCom in 2002. Immediately afterwards the Auditing Standards Board (ASB) approved a new standard, No. 99, in order to more clearly define financial auditors' responsibility concerning the detection of fraud [17]. However, because financial audits are not designed to detect fraud, they cannot be relied upon to uncover it at any significant level; only about ten to twelve percent of all detected fraud is credited to financial auditors [18]. Due to this lack of fraud detection in financial auditing, an increasing need for forensic auditing has arisen, along with an increase in fraud education and training in all areas of auditing.

Even before the Sarbanes-Oxley Act (SOX) of 2002, accounting students did not have adequate ethics or fraud training. Without that training or education, the industry had a great deal of difficulty recognizing fraud [17]. However, after many fraudulent scandals and the passing of SOX, fraud and ethics training has become an essential part of every accounting student's education. The reforms resulting from SOX have specifically brought to light a number of areas in which auditing firms have been weak. A shortage of staff and of experienced employees in audit firms is one of these weak areas, and it has driven a positive trend in auditor education. Because auditing companies need to meet regulatory requirements and baby boomers are now retiring, demand for auditors is high. Along with training in ethics, risk management, and financial statement analysis, forensic accounting is increasingly being taught and offered in accounting higher education in order to meet such high marketplace demand for auditors [20].

Not only are elements of forensic auditing permeating financial auditing in general, they are also finding their way into specific areas of auditing such as internal auditing. The approaches, techniques, and objectives that internal auditors use are quite similar to those forensic auditors use, which paves the way for fraud investigations to become more a part of internal auditing now and in the future. Historically, internal auditors have only been involved with fraud investigations after the fact: to examine the breakdown of internal controls that led to the fraud and to provide recommendations to prevent it from happening again [19]. However, companies are now looking to internal auditors to have more of a role in fraud investigations. Without the need to hire external resources for every fraud investigation, there is potential for significant cost savings. However, when an investigation requires much deeper knowledge and experience in forensic auditing, failing to outsource to better-qualified resources can be more costly in the long run because of a poorly executed or failed investigation [19].

Careers in Forensic Auditing

In the forensic auditing field there are not only specific forensic auditing jobs, but also opportunities to apply forensic auditing techniques to all areas of auditing, as mentioned above. One does not have to be a forensic auditor to be trained in and use the same techniques. In large firms, such as the Big Four CPA firms, there are specific careers for auditors who perform fraud audits. They are called in separately from the annual audit team if there is a predication of fraud within a company [17]. However, in smaller firms that do not have the staff or means to maintain a separate forensic department, there is potential for financial statement auditors to be more experienced and trained in fraud audits [17].

Another type of career in auditing that we have mentioned previously is in the cyber-forensics or computer forensics field. This is a relatively new field in auditing, having been around for only the last two decades and mostly within the last ten years, but one that is rapidly expanding. Because it is so recent, only a limited amount of cyber-forensics is taught in higher education, yet the need for it is greater than ever. With the rise of the digital world and information technology, auditors are becoming more reliant on information technology and digital information in performing audits and investigating fraud. These components have become critical in forensic auditing [18].

Along with forensic auditing, or elements of it, being grafted into higher education accounting degrees, there are also a number of certifications that can be attained, or may be required, in order to provide the training and experience that auditing firms are seeking in their employees. First, accounting programs for the most part prepare students to become a Certified Public Accountant (CPA) by training them specifically for the CPA exam [18]. Other specific certifications are the Certified Information Systems Auditor (CISA) and the Certified Information Systems Security Professional (CISSP), both of which are helpful for careers in computer forensics [18]. Particular certifications that forensic auditors can obtain to follow specific routes within forensic auditing are the Certified Internal Auditor (CIA), the Certified Government Auditing Professional (CGAP), and the Certified Financial Services Auditor (CFSA) [21].

Conclusion

A distributed database consists of two or more data files located at different sites on a computer network. Because the database is distributed, different users can access it without interfering with one another. However, the DBMS must periodically synchronize the scattered databases to make sure that they all hold consistent data. In other words, a distributed database is a database under the control of a central database management system (DBMS) in which the storage devices are not all attached to a common CPU. It may be stored on multiple computers located in the same physical location, or dispersed over a network of interconnected computers.

Collections of data (e.g. in a database) can be distributed across multiple physical locations. A distributed database can reside on network servers on the Internet, on corporate intranets or extranets, or on other company networks. Replication and distribution of databases improve database performance at end-user worksites.

To keep the distributed databases up to date, there are two processes:

Replication.
Duplication.
Replication involves using specialized software that looks for changes in the distributed databases. Once the changes have been identified, the replication process makes all the databases look the same. The replication process can be complex and time consuming, depending on the size and number of the distributed databases, and it can require considerable time and computer resources.

Duplication, on the other hand, is not as complicated. It basically identifies one database as a master and then duplicates that database. The duplication process is normally done at a set time after hours, to ensure that each distributed location has the same data. In the duplication process, changes are allowed only to the master database, so that local data will not be overwritten. Both processes can keep the data current in all distributed locations.
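The following Python sketch is a toy model of the two processes, not real replication software. Each database copy is represented as a dictionary of versioned records: replication merges the newest version of every record into all copies, while duplication simply overwrites each replica with a designated master.

# Toy model of replication vs. duplication for distributed database copies.
def replicate(databases):
    # Push the newest version of every record to all copies.
    merged = {}
    for db in databases:
        for rec_id, (value, version) in db.items():
            if rec_id not in merged or version > merged[rec_id][1]:
                merged[rec_id] = (value, version)
    for db in databases:
        db.clear()
        db.update(merged)

def duplicate(master, replicas):
    # Overwrite every replica with the master's contents (e.g. at a set time after hours).
    for replica in replicas:
        replica.clear()
        replica.update(master)

if __name__ == "__main__":
    site_a = {"cust-1": ("Alice", 2)}
    site_b = {"cust-1": ("Alicia", 1), "cust-2": ("Bob", 1)}
    replicate([site_a, site_b])
    print(site_a == site_b)  # True: both copies now hold the newest version of each record

Real products must also handle conflicting concurrent updates, network failures, and partial propagation, which is why replication is described above as complex and resource intensive.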

Besides replication and fragmentation, there are many other distributed database design technologies, for example local autonomy and synchronous and asynchronous distributed database technologies. Their implementation depends on the needs of the business, the sensitivity and confidentiality of the data to be stored in the database, and hence the price the business is willing to pay to ensure data security, consistency and integrity.

Basic architecture
A database user accesses the distributed database through:

Local applications: applications which do not require data from other sites.

Global applications: applications which do require data from other sites.

A distributed database does not share main memory or disks.

Main Features and Benefits of a Distributed System
A common misconception among people when discussing distributed systems is that it is just another name for a network of computers. However, this overlooks an important distinction. A distributed system is built on top of a network and tries to hide the existence of multiple autonomous computers. It appears as a single entity providing the user with whatever services are required. A network is a medium for interconnecting entities (such as computers and devices) enabling the exchange of messages based on well-known protocols between these entities, which are explicitly addressable (using an IP address, for example).

There are various types of distributed systems, such as Clusters [3], Grids [4], P2P (Peer-to-Peer) networks, distributed storage systems and so on. A cluster is a dedicated group of interconnected computers that appears as a single super-computer, generally used in high performance scientific engineering and business applications. A grid is a type of distributed system that enables coordinated sharing and aggregation of distributed, autonomous, heterogeneous resources based on users’ QoS (Quality of Service) requirements. Grids are commonly used to support applications emerging in the areas of e-Science and e-Business, which commonly involve geographically distributed communities of people who engage in collaborative activities to solve large scale problems and require sharing of various resources such as computers, data, applications and scientific instruments. P2P networks are decentralized distributed systems, which enable applications such as file-sharing, instant messaging, online multiuser gaming and content distribution over public networks. Distributed storage systems such as NFS (Network File System) provide users with a unified view of data stored on different file systems and computers which may be on the same or different networks.

The main features of a distributed system include:

Functional Separation: Based on the functionality/services provided, capability and purpose of each entity in the system.

Inherent distribution: Entities such as information, people, and systems are inherently distributed. For example, different information is created and maintained by different people. This information could be generated, stored, analyzed and used by different systems or applications which may or may not be aware of the existence of the other entities in the system.

Reliability: Long term data preservation and backup (replication) at different locations.

Scalability: Addition of more resources to increase performance or availability.

Economy: Sharing of resources by many entities to help reduce the cost of ownership.

As a consequence of these features, the various entities in a distributed system can operate concurrently and possibly autonomously. Tasks are carried out independently, and actions are co-ordinated at well-defined stages by exchanging messages. Also, entities are heterogeneous, and failures are independent. Generally, there is no single process, or entity, that has knowledge of the entire state of the system.

Various kinds of distributed systems operate today, each aimed at solving different kinds of problems. The challenges faced in building a distributed system vary depending on the requirements of the system. In general, however, most systems will need to handle the following issues:

Heterogeneity: Various entities in the system must be able to interoperate with one another, despite differences in hardware architectures, operating systems, communication protocols, programming languages, software interfaces, security models, and data formats.

Transparency: The entire system should appear as a single unit and the complexity and interactions between the components should be typically hidden from the end user.

Fault tolerance and failure management: Failure of one or more components should not bring down the entire system, and failures should be isolated.

Scalability: The system should work efficiently with an increasing number of users, and the addition of a resource should enhance the performance of the system.

Concurrency: Shared access to resources should be made possible.

Openness and Extensibility: Interfaces should be cleanly separated and publicly available to enable easy extension of existing components and the addition of new components.

Migration and load balancing: Allow the movement of tasks within a system without affecting the operation of users or applications, and distribute load among available resources for improving performance.

Security: Access to resources should be secured so that only known users are able to perform allowed operations.

Several software companies and research institutions have developed distributed computing technologies that support some or all of the features described above.

Fragment Allocation in Distributed Database Design
On a Wide Area Network (WAN), fragment allocation is a major issue in distributed database design, since it concerns the overall performance of distributed database systems. Here we propose a simple and comprehensive model that reflects transaction behavior in distributed databases. Based on the model and transaction information, two heuristic algorithms are developed to find a near-optimal allocation such that the total communication cost is minimized as far as possible. The results show that the fragment allocation found by the algorithms is close to an optimal one. Some experiments were also conducted to verify that the cost formulas truly reflect the communication cost in the real world.

INTRODUCTION:

Distributed database design involves the following interrelated issues:

(1) how a global relation should be fragmented;

(2) how many copies of a fragment should be replicated;

(3) how fragments should be allocated to the sites of the communication network; and

(4) what the necessary information for fragmentation and allocation is.

These issues complicate distributed database design. Even if each issue is considered individually, it is still an intractable problem. To simplify the overall problem, we address the fragment allocation issue only, assuming that all global relations have already been fragmented. Thus, the problem investigated here is determining the number of replicas of each fragment and then finding a near-optimal allocation of all fragments, including the replicated ones, in a Wide Area Network (WAN) such that the total communication cost is minimized. For a read request issued by a transaction, it may be simple just to load the target fragment at the issuing site, or somewhat more involved to load the target fragment from a remote site. A write request can be the most complicated, since a write propagation must be executed to maintain consistency among all the fragment copies if multiple copies are spread throughout the network. The frequency of each request issued at each site must also be considered in the allocation model. Since the behaviors of different transactions may result in different optimal fragment allocations, cost formulas should be derived to minimize the transaction cost according to the transaction information.
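As a simplified illustration of this kind of heuristic, and not the specific algorithms proposed here, the Python sketch below greedily places a single copy of each fragment at the site that minimizes total communication cost, given assumed read/write frequencies per site and per-request costs between sites.

# Greedy single-copy fragment allocation sketch (illustrative assumptions only).
def allocate_fragments(read_freq, write_freq, cost):
    # read_freq[s][f] and write_freq[s][f]: how often site s reads/writes fragment f.
    # cost[s][t]: per-request communication cost between sites s and t (cost[s][s] == 0).
    num_sites = len(read_freq)
    num_frags = len(read_freq[0])
    allocation = {}
    for f in range(num_frags):
        best_site, best_cost = None, float("inf")
        for candidate in range(num_sites):
            # Total cost of serving every site's requests for fragment f from the candidate.
            total = sum((read_freq[s][f] + write_freq[s][f]) * cost[s][candidate]
                        for s in range(num_sites))
            if total < best_cost:
                best_site, best_cost = candidate, total
        allocation[f] = best_site
    return allocation

if __name__ == "__main__":
    read_freq = [[10, 0], [2, 8], [1, 5]]     # 3 sites x 2 fragments (hypothetical)
    write_freq = [[1, 0], [0, 2], [0, 1]]
    cost = [[0, 1, 2], [1, 0, 1], [2, 1, 0]]  # symmetric unit costs between sites
    print(allocate_fragments(read_freq, write_freq, cost))  # {0: 0, 1: 1}

A full solution would also decide how many replicas of each fragment to keep, which adds the write-propagation costs discussed above to the trade-off.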

Alchemi: An example distributed system
In a typical corporate or academic environment there are many resources which are generally under-utilized for long periods of time. A “resource” in this context means any entity that could be used to fulfill any user requirement; this includes compute power (CPU), data storage, applications, and services. An enterprise grid is a distributed system that dynamically aggregates and co-ordinates various resources within an organization and improves their utilization such that there is an overall increase in productivity for the users and processes. These benefits ultimately result in huge cost savings for the business, since they will not need to purchase expensive equipment for the purpose of running their high performance applications.

The desirable features of an enterprise grid system are:

Enabling efficient and optimal resource usage.

Sharing of inter-organizational resources.

Secure authentication and authorization of users.

Security of stored data and programs.

Secure communication.

Centralized / semi-centralized control.

Auditing.

Enforcement of Quality of Service (QoS) and Service Level Agreements (SLA).

Interoperability of different grids (and hence a basis in open standards).

Support for transactional processes.

Alchemi is an Enterprise Grid computing framework developed by researchers at the GRIDS Lab, in the Computer Science and Software Engineering Department at the University of Melbourne, Australia. It allows the user to aggregate the computing power of networked machines into a virtual supercomputer and develop applications to run on the Grid with no additional investment and no discernible impact on users. The main features offered by the Alchemi framework are:

Virtualization of compute resources across the LAN / Internet.

Ease of deployment and management.

Object-oriented “Grid thread” programming model for grid application development.

File-based “Grid job” model for grid-enabling legacy applications.

Web services interface for interoperability with other grid middleware.

Open-source .Net based, simple installation using Windows installers.

Alchemi Grids follow the master-slave architecture, with the additional capability of connecting multiple masters in a hierarchical or peer-to-peer fashion to provide scalability of the system. An Alchemi grid has three types of components, namely the Manager, the Executor, and the User Application itself. The Manager node is the master/controller whose main function is to service user requests for workload distribution. It receives a user request, authenticates the user, and distributes the workload across the various Executors that are connected to it. The Executor node is the one which actually performs the computation. Alchemi uses role-based security to authenticate users and authorize execution. A simple grid is created by installing Executors on each machine that is to be part of the grid and linking them to a central Manager component.
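The sketch below is not Alchemi's actual API; it is a generic Python illustration of the Manager/Executor (master-worker) pattern described above, with a manager queueing work items and a pool of executor threads computing them.

# Generic master-worker illustration (names and behaviour are invented for this example).
import queue
import threading

def executor(name, tasks, results):
    # Each executor pulls work items from the manager's queue and computes them.
    while True:
        item = tasks.get()
        if item is None:                          # sentinel: no more work
            tasks.task_done()
            break
        results.put((name, item, item * item))    # stand-in "computation"
        tasks.task_done()

def manager(workload, num_executors=3):
    tasks, results = queue.Queue(), queue.Queue()
    workers = [threading.Thread(target=executor, args=(f"exec-{i}", tasks, results))
               for i in range(num_executors)]
    for w in workers:
        w.start()
    for item in workload:                         # manager distributes the workload
        tasks.put(item)
    for _ in workers:                             # one sentinel per executor
        tasks.put(None)
    tasks.join()
    return [results.get() for _ in workload]

if __name__ == "__main__":
    print(manager(range(5)))

In Alchemi itself the Executors are separate machines reached over the network and the Manager also authenticates users, but the division of roles is the same.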

Advantages of distributed databases
Management of distributed data with different levels of transparency.

Increased reliability and availability.

Easier expansion.

Reflects organizational structure: database fragments are located in the departments they relate to.

Local autonomy: a department can control its own data (as it is the one familiar with it).

Protection of valuable data: if there were ever a catastrophic event such as a fire, all of the data would not be in one place, but distributed across multiple locations.

Improved performance: data is located near the site of greatest demand, and the database systems themselves are parallelized, allowing load on the databases to be balanced among servers. (A high load on one module of the database won't affect other modules of the database in a distributed database.)

Economics: it costs less to create a network of smaller computers with the power of a single large computer.

Modularity: systems can be modified, added, and removed from the distributed database without affecting other modules (systems).

Reliable transactions: due to replication of the database.

Hardware, Operating System, Network, Fragmentation, DBMS, Replication and Location Independence.

Continuous operation.

Distributed Query processing.

Distributed Transaction management.

Disadvantages of distributed databases
Complexity: extra work must be done by the DBAs to ensure that the distributed nature of the system is transparent. Extra work must also be done to maintain multiple disparate systems, instead of one big one. Extra database design work must also be done to account for the disconnected nature of the database; for example, joins become prohibitively expensive when performed across multiple systems.

Economics: increased complexity and a more extensive infrastructure mean extra labour costs.


Security: remote database fragments must be secured, and since they are not centralized, the remote sites must be secured as well. The infrastructure must also be secured (e.g., by encrypting the network links between remote sites).

Difficult to maintain integrity: in a distributed database, enforcing integrity over a network may require too much of the network's resources to be feasible.

Inexperience: distributed databases are difficult to work with, and as a young field there is not much readily available experience on proper practice.

Lack of standards: there are no tools or methodologies yet to help


Table of Contents

Table of Figures

List of Tables

1 Introduction – FPGA Architectures

1.1 What is FPGA?

1.2 Why FPGA?

1.3 Stakeholders

2 Issues with the current approach

2.1.1 The FPGA Standoff

2.1.2 Why FPGA Over Other Chips?

3 Aim and Objectives

3.1 Objectives

3.2 Research Questions

3.3 Added Value and MOV

4 Approach for the Research Paper

4.1 Development Methodology

4.1.1 Design Entry

4.1.2 Functional Simulation

4.1.3 Analysis & Synthesis

4.1.4 Place & Route

4.1.5 Simulation & Synchronization

4.1.6 Programming and Configuration

4.1.7 Download to Chip

4.2 Deliverables

4.3 Resources Required

4.4 Project Risks

5 Team & Planning

5.1 Gantt Chart

References

Table of Figures

Figure 1: Diagrammatic representation of an average FPGA board

Figure 2: Representation of gates in FPGA and CPU

Figure 3: Altera FPGA design flow

Figure 4: Gantt Chart representation for all the scheduled tasks

List of Tables

Table 1: Comparative analysis between FPGA, CPU, GPU and DSP processing

SYSTEMATIC REVIEW OF FPGA ARCHITECTURES AND HOW TO GUIDE FOR IMPLEMENTING A COUNTER FOR TERASIC ALTERA DE0 FPGA BOARD

1         Introduction – FPGA Architectures

The aim of our research is to explore Field Programmable Gate Arrays (FPGAs) by performing a systematic review of FPGA architectures and preparing a how-to guide for implementing a counter on the Terasic DE0 FPGA board, which uses an Altera Cyclone III chip.

The research will have three parts to it.

      Systematic review of FPGA architectures

      Implementation of a counter on Terasic DE0 FPGA board

      How-to guide document for implementing a counter on the Terasic DE0 FPGA board, which uses an Altera Cyclone III chip

The systematic review will focus on available FPGA architectures, reviewing previous literature and analyzing architectures developed by different vendors in the market.

Part two will be oriented more towards the practical aspect of the research. We will develop a counter using the Verilog Hardware Description Language and implement it on the FPGA chip by configuring and downloading it to the DE0 board.

Based on the practical work performed in part two, we will implement the counter on the Terasic DE0 FPGA board. Finally, we will document how we did it so that future students can understand the complexities we faced and how we overcame them. It will also become the basis for further research by our successors.

To understand the scope and the aim of the project, it is important to first understand some fundamentals of FPGAs. Following are some important questions about FPGAs.

1.1         What is FPGA?

FPGA stands for Field Programmable Gate Array. In layman's terms, it is a chip that can be configured or reconfigured after it is manufactured, by the end user or by the manufacturer itself. Technically, the manufactured board has the elements embedded in it, but they are not wired. The end user can use design software to program the gates and make the board function in a particular way.

Figure 1: Diagrammatic representation of an average FPGA board

1.2         Why FPGA?

The importance of FPGAs stems from the basic ideas of reuse, reduced cost, and speed. To understand this, consider that not many companies have enough resources to research and develop low-level hardware like CPUs. For example, Intel has to spend billions of dollars, months of time, and other resources to fix any hardware bug. With an FPGA architecture, this loss of resources can be minimized (reduce), and the end user can then reuse the hardware for multiple applications, as it can be reprogrammed for every use.

Furthermore, FPGAs are fast. This is because they have multiple inputs and outputs available and can be programmed to take multiple inputs, process them quickly, and produce simultaneous outputs, which is not the case with ordinary hardware.

Nevertheless, there are other adverse effects of the current approach to manufacturing hardware which can be reduced by switching to FPGAs, as they promote reusability and reduce waste. One of them is the environmental impact caused by the manufacturing and dismantling of such hardware: it is not environmentally friendly, cannot be easily disposed of, and ends up in landfills.

Table 1: Comparative analysis between FPGA, CPU, GPU and DSP processing

1.3         Stakeholders

This research paper on Field Programmable Gate Arrays (FPGAs) has the following stakeholders:

  • Dr. Firas Al-Ali and Dr. Fadi Fayez
  • Manukau Institute of Technology
  • FPGA starters
  • Industries in alliance with FPGA

2         Issues with the current approach

FPGA as a technology is more prevalent than the untrained eye notices. If you look into it, you will be amazed to find that it is implemented in various sectors we rely on every day. It is used in cell phone towers for signal integrity and for decoding and sending Ethernet packets. It is used in defense and military systems. It is used in the medical industry in machines such as MRI scanners. All of this points in one direction: FPGAs are used in applications that require high processing power and speed. It is safe to say that the current approach is headed in the right direction, where FPGAs will become more prevalent and will replace much of the technology we use today. We have established that various industries are using FPGAs, but that is still just a small portion of the market; there must be roadblocks for the technology, which need research and development to evolve and grow.

2.1.1        The FPGA Standoff

FPGA adoption faces some roadblocks in the way it has to be implemented, and several companies and individuals are attempting to overcome them. Following are the issues that FPGAs currently face. (Green, 2014) (A. Muthuramalingam, 2008)

  • High Cost
  • Difficult design and implementation
  • Unreliable functioning

High Cost; as the technology is relatively new, not many manufacturers are in the market to produce these boards. The biggest player in this sector is Intel, which recently acquired Altera, a company that produces FPGA boards. (Intel, n.d.) All of this drives the cost of the chip higher compared with other boards available in the market.

Difficult design and implementation; all the low-level elements of the board are embedded but not wired in FPGA boards. Just to boot up the device, the user has to program it and make sure hundreds of elements are connected correctly on the hardware side while the software runs in sync. (EEVBlog, 2014)

Unreliable functioning; as there are multiple inputs and outputs on the board, it is hard to debug and fix errors when working with them and running large designs. Moreover, for larger designs users often have to use more than one FPGA board, which makes the problem even harder to solve.

2.1.2        Why FPGA Over Other Chips?

FPGAs were first commercially introduced in the mid-80s and were pretty simple and didn't do much. Today, FPGAs have become very complex, with hundreds of thousands of gates that can be configured and reprogrammed according to need. Some of the biggest attractions of FPGAs over other boards are as follows. (NandLand, 2015)

  • Fast
  • Reprogrammable
  • High power

Fast; FPGAs have strong I/O support, which means more pins are available on the board. For example, the end user can run multiple cores at once, run multiple instances in parallel, and get results simultaneously.

Figure 2: Representation of gates in FPGA and CPU

Reprogrammable; this is one of the most attractive features of FPGA boards. On an average board, the user can use a design suite such as Xilinx ISE or Altera Quartus to program the gates available on the board and force it to function according to need.

High Power; the re-programmable and fast characteristics of FPGAs make them very powerful, which compensates for their high cost. With reprogramming, the board can be set up to accelerate normal processes, and overclocking becomes easier and can be taken much further.

3         Aim and Objectives

Aim:

     Develop a How to Guide based on the practical implementation of a counter on the Terasic DE0 board, which uses an Altera Cyclone III FPGA chip.

     Develop a Systematic Review of FPGA Architectures: Survey, Observations and Future Trends

3.1         Objectives

  • Develop a counter by using Verilog.
  • Implement it on Terasic Development And Education 0 (DE0) board.
  • Prepare a How to Guide document on the practical.
  • Develop a Systematic Review on FPGA architectures.

3.2         Research Questions

      How do different FPGA architectures compare with one another?

      How can students and newcomers get into FPGA technology more easily?

3.3         Added Value and MOV

Along with the main deliverables of the research, there is a how-to guide documenting how we worked and implemented our practical on the supplied FPGA board. This documentation will be of significant value to our successors researching in the same area.

Secondly, once the research paper and implementations are completed, we aim to showcase our work at information technology conferences.

4         Approach for the Research Paper

4.1         Development Methodology

A general Altera FPGA design flow for Terasic boards will be used for the practical implementation. As a budding technology, it does not have abundant support from other resources, so we will need to plan adequately and implement carefully. Following is the diagram highlighting the technicalities and the workflow that our approach will be based upon.

Figure 3: Altera FPGA design flow

4.1.1        Design Entry

Both HDL (Hardware Description Language) text entry and schematic design using the Quartus II Schematic Editor will be used for the design entry phase. Verilog will be the preferred HDL.

4.1.2        Functional Simulation

Test waveforms will be used as functional simulations for testing the design.

4.1.3        Analysis & Synthesis

Here the design source files will be checked for errors, optimized, and listed, generating a connection list. This process is also called netlisting. The netlist is then mapped onto the FPGA architecture.

4.1.4        Place & Route

In this phase the logical design will be fitted into the smallest possible part of the chip using a “Balanced” approach. Both the RTL Viewer and the Chip Planner will be used as graphical representations of the chip and its logical resources. The Quartus II Fitter will use the database generated in the previous phase to place all the logic functions into the most suitable logic cells in order to provide the best routing and timing. The interconnections between cells will then be placed and pin assignments made accordingly.

4.1.5        Simulation & Synchronization

Timing tools such as the TimeQuest Timing Analyzer will be used for static timing analysis, and issues with clock synchronization will be addressed. ModelSim will be used for timing simulations and to generate accurate timing diagrams for every signal of the design.

4.1.6        Programming and Configuration

Finally, the Quartus II Assembler will be used to generate the programming files from the successfully compiled logic design, including its pin assignments.

4.1.7        Download to Chip

A USB Blaster will be used to download the programming files to the chip. The Quartus II Programmer will be used for this step.

4.2         Deliverables

The project deliverables will be as follows

  1. How to Guide
  2. Implementation of reaction timer program on FPGA Board
  3. Literature Review

4.3         Resources Required

This is largely a research-oriented project. The only resources we need are the hardware components for the practical side of the research, which is to implement a reaction timer program on the FPGA board.

      Terasic Altera DE0 Cyclone III FPGA Board

      Quartus II software suite

      A computer

      Previous documentation and how-to guides for other available boards. NOTE – There is no how-to guide on practical FPGA implementation from scratch for the Terasic Altera Cyclone III FPGA board at MIT.

4.4         Project Risks

After a group self-analysis, we identified some risks that could affect our research. We discuss them in the following section.

  • Time Crunch
  • Lack of expertise
  • Hardware

Time Crunch; all of our group members are international students working 20 hours a week and taking at least one more paper alongside this Hot Topic research. On top of that, all of this is concentrated into eight weeks. Balancing work, education, and life over these eight weeks is very challenging, and if it is not managed well, time will be an obstacle for our team.

Lack of expertise; FPGA is an untouched topic for all the members of our team. Moreover, one member is a Software major who has never come close to hardware. The two Networking majors in the team are confident that FPGA can be learnt and implemented as set out in this proposal. Even so, there are hundreds of pages of documentation and blogs that we have to read before we can start any research or write-ups on the topic.


Hardware; we have selected hardware according to the needs of our project from the range available from Dr. Firas Al-Ali. Again, we are not experienced in working with any of this hardware. Moreover, the hardware is not equipped with the latest conveniences that make life easy, and there is not enough support available online. So in some cases we will have to adopt a trial-and-error approach while working with the hardware.

5         Team & Planning

As a team, in our first meeting we divided responsibilities amongst ourselves. Rakshit Bhaskar was selected as the team leader, and all the members of the team were allocated tasks.

5.1         Gantt Chart

Based on the team meetings and decisions, we allocated each task a start date, an end date, and a duration. This helped our project management, and we created a Gantt chart to depict how we intend to work during the research in order to achieve our objectives. The tasks are divided into five major segments, namely Planning, Designing, Course Work, Testing, and Documentation. The following image provides more insight into the categories and the tasks allocated under them.

Figure 4: Gantt Chart representation for all the scheduled tasks

References

  • Muthuramalingam, S. H. (2008). Neural Network Implementation Using FPGA: Issues and Application. Auckland: International Journal of Electrical and Computer Engineering.
  • Bhaskar, R. (2018). DE0 with Cyclone III FPGA chip. Manukau Institute of Technology, Auckland.
  • EEVBlog (2014). EEVblog #635 – FPGA’s Vs Microcontrollers [Recorded by EEVBlog].
  • Green, R. (2014, 09 19). Five Challenges to FPGA-Based Prototyping. Retrieved from EE Times: https://www.eetimes.com/author.asp?section_id=36&doc_id=1324000
  • Intel. (n.d.). Intel FPGAs and Programmable Devices. Retrieved from Intel: https://www.intel.com/content/www/us/en/products/programmable.html
  • Moore, A., & Wilson, R. (2017). FPGA for Dummies (Vol. 2nd intel edition). (R. Wilson, Ed.) John Wiley & Sons, Inc.
  • Nanayakkra, H. (2018, 10 29). Gantt Chart – FPGA – Hot Topic.
  • NandLand (2015). What is an FPGA? Intro for Beginners [Recorded by NandLand].
  • Stemmer Imaging. (2018). A list of key differences between FPGA, DSP, GPU and CPU. Retrieved from Stemmer Imaging: https://www.stemmer-imaging.co.uk/en/technical-tips/introduction-to-fpga-acceleration/
  • Stemmer Imaging. (n.d.). Introduction to FPGA acceleration. Retrieved from Stemmer Imaging: https://www.stemmer-imaging.co.uk/en/technical-tips/introduction-to-fpga-acceleration/
  • Stephen Brown, J. R. (1996). Architecture of FPGAs and CPLDs. University of Toronto, Department of Electrical and Computer Engineering, Toronto.
  • Taylor, R. (2017, 11 17). FPGAs Supercharge Computational Performance . Retrieved from infoQ: https://www.infoq.com/articles/fpga-computational-performance
  • Xilinx. (2018). Field Programmable Gate Array (FPGA). Retrieved from Xilinx: https://www.xilinx.com/products/silicon-devices/fpga/what-is-an-fpga.html

 

users convert a centralized DBMS into a distributed DBMS.

Database design is more complex: besides the normal difficulties, the design of a distributed database has to consider fragmentation of data, allocation of fragments to specific sites, and data replication.

Additional software is required.

Operating System should support distributed environment.

Concurrency control: it is a major issue. It is solved by locking and time stamping.

 


Fraud and forensic auditing will continue to impact businesses in the future. Companies will need to ensure that their fraud-prevention practices keep up with emerging technologies. Just as technology has advanced over the past thirty years, so have incidences of fraud, and fraud and forensic audit techniques have grown considerably more sophisticated in response. Computer forensics has expanded the capabilities of these techniques and will continue to grow in importance because of the continuing growth in technology. Fraud and forensic auditors are becoming better educated and trained, which will increase the amount of fraud uncovered. Overall, fraud and forensic auditing is vital to properly utilizing evidence in a court of law.

 

f ethics when carrying out their duties, for displaying an indifferent and bored attitude will reflect badly on the company employing them.


 


IDS manager

IDS Manager is needed by this company to manage the configuration of its intrusion detection system (IDS) sensors; it is written to handle IDS sensors in a distributed environment.

This is done by giving it the capability to read the text configuration files and let you modify them through an easy-to-use graphical interface. With the additional ability to merge new rule sets, manage preprocessors, configure output modules and securely copy configurations to sensors, IDS Manager makes managing Snort easy for most security professionals.

 

each terminal type, a piece of software must be written to map the functions of the network virtual terminal onto the real terminal. For example, when the editor moves the virtual terminal’s cursor to the upper left-hand corner of the screen, this software must issue the proper command sequence to the real terminal to get its cursor there too. All the virtual terminal software is in the application layer.

Another application layer function is file transfer. Different file systems have different file naming conventions, different ways of representing text lines, and so on. Transferring a file between two different systems requires handling these and other incompatibilities. This work, too, belongs to the application layer, as do electronic mail, remote job entry, directory lookup, and various other general purpose and special-purpose facilities.

 

Transferring files between machines (and users) is a common daily occurrence, yet the confidentiality of the data is a basic requirement. The problem is how to prevent unintended recipients from observing data that is supposed to be confidential and that would be at risk if it became known to careless or hostile parties. In each case it is important to know what options are available to get a file from point A to point B, and to understand whether the chosen technique provides sufficient security given the sensitivity of the data being transferred.


Cryptography is the art of secret writing, or more precisely of storing information (for a longer or shorter period of time) in a form that allows it to be revealed to those you wish to see it while hiding it from everyone else. A cryptosystem is a technique for accomplishing this. Cryptanalysis is the practice of defeating such attempts to hide information. Cryptology comprises both cryptography and cryptanalysis.

The original information to be hidden is called the plaintext. The concealed information is called the ciphertext. Encryption is any procedure that converts plaintext into ciphertext, and decryption is the reverse procedure that recovers the plaintext from the ciphertext.

A cryptosystem is designed so that decryption can be accomplished only under certain conditions, which usually means only by persons in possession of both a decryption engine (these days, generally a computer program) and a particular piece of information, called the decryption key, which is supplied to the decryption engine in the course of decryption.

Plaintext is transformed into ciphertext by means of an encryption engine (again, generally a computer program) whose operation is fixed and deterministic (the encryption method) but which in practice behaves in a way that depends on a piece of information (the encryption key) that has a major effect on the output of the encryption process.

The main purpose is to ensure privacy while private data is transferred from one place to another, whether electronically or by hand. Many schemes existed, but they were complicated to follow and, more importantly, offered limited security.

Over time many researchers devised different techniques, but Gentry's fully homomorphic encryption stands out among them: earlier schemes worked well only under restrictions, whereas Gentry's scheme lets a user perform an unlimited number of operations on encrypted data.

Objective

Cloud computing

Literature review

“Homomorphic encryption is a paradigm that refers to the ability, given encryptions of some messages, to generate an encryption of a value that is related to the original messages. Specifically, this ability means that from encryptions of k messages (m1,…,mk), it is possible to generate an encryption of m* = f(m1,…,mk) for some (efficiently computable) function f. Ideally, one may want the homomorphically generated encryption of m* to be distributed identically (or statistically close) to a standard encryption of m*. We call schemes that have this property strongly homomorphic. Indeed, some proposed encryption schemes are strongly homomorphic w. r. t some algebraic operations such as addition or multiplication.” (Rothblum R, 2010).

“An encryption method is presented with the novel property that publicly revealing an encryption key does not thereby reveal the corresponding decryption key. This has two important consequences:

1. Couriers or other secure means are not needed to transmit keys, since a message can be enciphered using an encryption key publicly revealed by the intended recipient. Only he can decipher the message, since only he knows the corresponding decryption key.

2. A message can be “signed” using a privately held decryption key. Anyone can verify this signature using the corresponding publicly revealed encryption key. Signatures cannot be forged, and a signer cannot later deny the validity of his signature. This has obvious applications in “electronic mail” and “electronic funds transfer” systems.” (Rivest et al, 1978)

“Homomorphic encryption enables “computing with encrypted data” and is hence a useful tool for secure protocols. Current homomorphic public key systems have limited homomorphic properties: given two ciphertexts Encrypt (PK, x) and Encrypt (PK, y), anyone can compute either the sum Encrypt (PK, x+y), or the product Encrypt (PK, xy), but not both.” (Boneh et al, 2006)

ARMONK, N.Y., 25 Jun 2009: “An IBM Researcher has solved a thorny mathematical problem that has confounded scientists since the invention of public-key encryption several decades ago. The breakthrough, called “privacy homomorphism,” or “fully homomorphic encryption,” makes possible the deep and unlimited analysis of encrypted information (data that has been intentionally scrambled) without sacrificing confidentiality.” (IBM, 2009)

“We propose the first fully homomorphic encryption scheme, solving a central open problem in cryptography. Such a scheme allows one to compute arbitrary functions over encrypted data without the decryption key, i.e., given encryptions E(m1),…,E(mt) of m1,…,mt, one can efficiently compute a compact ciphertext that encrypts f(m1,…,mt) for any efficiently computable function f. This problem was posed by Rivest et al. in 1978.” (Gentry C, 2009)

“Searching databases is usually done in the clear. And even if the query is encrypted, it has to be decrypted (revealing its contents) before it can be used by a search engine. What’s worse is that databases themselves are stored as plaintext, available to anyone gaining access. The smarter way to handle sensitive information would be to encrypt the queries, encrypt the database and search it in its encrypted form. Impossible until now, IBM’s T.J. Watson Research Center (Yorktown Heights, N.Y.) recently described a “homomorphic” encryption scheme that allows encrypted data to be searched, sorted and processed without decrypting it. Fully homomorphic encryption schemes theoretically allow ciphertext to be manipulated as easily as plaintext, making it perfect for modern cloud computing, where your data is located remotely.” (Johnson R C, 2009)

Body

History of Cryptography

In the earliest era, correspondence between a sender and a recipient was possible only through channels that had to be trusted, such as a loyal messenger or a carrier pigeon. It was hard to rely fully on the available carriers, and the risk for the sender was large: if the carrier disclosed the information, anyone could use it against them. Gradually a new idea emerged, called cryptography or encryption: a technique in which the sender encrypts the communication using an appropriate key, so that the receiver can decrypt it only if he possesses that key.

Key-based Encryption

In key-based encryption, the key is the most important ingredient in producing ciphertext. A key is a sequence of bits used in cryptography to let people encrypt and decrypt data; the same key may also be used in further mathematical operations. Given a secret message, the key determines the mapping from the plaintext to the ciphertext.
The key used with a particular cryptosystem has the property that whenever it is applied to the ciphertext it allows the encrypted communication to be decrypted, and applied the other way it encrypts the plaintext.

In earlier times calculation was laborious, so short keys (in terms of bits) were preferred, even though longer keys are safer. Messages can also be encrypted in n-bit blocks. It remains true that the longer a key is, the more difficult it is to break the encrypted message. Encryption falls into two categories, as listed below.

  • Private Key or Symmetric Key Encryption
  • Public Key or Asymmetric Key Encryption

Private Key / Symmetric Key Encryption

Roughly two thousand years ago Julius Caesar used this scheme to send messages to his generals. He used a very simple key-based classical algorithm in which each letter is shifted by a pre-agreed number of places; because the shift can be varied, an outsider cannot guess which value will be used next. Take a shift of 3, for example: “A” is replaced by “D”, “B” by “E”, and so on, wrapping around so that “X” becomes “A”.

ABCDEFGHIJKLMNOPQRSTUVWXYZ

DEFGHIJKLMNOPQRSTUVWXYZABC

The same substitution applies to lower-case letters, and the alphabet wraps around at the end (S. Tewksbury).
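
As a quick illustration, the following Java sketch (the class name and sample message are mine, not from any source) applies a Caesar shift of 3 to an upper-case message, wrapping around at the end of the alphabet:

public class CaesarShift3
{
    public static void main(String[] args)
    {
        String plaintext = "ATTACK AT DAWN";
        int key = 3;                                                  // the shift agreed between sender and receiver
        StringBuilder cipher = new StringBuilder();
        for (char ch : plaintext.toCharArray())
        {
            if (ch >= 'A' && ch <= 'Z')
                cipher.append((char) ('A' + (ch - 'A' + key) % 26));  // wrap around so X, Y, Z map to A, B, C
            else
                cipher.append(ch);                                    // leave spaces unchanged
        }
        System.out.println(cipher);                                   // prints DWWDFN DW GDZQ
    }
}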

The history of cryptography is long, so it can be divided into two eras:

  • Classic era cryptography
  • Computer era cryptography

In the classic era there were no computers or electronic machines, so people worked with pen and paper to conceal and reveal the content of letters. Julius Caesar's technique is a classic-era practice; until WWII, essentially all cryptographic techniques are known as classic-era cryptography. After WWII the development of machines made cryptography far more complex, and at the same time made it easy to break most of the classic-era, key-based encryptions. The key was central to these schemes, and knowledge of the key made it easy to break the encryption algorithm. ROT13, widely known as the Caesar cipher, is the best-known example: it extends Julius Caesar's scheme by shifting the letters with a fixed key of 13. This algorithm was popular at the beginning of the computer era; to use ROT13, both parties had to use the same key to encrypt and decrypt, and this key is called the secret key. The arrival of machines also standardised key codes, and code books were prepared and shared as key-code books.

In ROT13 the letters are simply rotated by 13 places. Applying the scheme is as easy as Caesar's technique, except that the fixed key is 13: “a” becomes “n”, “m” becomes “z”, and the alphabet wraps around where necessary. The limitation is that only the 26 letters of the English alphabet can be used. The appealing property of this choice of key is that the function is its own inverse: for any text x, ROT13(ROT13(x)) = x.

This characteristic is called an involution in mathematics and a reciprocal cipher in cryptography. The scheme works as shown below:

ABCDEFGHIJKLM ↔ abcdefghijklm

NOPQRSTUVWXYZ ↔ nopqrstuvwxyz

The problem, again, is that if someone steals the data it is very easy to decode. This is not a serious cryptographic proposal, even though it is a (weak) example of a secret-key cryptosystem.

Observed closely, ROT13 is partially homomorphic, in particular with respect to concatenation, because of its reciprocal property. Let us write a program to demonstrate this using the secret key 13: we encrypt two texts with the algorithm, concatenate the encrypted texts, and finally decrypt the result.

Java ROT13 Code.

import java.util.Scanner;

public class ROT13
{
    public static void main(String[] args)
    {
        Scanner sc = new Scanner(System.in);
        System.out.println("Enter your text");
        String t = sc.nextLine();
        int key = 13;
        for (int i = 0; i < t.length(); i++)
        {
            char ch3 = t.charAt(i);
            if (ch3 >= 'a' && ch3 <= 'm') ch3 += key;         // first half of the lower-case alphabet shifts forward
            else if (ch3 >= 'n' && ch3 <= 'z') ch3 -= key;    // second half shifts back, giving the wrap-around
            else if (ch3 >= 'A' && ch3 <= 'M') ch3 += key;    // same for upper case
            else if (ch3 >= 'N' && ch3 <= 'Z') ch3 -= key;
            System.out.print(ch3);
        }
    }
}

OUTPUT

Enter your text

HelloWorld

UryybJbeyq

The above program is a very simple illustration of how the ROT13 scheme works; in the output, “UryybJbeyq” is the ciphertext it produces. To check the homomorphic property, we can encrypt two texts separately, concatenate the two ciphertexts (the “addition” operation), and then apply ROT13 again to the concatenation to see whether the original text is recovered.

import java.util.Scanner;

public class ROT13Concat
{
    // Applies ROT13 to a string and returns the result (the function is its own inverse).
    static String rot13(String t)
    {
        int key = 13;
        StringBuilder out = new StringBuilder();
        for (int i = 0; i < t.length(); i++)
        {
            char ch3 = t.charAt(i);
            if (ch3 >= 'a' && ch3 <= 'm') ch3 += key;
            else if (ch3 >= 'n' && ch3 <= 'z') ch3 -= key;
            else if (ch3 >= 'A' && ch3 <= 'M') ch3 += key;
            else if (ch3 >= 'N' && ch3 <= 'Z') ch3 -= key;
            out.append(ch3);
        }
        return out.toString();
    }

    public static void main(String[] args)
    {
        Scanner sc = new Scanner(System.in);

        System.out.println("Enter your text");
        System.out.println(rot13(sc.nextLine()));      // first ciphertext

        System.out.println("Enter your 2nd text");
        System.out.println(rot13(sc.nextLine()));      // second ciphertext

        System.out.println("Enter the 1st encrypted result=");
        String a = sc.nextLine();
        System.out.println("Enter the 2nd encrypted result=");
        String a1 = sc.nextLine();

        String con = a + a1;                           // concatenate the two ciphertexts
        System.out.println(con);
        System.out.println(rot13(con));                // decrypting the concatenation recovers the full plaintext
    }
}

OUTPUT

Enter the 1st encrypted result=Uryyb

Enter the 2nd encrypted result=Jbeyq

UryybJbeyq

HelloWorld

Explanation of Output

Text a = Encrypt (13, “Hello”); a = Uryyb

Text b = Encrypt (13, “World”); b = Jbeyq

Text c = Concat (a,b); c = UryybJbeyq

Text d = Decrypt(13, c); d = HelloWorld

As we can see, concatenating the two ciphertexts and then decrypting gives exactly the same result as decrypting the pieces and concatenating the plaintexts. This demonstrates that ROT13 is a partially homomorphic scheme with respect to concatenation.

The problems with this technique began when machines came into being and it became easy to break the secret code; a further drawback was that users could only encrypt letters, not numbers. Gradually a new scheme, ROT47, was introduced, derived from ROT13. It gave users a much wider range, so they could now handle numbers and special characters as well: ROT47 uses a larger alphabet, derived from a standard character encoding known as the American Standard Code for Information Interchange (ASCII).


ASCII is a 7-bit code for the English alphabet; its codes are used to represent data, including numbers, in central processing units, communications equipment and other related devices. The standard was first published in 1967 and was later revised as "ANSI X3.4-1968", then "ANSI X3.4-1977" and finally "ANSI X3.4-1986". Being a seven-bit code, it can represent at most 128 characters. It currently defines 95 printable characters, comprising 26 upper-case letters (A to Z), 26 lower-case letters (a to z), 10 digits (0 to 9) and 33 special characters, including arithmetic signs, punctuation marks and the space character (Maini A K, 2007).

Like ROT13, however, ROT47 is not able to genuinely protect text, and it has the same homomorphic property with respect to concatenation. Looking closely at both schemes, there is only a small difference between them: the working pattern is the same, but ROT47 has the advantage of handling digits and special characters in addition to letters, substituting characters by their ASCII codes during encryption and decryption. Knowledge of the ASCII codes is enough to reveal the contents, so the scheme ends up no stronger than ROT13; its failure, once again, lies in the fixed, publicly known character mapping.
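
A minimal Java sketch of ROT47 (my own illustration, assuming the usual mapping over the 94 printable ASCII characters from '!' to '~') shows how the idea extends to digits and punctuation, and that applying the function twice restores the plaintext:

public class ROT47
{
    // Rotates every printable ASCII character ('!' = 33 ... '~' = 126) by 47 places.
    static String rot47(String s)
    {
        StringBuilder out = new StringBuilder();
        for (char ch : s.toCharArray())
        {
            if (ch >= '!' && ch <= '~')
                out.append((char) (33 + ((ch - 33 + 47) % 94)));
            else
                out.append(ch);                   // spaces and control characters are left alone
        }
        return out.toString();
    }

    public static void main(String[] args)
    {
        String c = rot47("Hello World 123!");
        System.out.println(c);                    // the ciphertext
        System.out.println(rot47(c));             // applying ROT47 again restores "Hello World 123!"
    }
}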

Is Symmetric Key Encryption Secure?

The ROT13 encryption scheme is not secure at all, because its output can be decoded very easily; that is its main disadvantage.

The reason we encrypt text is to protect it from illegitimate access, yet this scheme involves only 26 characters, which makes it simple to decipher even for an ordinary person who obtains the written ciphertext.

For example, encrypting "atotaa" yields the cipher "ngbgnn", which is easy to work out from the repetition of "n" and "g".
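
The leak is easy to demonstrate: counting the characters of the ciphertext exposes the repetition pattern of the plaintext. The sketch below (class name and sample string mine) does exactly that:

import java.util.HashMap;
import java.util.Map;

public class FrequencyLeak
{
    public static void main(String[] args)
    {
        String cipher = "ngbgnn";                  // ROT13 of "atotaa"
        Map<Character, Integer> counts = new HashMap<>();
        for (char c : cipher.toCharArray())
            counts.merge(c, 1, Integer::sum);      // count how often each character occurs
        System.out.println(counts);                // e.g. {b=1, g=2, n=3}: the repetition pattern of "atotaa" shows through
    }
}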

ROT47 is a newer scheme derived from ROT13 and is another example of symmetric-key encryption, if slightly more involved. In ROT47 the characters are rotated in the same way as in ROT13, but over the printable ASCII range, so digits and many special characters can be handled instead of only the basic 26 letters. Awareness of the ASCII codes, however, is enough to uncover the contents, so the scheme falls into the same insecure category as ROT13; its failure is once again rooted in the well-known, fixed mapping of the ASCII codes.

Public Key or Asymmetric Key Encryption

An important contribution to the field, named "public-key cryptography", was made by Whitfield Diffie, Martin Hellman and Ralph Merkle in 1976, when they introduced an elegant public-key cryptosystem. The major difference from earlier schemes is one extra key, called the public key: the public key is used for encryption and the private key is used for decryption.

Cryptography has been a derivative security measure: once a secure channel exists along which keys can be transmitted, the security can be extended to other channels of higher bandwidth or smaller delay by encrypting the messages sent on them. The effect has been to limit the use of cryptography to communications among people who have made prior preparation for cryptographic security.

(W Diffie and M Hellman, 1976)


Building on the idea of Diffie et al., the first public-key algorithm was published in 1978 at MIT by Ron Rivest, Adi Shamir and Leonard Adleman. Diffie and Hellman had described what is meant by a trapdoor cipher, but how do you construct one? The most commonly used cipher of this type is RSA encryption, where RSA stands for the initials of its three inventors: Rivest, Shamir and Adleman.

It is based on the following idea: multiplying two numbers together is easy, particularly with a computer, but factoring the resulting number back into its prime factors can be very difficult.

To recover the secret primes, one needs to factor N, which appears to be an extremely hard problem. But exactly how is N used to encode a message, and how are p and q used to decode it? A complete example is presented below, using tiny prime numbers so that the arithmetic is easy to follow.

In practice the RSA scheme uses very large prime numbers. This makes the scheme more secure, because an attacker must factor the modulus to break it: small numbers are easy to factor, but large ones are not. Accordingly, a 768-bit key size was used for ordinary purposes and a 1024-bit key size was suggested for commercial use, while for highly sensitive information the key size should be doubled to 2048 bits to provide an extra margin against security threats.
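
In modern Java, key pairs of these sizes are generated with the standard java.security API rather than by hand. A minimal sketch (class name mine) requesting a 2048-bit RSA key pair:

import java.security.KeyPair;
import java.security.KeyPairGenerator;

public class RsaKeyGenDemo
{
    public static void main(String[] args) throws Exception
    {
        KeyPairGenerator kpg = KeyPairGenerator.getInstance("RSA");
        kpg.initialize(2048);                                                // 2048-bit modulus, as recommended above
        KeyPair kp = kpg.generateKeyPair();
        System.out.println("Algorithm: " + kp.getPublic().getAlgorithm());   // prints RSA
        System.out.println("Format:    " + kp.getPublic().getFormat());      // prints X.509
    }
}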

RSA described openly how the scheme works, so that anyone can derive their own encryption and decryption keys by their method. First, choose two distinct prime numbers p and q. Then multiply them and make n = pq public. Exposing only n helps to hide the original primes, because factoring a product of two large primes is very hard for an illegitimate party; this in turn protects the private exponent d and the public exponent e derived from it. Choose a large integer d that is relatively prime to (p-1)(q-1), i.e. with gcd(d, (p-1)(q-1)) = 1. Finally, compute the integer e, with 1 < e < φ(n), as the multiplicative inverse of d modulo φ(n). Following this somewhat tedious procedure, one can encrypt and decrypt.

Mathematical Implementation of the RSA Algorithm

The RSA algorithm proceeds in the following steps:

  • Choose two prime integers, p = 61 and q = 53.
  • Multiply them: n = pq = 61 × 53 = 3233. The value of n is then used as the modulus for both the public and the private key.
  • Calculate φ(n) = (p-1)(q-1) = 3120, where φ is Euler's totient function.
  • Choose the public exponent e = 17: any integer with 1 < e < φ(n) and gcd(e, φ(n)) = 1 will do.
  • Compute d = e^-1 mod φ(n) = 2753. This is the private key exponent, so it must be kept secret; the extended Euclidean algorithm is used to determine it.
  • The public key is (n = 3233, e = 17); for a message m the encryption function is c = m^e mod n.
  • The private key is (n = 3233, d = 2753); for a ciphertext c the decryption function is m = c^d mod n.

For example, to encrypt m = 65 we compute

c = 65^17 mod 3233 = 2790.

To decrypt c = 2790, we calculate

m = 2790^2753 mod 3233 = 65.

These calculations are tedious by hand but easy for a computer, which can decode the ciphertext and recover the original message m = 65.

Java Code for RSA Algorithm:

import java.util.Scanner;

public class RSACode
{
    public static void main(String[] args)
    {
        Scanner sc = new Scanner(System.in);

        System.out.println("Please enter 1st prime no p");
        int p = sc.nextInt();
        System.out.println("Please enter 2nd prime no q");
        int q = sc.nextInt();

        int n = p * q;
        System.out.println("p*q = n " + n);

        // Totient of n
        int tn = (p - 1) * (q - 1);
        System.out.println("Totient of n = " + tn);

        // Suggest a small public exponent of the form 2^i + 1 that is coprime to the totient.
        int l = 0;
        for (int i = 1; i < tn; i++)
        {
            int fi = (int) (Math.pow(2, i) + 1);
            int a = tn, b = fi;                        // compute the gcd on copies so tn is preserved
            while (b != 0) { int r = a % b; a = b; b = r; }
            if (a == 1)
            {
                l = fi;
                System.out.println("GCD of [" + tn + "," + fi + "] is 1, candidate for e");
                break;
            }
        }
        System.out.println("So please use " + l + " as e");

        System.out.println("Enter number to use as exponent e");
        int e = sc.nextInt();

        // Find d with (e*d) mod tn == 1, i.e. the multiplicative inverse of e modulo the totient.
        for (int d = 1; d < tn; d++)
            if ((e * d) % tn == 1)
                System.out.println("The value of e^-1 mod tn = d == " + d);

        System.out.println("Enter the above value of d");
        int d1 = sc.nextInt();

        System.out.println("Enter number to encrypt");
        int m = sc.nextInt();

        // Encryption: c = m^e mod n (double arithmetic, so this only works for small numbers).
        double encryption = Math.pow(m, e) % n;
        System.out.println("encryption Key = " + encryption);
        System.out.println("The value of d = e^-1 mod tn == " + d1);

        // Decryption: m = c^d mod n.
        double decrypt = Math.pow(encryption, d1) % n;
        System.out.println(encryption + " to decryption is = " + decrypt);
    }
}

OUTPUT

Please enter 1st prime no p
5
Please enter 2nd prime no q
7
p*q = n 35
Totient of n = 24
GCD of [24,5] is 1, candidate for e
So please use 5 as e
Enter number to use as exponent e
5
The value of e^-1 mod tn = d == 5
Enter the above value of d
5
Enter number to encrypt
9
encryption Key = 4.0
The value of d = e^-1 mod tn == 5
4.0 to decryption is = 9.0

The above Java code works fine for small prime integers with a small exponent and a small value of d (the multiplicative inverse).

OUTPUT

Please enter 1st prime no p
61
Please enter 2nd prime no q
53
p*q = n 3233
Totient of n = 3120
GCD of [3120,17] is 1, candidate for e
So please use 17 as e
Enter number to use as exponent e
17
The value of e^-1 mod tn = d == 2753
Enter the above value of d
2753
Enter number to encrypt
65
encryption Key = 887.0
The value of d = e^-1 mod tn == 2753
887.0 to decryption is = NaN

The same Java code runs on larger numbers, but the double arithmetic cannot represent the intermediate values exactly: Math.pow(65, 17) exceeds the precision of a double, so the ciphertext comes out wrong (887 instead of the correct 2790), and raising it to the power 2753 overflows to infinity, which reduces modulo n to NaN. To handle realistic values a different data type is needed.
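
One such data type is java.math.BigInteger, whose modPow method performs modular exponentiation exactly. The sketch below (class name mine) repeats the worked example with n = 3233, e = 17, d = 2753 and m = 65 and recovers the expected ciphertext 2790:

import java.math.BigInteger;

public class RsaBigIntegerDemo
{
    public static void main(String[] args)
    {
        BigInteger n = BigInteger.valueOf(3233);   // p*q = 61*53
        BigInteger e = BigInteger.valueOf(17);     // public exponent
        BigInteger d = BigInteger.valueOf(2753);   // private exponent
        BigInteger m = BigInteger.valueOf(65);     // message

        BigInteger c = m.modPow(e, n);             // 65^17 mod 3233 = 2790
        BigInteger back = c.modPow(d, n);          // 2790^2753 mod 3233 = 65
        System.out.println("ciphertext = " + c);
        System.out.println("decrypted  = " + back);
    }
}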

Practical Implementation

An “RSA operation”, whether encrypting, decrypting, signing, or verifying, is fundamentally a modular exponentiation. This computation is carried out as a sequence of modular multiplications.

In practical use it is common to select a small public exponent for the public key; in fact, a whole group of users can use the same public exponent, each with a different modulus. (There are some restrictions on the prime factors of the modulus when the public exponent is fixed.) Because of this, encryption is faster than decryption and verification is faster than signing. With the typical modular exponentiation algorithms used to implement RSA, public-key operations take O(k^2) steps, private-key operations take O(k^3) steps, and key generation takes O(k^4) steps, where k is the number of bits in the modulus (RSA 2010).
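
As an illustration of this sequence of modular multiplications, the sketch below (a minimal square-and-multiply routine of my own, not code from the cited source) computes m^e mod n using one or two multiplications per bit of the exponent; it assumes the modulus is small enough that its square fits in a Java long:

public class ModPowDemo
{
    static long modPow(long base, long exp, long mod)
    {
        long result = 1 % mod;
        base %= mod;
        while (exp > 0)
        {
            if ((exp & 1) == 1)
                result = (result * base) % mod;    // multiply step for a 1 bit of the exponent
            base = (base * base) % mod;            // square step
            exp >>= 1;
        }
        return result;
    }

    public static void main(String[] args)
    {
        System.out.println(modPow(65, 17, 3233));      // prints 2790, matching the worked example
        System.out.println(modPow(2790, 2753, 3233));  // prints 65
    }
}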

Is RSA Secure?

The scheme is not unconditionally secure, as the following classes of attack show:

  • Elementary attacks
  • Low private exponent attacks
  • Low public exponent attacks
  • Implementation attacks

Boneh et al Homomorphic Encryption

Boneh (1999) examined the RSA cryptosystem, which was first revealed in the 1977-1978 issue of Scientific American. The cryptosystem is widely used in practice for providing confidentiality and for certifying the authenticity of digital data. By then RSA was deployed in many commercial organisations: it is used by web servers and browsers to secure web traffic, to ensure the confidentiality and authenticity of e-mail, to secure remote login sessions, and it is at the heart of electronic credit-card payment systems. In short, RSA is commonly employed wherever the security of digital data is at risk. (Boneh D, 1999)

Since its first publication, the RSA scheme has been analysed for weaknesses by many researchers. From 1977 to 1999 these researchers produced many interesting attacks, but none of them is devastating; they mainly illustrate the dangers of improper use of RSA. Indeed, implementing RSA securely is a nontrivial task.

Twenty years of research into inverting the RSA function has produced various insightful attacks, but no devastating attack has ever been found. The attacks discovered so far mainly illustrate pitfalls to avoid when deploying RSA. It currently appears that correct implementations can be trusted to provide security in the electronic world.

Open attacks on the RSA scheme:

The chosen-ciphertext attack is well known in cryptography: the attacker gathers information piece by piece and then processes it to recover the secret. An attack of this kind against RSA was described by Y. Desmedt and A. M. Odlyzko.

In RSA one chooses two primes to compute the modulus n, and φ(n) is used to derive the exponents. An adversary can mount a brute-force attack on the public key (N, e) by trying to factor N, which also reveals φ(N). On the other hand, if only very large primes are allowed, the speed of the scheme suffers, because performance depends on the length of the n-bit key.
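
As a toy illustration of why a small modulus offers no protection (my own example, reusing the n = 3233 from earlier), trial division recovers the factors almost instantly, after which φ(n) and d follow:

public class FactorModulus
{
    public static void main(String[] args)
    {
        long n = 3233;                             // the toy modulus from the worked example
        for (long p = 2; p * p <= n; p++)
        {
            if (n % p == 0)
            {
                System.out.println("p = " + p + ", q = " + (n / p));   // prints p = 53, q = 61
                break;
            }
        }
    }
}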

When encrypting with a small public exponent (e.g. e = 3) and a small message m (so that m^e < n), the ciphertext is simply m^e with no modular reduction, and the plaintext can be recovered by taking the integer e-th root of the ciphertext. Another attack applies if a sender sends the same plaintext, encrypted, to e or more recipients who share the same exponent e but have different primes p, q and moduli n: in that case it is simple to recover the plaintext using the Chinese remainder theorem. J. Håstad observed that this attack is possible even if the plaintexts are not identical, provided the attacker knows a linear relation among them, and Don Coppersmith later improved this low-exponent attack.

RSA has the property that the product of two ciphertexts equals the encryption of the product of the corresponding plaintexts; that is, E(m1)·E(m2) mod n = (m1·m2)^e mod n = E(m1·m2 mod n). Because of this multiplicative property, a chosen-ciphertext attack is possible. For example, an attacker who wants the decryption of a ciphertext c = m^e (mod n) may ask the owner of the private key to decrypt an innocent-looking ciphertext c' = r^e·c (mod n) for a random r chosen by the attacker. Because of the multiplicative property, the decryption of c' is r·m (mod n), from which the attacker recovers m by dividing by r modulo n.
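
The multiplicative property is easy to verify with the toy key from the earlier example. In the sketch below (class name and sample messages mine), multiplying the encryptions of 7 and 11 and then decrypting yields 77, their product:

import java.math.BigInteger;

public class RsaMultiplicativeDemo
{
    public static void main(String[] args)
    {
        BigInteger n = BigInteger.valueOf(3233);
        BigInteger e = BigInteger.valueOf(17);
        BigInteger d = BigInteger.valueOf(2753);

        BigInteger c1 = BigInteger.valueOf(7).modPow(e, n);    // E(7)
        BigInteger c2 = BigInteger.valueOf(11).modPow(e, n);   // E(11)
        BigInteger cProduct = c1.multiply(c2).mod(n);          // E(7) * E(11) mod n

        // Decrypting the product of the ciphertexts gives the product of the plaintexts.
        System.out.println(cProduct.modPow(d, n));             // prints 77
    }
}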

 
