Data processing - Wikipedia
Information is processed data that may be used “as is” or combined with further data or information; the receiver of the information then acts on it. Data processing is, generally, "the collection and manipulation of items of data to produce meaningful information." In this sense it can be considered a subset of information processing, "the change (processing) of information in any manner detectable by an observer." Reflecting this broader term, in 1996 the Data Processing Management Association (DPMA) changed its name to the Association of Information Technology Professionals. Information processing systems include business software, operating systems, computers, networks and mainframes.
Each of these stages plays an important role in the collection, analysis and distribution actions performed by a computer system. Some experts believe the input process itself could be divided into as many as three stages; the general view, however, is that data is entered into a system using some form of input device.
An input device is able to collect data at its source or point of measurement. Data entered by a human typically arrives through a keyboard or microphone, or perhaps even through the movement of the eyes or another body part. Other forms of input device, such as thermometers, sensors and clocks, also meet the general definition. The input stage of IPOS (input, processing, output, storage) can also be referred to as the encoding stage. The processing agent is typically some form of software or firmware, with a specific action taken on a particular type of data.
In a portable or desktop computer, it is common for the processing agent to be active even before the data enters the system; in fact, the processing software commonly requests data and guides its input. The disadvantages of the database approach are presented in Table 2.
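The input, processing and output stages described above can be sketched as a minimal pipeline. This is only an illustration; the stage names and the temperature-conversion example are invented, not part of any real system described here.

```python
# Minimal sketch of the input -> process -> output (IPOS) cycle.
# The stage names and helpers are illustrative, not a real API.

def encode_input(raw_reading):
    """Input stage: capture a raw reading from a device and encode it."""
    return float(raw_reading)

def process(value):
    """Processing stage: the 'processing agent' applies a specific action
    to a particular type of data -- here, converting Celsius to Fahrenheit."""
    return value * 9 / 5 + 32

def output(result):
    """Output stage: format the result for the receiver of the information."""
    return f"{result:.1f} F"

# A thermometer (a sensor-style input device) supplies the raw data.
reading = encode_input("21.5")
print(output(process(reading)))  # 21.5 C -> 70.7 F
```

Note how the processing agent defines which action applies to which kind of data, while the input stage merely encodes what the device measured.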
In its most basic form, metadata consists of the labels and categories placed on data to make analysis easier. For instance, the metadata for a book would contain not the book itself but its author, language, and ISBN.
Most people encounter and manipulate metadata when searching for subjects on the Internet. The bits of information pertaining to Web sites that most search engines list are all metadata, and the searcher sifts through them to find pertinent data. In a company's data governance system, metadata is used to classify and control the data available. When analysts choose and manipulate large data sets, they do so through the information collected as metadata.
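To make the idea concrete, the book example above could be represented as a small metadata record. The field names and values here are purely illustrative.

```python
# A metadata record describes a book without containing the book itself.
book_metadata = {
    "author": "Jane Doe",        # illustrative values, not a real catalog entry
    "language": "English",
    "isbn": "978-0-00-000000-0",
}

# A search engine or governance system sifts through such records,
# not the underlying documents, to find pertinent data.
def matches(record, **criteria):
    return all(record.get(k) == v for k, v in criteria.items())

print(matches(book_metadata, language="English"))  # True
```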
The file type, the name, the timestamp, the physical and electronic location, the owner, and the access permissions are all common types of metadata found in company file systems.

A hierarchical database model is one in which the data are organized in a top-down, or inverted tree-like, structure.

Table 1: Advantages of the Database Approach

Reduced data redundancy: The database approach can reduce or eliminate data redundancy. Data is organized by the DBMS and stored in only one location, resulting in more efficient use of system storage space.

Improved data integrity: With the traditional approach, some changes to data were not reflected in all copies of the data kept in separate files. The database approach prevents this because there are no separate files containing copies of the same piece of data.

Easier modification and updating: The DBMS coordinates updates and data modifications. Programmers and users do not have to know where the data is physically stored; data is stored and modified once, at only one location.

Data and program independence: The DBMS organizes the data independently of the application program, so the application program is not affected by the location or type of data. Introducing new data types not relevant to a particular application does not require rewriting that application to maintain compatibility with the data file.

Better access to data and information: Most DBMSs include software that makes it easy to access and retrieve data from the database; in most cases, simple commands can be given to get important information. Relationships between records can be more easily investigated and exploited, and applications can be more easily combined.

Standardization of data access: A primary feature of the database approach is a standardized, uniform approach to database access, meaning that the same overall procedures are used by all application programs to retrieve data and information.

A framework for program development: Standardized database access procedures can mean more standardization of program development. Because programs go through the DBMS to gain access to data in the database, standardized database access can provide a consistent framework for program development. In addition, each application program need only address the DBMS, not the actual data files, reducing application development time.

Better overall protection of the data: Access to centrally located data is easier to monitor and control. Security codes and passwords can ensure that only authorized people have access to particular data and information in the database, ensuring privacy.

Shared data and information resources: The cost of hardware, software, and personnel can be spread over a large number of applications and users.
This sharing of resources is a primary feature of the DBMS approach.

Table 2: Disadvantages of the Database Approach

Specialized staff: Additional specialized staff and operating personnel may be needed to implement and coordinate the use of the database. It should be noted, however, that some organizations have been able to implement the database approach with no additional personnel.

Increased vulnerability: Even though databases offer better security because security measures can be concentrated on one system, they may also make more data accessible to a trespasser if security is breached. In addition, if the DBMS fails for some reason, multiple application programs are affected.

The hierarchical model is best suited for situations where the logical relationships between data can be properly represented with the one-parent-many-children approach. A network model is an extension of the hierarchical database model. A relational model describes data using a standard tabular format: all data elements are placed in two-dimensional tables called relations, which are the equivalent of files. Data inquiries and manipulations can be made via columns or rows given specific criteria. Network database models tend to offer more flexibility than hierarchical models.
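The relational model's row-and-column queries can be sketched with plain Python structures standing in for a relation. The table and field names below are invented for illustration only.

```python
# A 'relation' as a list of rows; each row maps column names to values.
employees = [
    {"id": 1, "name": "Ada",  "dept": "IT",    "salary": 5200},
    {"id": 2, "name": "Ben",  "dept": "Sales", "salary": 4100},
    {"id": 3, "name": "Cleo", "dept": "IT",    "salary": 4800},
]

# Row selection: keep only rows meeting specific criteria (like SQL's WHERE).
it_staff = [row for row in employees if row["dept"] == "IT"]

# Column projection: keep only certain columns of the result (like SELECT name).
names = [row["name"] for row in it_staff]

print(names)  # ['Ada', 'Cleo']
```

Selecting by row and projecting by column in this way is exactly the "inquiries via columns or rows given specific criteria" that the relational model supports.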
However, they are more difficult to develop and use because of the complexity of the relationships. The relational database model offers the most flexibility and became very popular. DBMSs are classified by the type of database model they support; a relational DBMS, for example, follows the relational model. The functions of a DBMS include data storage and retrieval, database modification, data manipulation, and report generation. A data definition language (DDL) is a collection of instructions and commands used to define and describe data and data relationships in a particular database.
File descriptions, area descriptions, record descriptions, and set descriptions are terms the DDL defines and uses. A data dictionary is also important to database management.
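A DDL statement can be seen in action through Python's built-in sqlite3 module. The table and column names below are invented for the sketch; the point is that the DDL describes structure without inserting any data.

```python
import sqlite3

# DDL describes the data and its constraints; it does not insert any rows.
ddl = """
CREATE TABLE customer (
    id      INTEGER PRIMARY KEY,
    name    TEXT NOT NULL,
    country TEXT
)
"""

conn = sqlite3.connect(":memory:")
conn.execute(ddl)

# The database's own catalog now records the definition, much like
# the file and record descriptions a DDL defines and uses.
tables = [r[0] for r in conn.execute(
    "SELECT name FROM sqlite_master WHERE type = 'table'")]
print(tables)  # ['customer']
conn.close()
```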
This is a detailed description of the structure and intended content of the database. For example, a data dictionary might specify the maximum number of characters allowed in each type of field and whether the field content can include numbers, letters, or specially formatted content such as dates or currencies.
Data dictionaries are used to provide a standard definition of terms and data elements, assist programmers in designing and writing programs, simplify database modifications, reduce data redundancy, increase data reliability, and decrease program development time.
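A data dictionary entry of the kind just described, a maximum length and an allowed format per field, can be sketched as follows. The field rules are invented for illustration.

```python
import re

# A miniature data dictionary: each field's maximum length and format.
data_dictionary = {
    "customer_name": {"max_len": 40, "pattern": r"^[A-Za-z .'-]+$"},
    "order_date":    {"max_len": 10, "pattern": r"^\d{4}-\d{2}-\d{2}$"},
    "amount":        {"max_len": 12, "pattern": r"^\d+\.\d{2}$"},
}

def valid(field, value):
    """Check a value against its data-dictionary definition."""
    rule = data_dictionary[field]
    return len(value) <= rule["max_len"] and re.match(rule["pattern"], value) is not None

print(valid("order_date", "2024-01-31"))  # True
print(valid("amount", "12,50"))           # False: wrong format for currency
```

Centralizing such rules is how a data dictionary standardizes definitions, reduces redundancy, and increases data reliability.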
The choice of a particular DBMS typically is a function of several considerations. Economic cost considerations include software acquisition costs, maintenance costs, hardware acquisition costs, database creation and conversion costs, personnel costs, training costs, and operating costs.
Most DBMS vendors are combining their products with text editors and browsers, report generators, listing utilities, communication software, data entry and display features, and graphical design tools.
Consequently, those looking for a total design system have many choices. Most data governance systems have automatic data-mining programs designed to fit the analysts' needs. These programs sort through and summarize data according to certain parameters. If a company wanted to cut costs in manufacturing, for instance, a data-mining run would search for figures and facts concerning manufacturing, collect that data into categories such as supply costs and worker costs, and finally transmit the categorized data to the proper people.
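The cost-cutting example above, sorting records into categories such as supply costs and worker costs and summarizing each, might look like the following sketch. The expense records are invented.

```python
from collections import defaultdict

# Invented manufacturing expense records to be mined.
records = [
    {"item": "steel",          "category": "supply", "cost": 1200},
    {"item": "assembly wages", "category": "worker", "cost": 3400},
    {"item": "bolts",          "category": "supply", "cost": 150},
    {"item": "overtime",       "category": "worker", "cost": 800},
]

# Sort the data into categories and summarize each one.
totals = defaultdict(int)
for rec in records:
    totals[rec["category"]] += rec["cost"]

# The categorized summary is what gets transmitted to the proper people.
print(dict(totals))  # {'supply': 1350, 'worker': 4200}
```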
To identify patterns, data mining is controlled by strict guidelines. For instance, one analysis might mine for associative patterns: events that connect to each other through some type of relationship. Sequential patterns are another data-mining parameter, in which events naturally lead to one another, as in a supply chain. Some data-mining activities focus on classification and the search for new patterns in the available data.
Still more data mining might be done to predict trends or outcomes of particular events.
Although data mining is an immensely popular tool, it does have blind spots. For the data mining to be useful, a skilled analyst must set clear parameters and interpret the data correctly.
Data mining does not make value judgments or attribute importance. Data clustering is a subset of data mining and is usually performed first in a data-mining activity. It is an automatic function, based on mathematical principles, that groups data into similar categories. For trend and relationship analysis, data is commonly copied into a separate database, a data warehouse; the warehouse is not the live, active system but is updated daily or weekly. Smaller parts of this very large database (VLDB) could be warehoused separately for further analysis to avoid slowing it down.
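Clustering, automatically grouping similar items together, can be illustrated with a tiny one-dimensional k-means sketch. The data values and starting centers are invented; real clustering tools work on many dimensions at once.

```python
# Tiny 1-D k-means: group values into clusters of similar magnitude.
values = [1.0, 1.2, 0.8, 9.5, 10.1, 9.9]
centers = [0.0, 5.0]  # invented starting centers

for _ in range(10):  # a few refinement passes are plenty here
    # Assign each value to its nearest center.
    clusters = [[] for _ in centers]
    for v in values:
        nearest = min(range(len(centers)), key=lambda i: abs(v - centers[i]))
        clusters[nearest].append(v)
    # Move each center to the mean of its cluster.
    centers = [sum(c) / len(c) if c else centers[i]
               for i, c in enumerate(clusters)]

print(clusters)  # [[1.0, 1.2, 0.8], [9.5, 10.1, 9.9]]
```

The grouping emerges from the mathematics alone, which is why clustering is described as an automatic first step in a data-mining activity.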
Such a database may have originated as a public database, but typically once the company begins adding or removing information it is considered a private database. By contrast, public databases are collections of names, addresses, and data that are compiled for resale in the list rental market, i.e., publicly available data. However, a new trend is combining features of the two approaches.
Cooperative databases are compiled by combining privately held response files of participating companies so that costs are shared.
Many consider this to be a future trend, such that virtually all catalog marketers, for example, would use cooperative databases. A geographic information system (GIS) combines demographic, environmental, or other business data with geographic data.
This can involve road networks and urban mapping, as well as consumer buying habits and how they relate to the local geography. Output is often presented in a visual data map that facilitates the discovery of new patterns and knowledge.
Customer relationship management (CRM) is another area in which data processing and data management are deeply involved. CRM is a set of methodologies and software applications for managing the customer relationship. CRM gives management, salespeople, marketers, and potentially even customers sufficient detail regarding customer activities and contacts.
This allows companies to provide other possible products or useful services, as well as other business options.