Data Warehousing
Question 1
The process of removing details from a given state representation is called ___.
Extraction
Mining
Selection
Abstraction
Question 1 Explanation:
→ Abstraction is the process of removing physical, spatial, or temporal details or attributes in the study of objects or systems to focus attention on details of greater importance; it is similar in nature to the process of generalization.
→ Data extraction is the act or process of retrieving data out of (usually unstructured or poorly structured) data sources for further data processing or data storage (data migration).
→ Data mining is the process of discovering patterns in large data sets, involving methods at the intersection of machine learning, statistics, and database systems.
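→ As a minimal illustration of extraction (the log lines and field names below are made up for this sketch, not from any particular system), the following Python snippet pulls structured key=value fields out of unstructured text for further processing:

    import re

    # Hypothetical unstructured source: free-form log lines.
    raw_lines = [
        "2023-01-05 user=alice action=login",
        "2023-01-06 user=bob action=purchase amount=42.50",
    ]

    # Extraction: retrieve key=value pairs for further processing or storage.
    pattern = re.compile(r"(\w+)=([\w.]+)")
    records = [dict(pattern.findall(line)) for line in raw_lines]

    print(records)
    # [{'user': 'alice', 'action': 'login'},
    #  {'user': 'bob', 'action': 'purchase', 'amount': '42.50'}]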
Question 2
Downflow is the process associated with __________ and back up of data in a warehouse.
packaging
archiving
extraction
loading
Question 2 Explanation:
→ Downflow is the process associated with archiving and backup of data in a warehouse.
→ Archived files (or logs) are crucial for recovery when no data can be lost, because they constitute a record of changes to the database.
Advantages of using archiving:
1. The database can be completely recovered from both instance and media failure.
2. The user can perform backups while the database is open and available for use.
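→ A minimal Python sketch of the archive-log idea (all names are illustrative, not the API of any particular DBMS): every change is appended to an archive before it is applied, so the state can be rebuilt by replaying the archive after a failure:

    archive_log = []   # durable record of changes
    database = {}      # in-memory "database" state

    def apply_change(key, value):
        archive_log.append((key, value))   # archive the change first ...
        database[key] = value              # ... then apply it

    def recover():
        # After a media failure, rebuild the state by replaying the archive.
        recovered = {}
        for key, value in archive_log:
            recovered[key] = value
        return recovered

    apply_change("balance", 100)
    apply_change("balance", 250)
    database.clear()     # simulate losing the database
    print(recover())     # {'balance': 250}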
Question 3
_______ is a subject-oriented, integrated, time-variant, nonvolatile collection of data in support of management decisions.
Data mining
Web mining
Data warehouse
Database Management System
Question 3 Explanation:
→ A data warehouse is a subject-oriented, integrated, time-variant, nonvolatile collection of data in support of management decisions.
→ A data warehouse (DW or DWH), also known as an enterprise data warehouse (EDW), is a system used for reporting and data analysis, and is considered a core component of business intelligence.
→ DWs are central repositories of integrated data from one or more disparate sources. They store current and historical data in a single place, where they are used to create analytical reports for workers throughout the enterprise.
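→ A minimal Python sketch of the time-variant, nonvolatile idea (the table and column names are made up): the warehouse appends dated snapshots rather than updating rows in place, so historical values stay available for analysis:

    from datetime import date

    # Nonvolatile, time-variant "fact table": rows are appended with a
    # snapshot date and never updated or deleted.
    sales_facts = []

    def load_snapshot(snapshot_date, region, total):
        sales_facts.append(
            {"snapshot_date": snapshot_date, "region": region, "total": total}
        )

    load_snapshot(date(2023, 1, 1), "north", 1000)
    load_snapshot(date(2023, 2, 1), "north", 1200)   # new row, no overwrite

    # Historical analysis: every past value is still queryable.
    print([r for r in sales_facts if r["region"] == "north"])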
Question 4
Consider the following two statements:
(A) Data scrubbing is a process to upgrade the quality of data before it is moved into a data warehouse.
(B) Data scrubbing is a process of rejecting data from a data warehouse to create indexes.
Which one of the following options is correct?
(A) is true, (B) is false.
(A) is false, (B) is true.
Both (A) and (B) are false.
Both (A) and (B) are true.
Question 4 Explanation:
→ In the data-warehousing sense, data scrubbing (also called data cleansing) upgrades the quality of data before it is moved into the data warehouse, so statement (A) is true and statement (B) is false.
→ More generally, data scrubbing is an error-correction technique that uses a background task to periodically inspect main memory or storage for errors, then corrects detected errors using redundant data in the form of checksums or copies of the data.
→ Data scrubbing reduces the likelihood that single correctable errors will accumulate, reducing the risk of uncorrectable errors.
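→ A minimal Python sketch of data scrubbing in the sense of statement (A) (the field names and cleaning rules are illustrative): incoming records are trimmed, normalized, and deduplicated before being loaded into the warehouse:

    # Hypothetical staging records with quality problems.
    incoming = [
        {"name": " Alice ", "city": "NY"},
        {"name": "Alice", "city": "NY"},   # duplicate after trimming
        {"name": "Bob", "city": ""},       # empty value to normalize
    ]

    def scrub(record):
        # Trim whitespace and normalize empty strings to None.
        cleaned = {k: v.strip() for k, v in record.items()}
        return {k: (v if v else None) for k, v in cleaned.items()}

    seen, loadable = set(), []
    for record in map(scrub, incoming):
        key = tuple(sorted(record.items()))   # keys are unique, so sorting is safe
        if key not in seen:
            seen.add(key)
            loadable.append(record)

    print(loadable)
    # [{'name': 'Alice', 'city': 'NY'}, {'name': 'Bob', 'city': None}]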