






Relational Database Management System (RDBMS)

The relational model is the basis for a relational database management system (RDBMS). Essentially, an RDBMS moves data into a database, stores the data, and retrieves it so that it can be manipulated by applications. An RDBMS distinguishes between the following types of operations:

■ Logical operations: In this case, an application specifies what content is required. For example, an application requests an employee name or adds an employee record to a table.

■ Physical operations: In this case, the RDBMS determines how things should be done and carries out the operation. For example, after an application queries a table, the database may use an index to find the requested rows, read the data into memory, and perform many other steps before returning a result to the user. The RDBMS stores and retrieves data so that physical operations are transparent to database applications.

Oracle Database is an RDBMS. An RDBMS that implements object-oriented features such as user-defined types, inheritance, and polymorphism is called an object-relational database management system (ORDBMS). Oracle Database has extended the relational model to an object-relational model, making it possible to store complex business models in a relational database.
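As a quick illustration of this split, the sketch below states only what data is wanted (a logical operation); how Oracle locates the rows is decided by the database and can be inspected, but never has to be written by the application. The employees table and its columns are assumptions used only for illustration.

    -- Logical operation: the application only says WHAT it wants
    -- (hypothetical employees table used for illustration).
    SELECT last_name, salary
    FROM   employees
    WHERE  department_id = 50;

    -- Physical operations (index choice, block reads) are left to the RDBMS;
    -- the optimizer's plan can be viewed without changing the query itself.
    EXPLAIN PLAN FOR
    SELECT last_name, salary
    FROM   employees
    WHERE  department_id = 50;

    SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);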
The Oracle relational database management system (RDBMS) provides an open, comprehensive, integrated approach to information management.
The database has physical structures and logical structures. Because the physical and logical structures are separate, the physical storage of data can be managed without affecting access to logical storage structures. For example, renaming a physical database file does not rename the tables whose data is stored in this file.
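For instance, a data file can be renamed without touching any table definition: the file is moved at the operating-system level and the database is then pointed at the new name. A minimal sketch, assuming a users tablespace, illustrative file paths, and that the tablespace is taken offline first:

    -- Take the tablespace offline, move/rename the file at the OS level,
    -- then point the database at the new name (paths are illustrative).
    ALTER TABLESPACE users OFFLINE NORMAL;

    ALTER DATABASE RENAME FILE '/u01/oradata/orcl/users01.dbf'
                            TO '/u02/oradata/orcl/users01.dbf';

    ALTER TABLESPACE users ONLINE;
    -- Tables stored in this file keep their names; only the physical file moved.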
Physical Storage Structures

The physical database structures are the files that store the data. When you execute the SQL command CREATE DATABASE, the following files are created:

■ Data files: Every Oracle database has one or more physical data files, which contain all the database data. The data of logical database structures, such as tables and indexes, is physically stored in the data files.

■ Control files: Every Oracle database has a control file. A control file contains metadata specifying the physical structure of the database, including the database name and the names and locations of the database files.

■ Online redo log files: Every Oracle database has an online redo log, which is a set of two or more online redo log files. An online redo log is made up of redo entries (also called redo records), which record all changes made to data.

Many other files are important for the functioning of an Oracle database server. These files include parameter files and diagnostic files. Backup files and archived redo log files are offline files important for backup and recovery.
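The files created for a database can be listed from the data dictionary; a minimal sketch using the standard dynamic performance views:

    -- Data files, control files, and online redo log members of the database
    SELECT name   FROM V$DATAFILE;
    SELECT name   FROM V$CONTROLFILE;
    SELECT member FROM V$LOGFILE;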
Logical Storage Structures

The following logical storage structures enable Oracle Database to have fine-grained control of disk space use:

■ Data blocks: At the finest level of granularity, Oracle Database data is stored in data blocks. One data block corresponds to a specific number of bytes on disk.

■ Extents: An extent is a specific number of logically contiguous data blocks, obtained in a single allocation, used to store a specific type of information.

■ Segments: A segment is a set of extents allocated for a user object (for example, a table or index), undo data, or temporary data.

■ Tablespaces: A database is divided into logical storage units called tablespaces. A tablespace is the logical container for a segment. Each tablespace contains at least one data file.
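To see the hierarchy in practice, the sketch below creates a tablespace and then reports how a table's segment and its extents are laid out inside it. All names, paths, and sizes are assumptions for illustration.

    -- Tablespace (logical unit) backed by one data file (physical unit);
    -- path and size are illustrative.
    CREATE TABLESPACE app_data
      DATAFILE '/u01/oradata/orcl/app_data01.dbf' SIZE 100M;

    -- A table whose segment lives in that tablespace (hypothetical source table)
    CREATE TABLE emp_copy TABLESPACE app_data
      AS SELECT * FROM employees;

    -- Segment and its extents as recorded in the data dictionary
    SELECT segment_name, extent_id, blocks, bytes
    FROM   dba_extents
    WHERE  segment_name = 'EMP_COPY';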
Oracle Database Processes

A process is a mechanism in an operating system that can run a series of steps. Some operating systems use the terms job, task, or thread. For the purpose of this discussion, a thread is equivalent to a process. An Oracle database instance has the following types of processes:

■ Client processes: These processes are created and maintained to run the software code of an application program or an Oracle tool.

■ Server processes: These processes communicate with client processes and interact with the database on their behalf to carry out requests.

■ Background processes: These processes consolidate functions that would otherwise be handled by multiple programs running for each client process; examples discussed later in these notes include LGWR, PMON, and the ARCn processes.
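The processes attached to a running instance can be listed from the instance itself; a small sketch (the exact columns of V$PROCESS vary a little between versions):

    -- Operating-system processes known to the instance (client, server, background)
    SELECT pid, spid, program
    FROM   V$PROCESS;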
The above figure shows the two-tier architecture. Here the communication is one to one. Let us see the concept of two-tier architecture with a real-time application: suppose we need to save employee details in a database. The two tiers of the two-tier architecture are:

Database (data tier)
Client application (client tier)

In the client application, the client writes the program for saving the record in SQL Server, thereby saving the data in the database.

Advantages: Understanding and maintenance are easier.
Disadvantages: Performance is reduced when there are more users.
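In a two-tier setup the client application issues its statements directly against the database tier. A minimal sketch of the client-side statement that saves the employee record; the table and column names are assumptions for illustration:

    -- Client application writes straight to the database tier.
    INSERT INTO employee (emp_id, emp_name, salary)
    VALUES (101, 'A. Sharma', 45000);

    COMMIT;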
2. Three-tier architecture comes with a client tier, a middle tier, and a database tier. The client tier handles communication between the user and the system. The middle tier communicates with the other two tiers, and the database tier manages the data itself. In 3-tier, the client (generally a browser in Web communication) is known as a thin client, because it does nothing more than take input from the user, forward it to the middle tier, and display the response to the user. For greater database security, a persistence manager can be placed on the database server. The persistence manager controls the flow of data between the application server and the database; that is, the application server cannot communicate directly with the database (you may then call it a 4-tier architecture).
a) Advantages

It proves the possibility of server integration, where one server can communicate with other servers.
More security for the database, as it is separated from the other two tiers.
Permits scalability, as each layer can run on a different system.
More flexible, as an additional tier can hold integration logic.
Shifting the roles and relationships among business houses is easy, for example when suppliers or retailers are frequently added or removed.
It meets the requirements of large-scale Internet and intranet applications.
It is easier to deploy on a network and to manage later.
A tier can be modified without affecting the other tiers.
Communication is faster than in 2-tier.
Performance is higher than in 2-tier.

b) Disadvantages

There may be a loss of performance in Web communication.
There are issues of integration with other three-tier systems.
many CPU architectures. When the nature of the resource requires mutual exclusion between threads, the database runtime claims resources only for a short time. Thus the database locks use a low-overhead spinlock-based mechanism that protects the resource.

Efficient memory use is often a key to application performance, especially in the kernel, where the nonpaged memory pool is limited and frequent paging could increase kernel latencies. To address this, in-memory databases often use a number of custom allocation algorithms to take advantage of problem-specific allocation patterns, such as infrastructure for the data layout, internal database runtime heap management, and so on.

The kernel-mode stack is a limited storage area that is often used for information passed between functions, as well as for local variable storage. Running out of stack space causes the OS to crash. Therefore, the database runtime integrated with the kernel module and other drivers must watch stack usage: it must never allocate large aggregate structures on the stack, and it must avoid deeply nested or recursive calls. If recursion must be used, the number of recursive calls must be strictly limited.

Not all of the standard libraries (C and especially C++) are present in kernel mode. Moreover, versions of standard libraries for use in kernel mode are not necessarily the same as those in user mode, as they are written to conform to kernel-mode requirements. Kernel-mode implementations of standard libraries usually have limited functionality and are constrained by other properties of kernel mode. (The eXtremeDB-KM database runtime does not use the C runtime. For instance, instead of relying on the C runtime for memory management, the database replaces those functions with custom allocators.)
The database kernel module implements kernel-mode data storage and provides the API to manipulate the data. The module is integrated with the eXtremeDB database runtime, which is responsible for providing "standard" database functionality such as transaction control, data access coordination and locking, lookup algorithms, and the like. Example 1 presents the data layout using eXtremeDB Data Definition Language syntax.
System Global Area (SGA)

The System Global Area (SGA) is a group of shared memory structures, known as SGA components, that contain data and control information for one Oracle Database instance. The SGA is shared by all server and background processes. Examples of data stored in the SGA include cached data blocks and shared SQL areas. The main SGA components are the following:

■ Database buffer cache: Before data stored in the database can be queried or modified, it must be read from disk and stored in the buffer cache. All user processes connected to the database share access to the buffer cache. For optimal performance, the buffer cache should be large enough to avoid frequent disk I/O operations.

■ Shared pool: The shared pool caches information that is shared among users: SQL statements that can be reused; information from the data dictionary such as user account data, table and index descriptions, and privileges; and stored procedures, which are executable code stored in the database.

■ Redo log buffer: This buffer improves performance by caching redo information until it can be written to the physical online redo log files stored on disk. Redo information and online redo log files are discussed in "About Online Redo Log Files".

■ Large pool: This optional area is used to buffer large I/O requests for various server processes.

■ Java pool: The Java pool is an area of memory that is used for all session-specific Java code and data within the Java Virtual Machine (JVM).

■ Streams pool: The Streams pool is an area of memory that is used by the Oracle Streams feature. For more information about Oracle Streams, see Oracle Streams Concepts and Administration.

■ Result cache: The result cache buffers query results. If a query is run for which the results are stored in the result cache, then the database returns the query results from the result cache instead of rerunning the query. This SGA component speeds the execution of frequently run queries.
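The current sizes of these components can be inspected in a running instance; a small sketch using the standard dynamic views (the values returned differ per instance):

    -- Overall SGA component sizes
    SELECT name, bytes FROM V$SGAINFO;

    -- Finer-grained breakdown of one pool
    SELECT pool, name, bytes
    FROM   V$SGASTAT
    WHERE  pool = 'shared pool'
    ORDER  BY bytes DESC;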
Program Global Area (PGA) The PGA is memory specific to an operating process or thread that is not shared by other processes or threads on the system. Because the PGA is process-specific, it is never allocated in the SGA.
The PGA is a memory heap that contains session-dependent variables required by a dedicated or shared server process. The server process allocates memory structures that it requires in the PGA.
An analogy for a PGA is a temporary countertop workspace used by a file clerk. In this analogy, the file clerk is the server process doing work on behalf of the customer (client process). The clerk clears a section of the countertop, uses the workspace to store details about the customer request and to sort the folders requested by the customer, and then gives up the space when the work is done.
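Aggregate PGA usage for the instance can be checked from V$PGASTAT; a minimal sketch using two of the standard statistic names:

    -- Aggregate PGA target versus what is currently allocated
    SELECT name, value, unit
    FROM   V$PGASTAT
    WHERE  name IN ('aggregate PGA target parameter', 'total PGA allocated');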
The log writer process (LGWR) writes the redo log buffer to the online redo log files on disk. LGWR writes one contiguous portion of the buffer to disk. LGWR writes:

A commit record when a user process commits a transaction
Redo log buffers:
  o Every three seconds
  o When the redo log buffer is one-third full
  o When a DBWn process writes modified buffers to disk, if necessary
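The effect of these writes shows up in the instance statistics; a small sketch using standard V$SYSSTAT statistic names:

    -- How much redo has been generated and how often LGWR has written it
    SELECT name, value
    FROM   V$SYSSTAT
    WHERE  name IN ('redo size', 'redo writes', 'user commits');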
The process monitor (PMON) performs process recovery when a user process fails. PMON is responsible for cleaning up the database buffer cache and freeing resources that the user process was using. For example, it resets the status of the active transaction table, releases locks, and removes the process ID from the list of active processes. PMON periodically checks the status of dispatcher and server processes, and restarts any that have stopped running (but not any that Oracle has terminated intentionally). PMON also registers information about the instance and dispatcher processes with the network listener. Like SMON, PMON checks regularly to see whether it is needed and can be called if another process detects the need for it.
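The background processes currently running in an instance, PMON among them, can be listed with the classic query against V$BGPROCESS:

    -- Background processes currently started in this instance
    SELECT name, description
    FROM   V$BGPROCESS
    WHERE  paddr <> '00'
    ORDER  BY name;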
The archiver process (ARCn) copies redo log files to a designated storage device after a log switch has occurred. ARCn processes are present only when the database is in ARCHIVELOG mode and automatic archiving is enabled.

An Oracle instance can have up to 10 ARCn processes (ARC0 to ARC9). The LGWR process starts a new ARCn process whenever the current number of ARCn processes is insufficient to handle the workload. The alert log keeps a record of when LGWR starts a new ARCn process.

If you anticipate a heavy workload for archiving, such as during bulk loading of data, you can specify multiple archiver processes with the initialization parameter LOG_ARCHIVE_MAX_PROCESSES. The ALTER SYSTEM statement can change the value of this parameter dynamically to increase or decrease the number of ARCn processes. However, you do not need to change this parameter from its default value of 1, because the system determines how many ARCn processes are needed, and LGWR automatically starts up more ARCn processes when the database workload requires more.
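A quick way to confirm the archiving mode and, if a heavy archiving workload is expected, to raise the number of archiver processes dynamically; the value 4 below is only an example:

    -- Is the database in ARCHIVELOG mode?
    SELECT log_mode FROM V$DATABASE;

    -- Allow up to four ARCn processes (dynamic change, example value only)
    ALTER SYSTEM SET LOG_ARCHIVE_MAX_PROCESSES = 4;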
Every SQL Server database has at least two operating system files: a data file and a log file. Data files contain data and objects such as tables, indexes, stored procedures, and views. Log files contain
the information that is required to recover all transactions in the database. Data files can be grouped together in file groups for allocation and administration purposes. SQL Server 2005 databases have three types of files:
Primary data files : The primary data file is the starting point of the database and points to the other files in the database. Every database has one primary data file. The recommended file name extension for primary data files is .mdf.
Secondary data files : Secondary data files make up all the data files, other than the primary data file. Some databases may not have any secondary data files, while others have several secondary data files. The recommended file name extension for secondary data files is .ndf.
Log files : Log files hold all the log information that is used to recover the database. There must be at least one log file for each database, although there can be more than one. The recommended file name extension for log files is .ldf.
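A sketch of how the three file types appear when a database is created; the database name, logical names, paths, and sizes are assumptions for illustration:

    -- Primary (.mdf), secondary (.ndf), and log (.ldf) files; values illustrative
    CREATE DATABASE Sales
    ON PRIMARY
        (NAME = Sales_data,  FILENAME = 'C:\SQLData\Sales.mdf',  SIZE = 100MB),
        (NAME = Sales_data2, FILENAME = 'C:\SQLData\Sales2.ndf', SIZE = 100MB)
    LOG ON
        (NAME = Sales_log,   FILENAME = 'C:\SQLData\Sales_log.ldf', SIZE = 50MB);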
Oracle Database has a control file, which is a small binary file that records the physical structure of the database. The control file includes:

The database name
Names and locations of associated datafiles and redo log files
The timestamp of the database creation
The current log sequence number
Checkpoint information

The control file must be available for writing by the Oracle Database server whenever the database is open. Without the control file, the database cannot be mounted and recovery is difficult. The control file of an Oracle Database is created at the same time as the database. By default, at least one copy of the control file is created during database creation. On some operating systems the default is to create multiple copies. You should create two or more copies of the control file during database creation. You can also create control files later, if you lose control files or want to change particular settings in the control files.
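Where the control file copies live can be checked from the instance; a small sketch (the paths returned depend on the installation):

    -- Control file copies currently in use
    SELECT name FROM V$CONTROLFILE;

    -- The same list as recorded in the CONTROL_FILES initialization parameter
    SELECT value FROM V$PARAMETER WHERE name = 'control_files';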
Redo logs are transaction journals. Each transaction is recorded in the redo logs. Redo logs are used in a serial fashion with each transaction queuing up in the redo log buffers and being written one at a time into the redo logs. Redo logs as a general rule should switch about every thirty minutes. However, you may need to adjust the time up or down depending on the importance of your data.
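Log switch frequency can be checked, and the logs made larger if switches come much more often than the rule of thumb above; a sketch in which the group number, file path, and size are illustrative values only:

    -- How often log switches have been happening
    SELECT sequence#, first_time
    FROM   V$LOG_HISTORY
    ORDER  BY first_time DESC;

    -- Add a larger redo log group if switches are too frequent (values illustrative)
    ALTER DATABASE ADD LOGFILE GROUP 4
      ('/u01/oradata/orcl/redo04.log') SIZE 200M;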