Data Pump takes the old export and import utilities one step further: you have total control over the running job (you can stop it, pause it, monitor it, and restart it). Data Pump is a server-side technology that can transfer large amounts of data very quickly, using parallel streams to achieve maximum throughput; it can be 15-45% faster than the older import/export utilities. The advantages of using Data Pump are:

  • ability to estimate job times
  • ability to restart failed jobs (see the attach example after this list)
  • perform fine-grained object selection
  • monitor running jobs
  • directly load a database from a remote instance via the network
  • remapping capabilities
  • improved performance using parallel execution
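As a quick illustration of the restart capability, you can re-attach to a stopped or failed job by its job name and control it interactively; the job name below is just an example (Data Pump generates one such as SYS_EXPORT_FULL_01 unless you set the job_name parameter yourself):

expdp vallep/password attach=SYS_EXPORT_FULL_01
Export> status
Export> start_job
Export> continue_client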

A couple of notes: you can only export to disk, not to a tape device, and the import will only work with Oracle versions 10.1 or greater. Also remember that expdp and impdp are command-line tools and are run from the operating system.

Data Pump Uses

You can use Data Pump for the following:

  • migrating databases
  • copying databases
  • transferring Oracle databases between different operating systems
  • backing up important tables before you change them
  • moving database objects from one tablespace to another
  • transporting tablespaces between databases
  • reorganizing fragmented table data
  • extracting the DDL for tables and other objects such as stored procedures and packages

Data Pump components

Data Pump technology consists of three major components (a short PL/SQL sketch follows the list):

  • dbms_datapump – the main engine for driving data and metadata loading and unloading
  • dbms_metadata – used to extract the appropriate metadata (DDL)
  • command-line clients – expdp and impdp, the Data Pump equivalents of the old exp and imp utilities
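To give a feel for how these pieces fit together, the sketch below drives a schema-mode export directly through dbms_datapump. The directory object name DATAPUMP, the schema TESTUSER, the job name and the file names are assumptions for illustration only:

DECLARE
  h          NUMBER;         -- job handle returned by dbms_datapump.open
  job_state  VARCHAR2(30);   -- final state reported by wait_for_job
BEGIN
  -- open a schema-mode export job
  h := DBMS_DATAPUMP.OPEN(operation => 'EXPORT',
                          job_mode  => 'SCHEMA',
                          job_name  => 'TESTUSER_EXP');

  -- add the dump file and the log file in an existing directory object
  DBMS_DATAPUMP.ADD_FILE(handle => h, filename => 'testuser.dmp', directory => 'DATAPUMP');
  DBMS_DATAPUMP.ADD_FILE(handle => h, filename => 'testuser.log', directory => 'DATAPUMP',
                         filetype => DBMS_DATAPUMP.KU$_FILE_TYPE_LOG_FILE);

  -- restrict the job to a single schema
  DBMS_DATAPUMP.METADATA_FILTER(handle => h, name => 'SCHEMA_EXPR', value => 'IN (''TESTUSER'')');

  -- start the job and wait for it to finish
  DBMS_DATAPUMP.START_JOB(h);
  DBMS_DATAPUMP.WAIT_FOR_JOB(handle => h, job_state => job_state);
  DBMS_OUTPUT.PUT_LINE('job finished with state: ' || job_state);
END;
/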

Data Access methods

Data Pump has two methods for loading data: the direct path and the external table path. As a DBA you have no control over which method Data Pump uses; normally, simple structures such as heap tables without triggers will use the direct path, while more complex tables will use the external table path. Oracle will always try to use the direct-path method where possible.

Direct Path  

Bypasses the database buffer cache and writes beyond the high water mark, adjusting the high water mark when finished. No undo is generated and redo can be switched off as well; there is minimal impact on other users as it does not use the SGA. Triggers on the tables must be disabled before use.

External Path  

Uses the database buffer cache; the export acts as a SELECT statement into a dump file, and on import the rows are reconstructed into INSERT statements, so the whole process is like a normal SELECT/INSERT job. Both undo and redo are generated, and a normal COMMIT is used, just as it would be for any DML statement.

Oracle will use the external table path if any of the following are in use:

  • clustered tables
  • active triggers on the table
  • a single partition in a table with a global index
  • referential integrity constraints
  • domain indexes on LOB columns
  • tables with fine-grained access control enabled in insert mode
  • tables with BFILE or opaque type columns

Data Pump files

You will use three types of files when using Data Pump, plus a master table; all the files are created on the server, in a location defined by a directory object (see the example after this list).

  • dump files – holds the data and metadata
  • log files – the resulting output from the data pump command
  • sql files – contain the DDL statements describing the objects included in the job, but no actual data
  • master data pump table – when a Data Pump job runs it creates a table within the schema of the user running the job; this table is used to control the job and is removed when the job finishes
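The server-side location comes from a directory object, which must exist and be readable and writable by the user running the job. A hedged example, where the path and the user vallep are just illustrations matching the commands further down:

create or replace directory datapump as '/u01/app/oracle/dpdump';
grant read, write on directory datapump to vallep;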

Data Pump privileges

In order to use the advanced features of Data Pump you need the exp_full_database and imp_full_database roles.
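These are database roles, so a sketch of granting them (again using the example user vallep from the commands below) would be:

grant exp_full_database, imp_full_database to vallep;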

How Data Pump works

The Master Control Process (MCP) has the process name DMnn. Only one master process runs per job, and it controls the whole Data Pump job, performing the following:

  • creates jobs and controls them
  • creates and manages the worker processes
  • monitors the jobs and logs the progress
  • maintains the job state and restart information in the master table (created in the schema of the user running the job)
  • manages the necessary files, including the dump file set

The master process creates a master table which contains the job details (state, restart information); this table is created in the schema of the user running the Data Pump job. Once the job has finished, the table contents are dumped into the dump file set and the table is deleted. When you import the dump file, the table is re-created and read to determine the correct sequence in which the various database objects should be imported.

The worker processes are named DWnn and are the processes that actually perform the work; you can have a number of worker processes running on the same job (parallelism). The worker processes update the master table with the status of the job.

The shadow process is created when the client logs in to the Oracle server; it services Data Pump API requests and creates the job, which consists of the master table and the master process.

The client processes are the expdp and impdp commands.
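While a job is running you can see it, along with its state and degree of parallelism, through the data dictionary; a simple monitoring sketch (run as a privileged user) is:

select owner_name, job_name, operation, job_mode, state, degree
from   dba_datapump_jobs;

select * from dba_datapump_sessions;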

Running Data Pump

You can either run Data Pump from the command line, specifying the options directly, or use a parameter file. There are many options, so it is best to check the Oracle documentation; a few examples are given below.
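For instance, the schema export shown below could equally be driven from a parameter file; the file name and contents here are just an illustration:

expdp vallep/password parfile=exp_testuser.par

where exp_testuser.par contains

directory=datapump
schemas=testuser
dumpfile=testuser.dmp
logfile=testuser.log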

Exporting

database ->

expdp vallep/password directory=datapump full=y dumpfile=data.dmp filesize=2G parallel=2 logfile=full.log

Note: increase the parallel option based on the number of CPUs you have

schema ->    

expdp sys/password schemas=testuser dumpfile=data.dmp logfile=schema.log

table  ->    

expdp vallep/password tables=accounts,employees dumpfile=data.dmp content=metadata_only

tablespace  ->   

expdp vallep/password tablespaces=users dumpfile=data.dmp logfile=tablespace.log
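To estimate the size of a job before running it (one of the advantages listed at the top), estimate_only can be used; a hedged example against the assumed testuser schema:

expdp vallep/password directory=datapump schemas=testuser estimate_only=y estimate=blocks nologfile=y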

Importing

database  ->

impdp system/password full=y dumpfile=data.dmp nologfile=y

schema change  ->

impdp system/password schemas='HR' remap_schema='HR:HR_TEST' content=data_only
impdp system/passwd remap_schema='TEST:TEST3' tables=test logfile=… dumpfile=… directory=…
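To pull just the DDL out of a dump file (the sql files mentioned earlier) without loading any data, impdp can write a SQL file instead; a sketch using the example dump file from above:

impdp system/password directory=datapump dumpfile=data.dmp sqlfile=ddl.sql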