
What to do after an accidental operation? Try this lifesaver: LogMiner

bisal
Published on 2021-09-06 07:51:09

LogMiner can analyze both online redo logs and offline/archived log files. It consists of a set of PL/SQL packages plus some dynamic views, and it is completely free.

Following Lao Yang's 《Logminer:oracle人为误操作之恢复神器》, I tried this once on 11g. Note that on 12c and above the exact steps have likely changed; that is worth investigating when the chance arises.

If the two packages LogMiner depends on, dbms_logmnr and dbms_logmnr_d, are not installed, run the two scripts dbmslm.sql and dbmslmd.sql under @?/rdbms/admin.
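Before running the scripts, a quick query against dba_objects tells you whether the packages are already there (a minimal sketch of my own; run as a privileged user such as SYS):

```sql
-- Hypothetical pre-check: are the LogMiner packages already installed?
select owner, object_name, object_type, status
  from dba_objects
 where object_name in ('DBMS_LOGMNR', 'DBMS_LOGMNR_D')
   and object_type in ('PACKAGE', 'PACKAGE BODY');
```

If both packages show up with STATUS = 'VALID', the scripts do not need to be run again.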

The content of dbmslmd.sql:

Rem
Rem $Header: dbmslmd.sql 13-feb-2006.15:15:25 ajadams Exp $
Rem
Rem dbmslmd.sql
Rem
Rem Copyright (c) 1998, 2006, Oracle. All rights reserved.  
Rem
Rem    NAME
Rem      dbmslmd.sql - DBMS Logminer Dictionary package specification 
Rem      for DBMS_LOGMNR_D 
Rem
Rem    DESCRIPTION
Rem      This file contains the logminer package specification for DBMS_LOGMNR_D 
Rem
Rem    NOTES
Rem
Rem    MODIFIED   (MM/DD/YY)
Rem    ajadams     02/13/06 - create synonym 
Rem    abrown      09/13/05 - bug 3776830: unwind dictionary 
Rem    jnesheiw    02/17/05 - Bug 4028220 Relocated change history and logmnr
Rem                           metada creation to prvtlmd.sql
Rem    doshaugh    04/14/98 - Created
Rem
Rem
Rem  PUBLIC PROCEDURES
Rem
Rem     BUILD (FileName, FileLocation, Options)
Rem 
Rem     SET_TABLESPACE(NewTablespace);
Rem
Rem  PUBLIC CONSTANTS
Rem
Rem     STORE_IN_FLAT_FILE
Rem
Rem     STORE_IN_REDO_LOGS
Rem
Rem     MARK_SAFE_MINE_POINT
Rem     
Rem
Rem


-- --------------------------------------------------------------
--
CREATE or REPLACE PACKAGE dbms_logmnr_d AS
--
--    PACKAGE NAME
--      dbms_logmnr_d
--
--    DESCRIPTION
--      This package contains Logminer Dictionary related procedures.
--      "build" is used to gather the logminer dictionary.
--
--      "set_tablespace" is used to alter the default tablespace of
--      Logminer tables.
--
--      BUILD
--      The logminer dictionary can be gathered
--      into a flat file (Logminer V1 behavior) or it can be gathered
--      into the redo log stream.
--
--      When creating a Flat File dictionary the procedure queries the
--      dictionary tables of the current database and creates a text based
--      file containing their contents. Each table is represented by
--      "pseudo" SQL statements. A description of the columns in a 
--      table is created by a "CREATE_TABLE" line (one statement for
--      table). It contains the name, datatype and length for each 
--      column. A "INSERT_INTO" statement is created for each row in a 
--      selected table. It contains the values for each row. The file
--      is created in preparation of future analysis of databases
--      log files using the logminer tool.
--
--      When gathering the system dictionary into the logstream the procedure
--      queries the dictionary tables inserting the results into a special
--      set of Logminer Gather tables (SYS.LOGMNRG_*).  A side effect of
--      each query is that the resultant inserts cause redo to be generated.
--      Down stream processing can mine this redo to determine the contents
--      of this system's system dictionary at the time this procedure was
--      executed.
-- 
--      NOTE:  Database must be in "Archivelog Mode" and supplemental logging
--             must be enabled for this procedure to run
--
--      BUILD INPUTS
--      dictionary_filename - name of the dictionary file
--      dictionary_location - path to file directory
--      options - To explicitly indicate flat file or log stream destination.
-- 
--      BUILD EXAMPLE1
--      Creating a dictionary file as:
--                   /usr/ora/dict.ora
--      Complete syntax, typed all on one line:
--
--      SQL> execute dbms_logmnr_d.build('dict.ora',
--                                       '/usr/ora',
--                                       DBMS_LOGMNR_D.STORE_IN_FLAT_FILE);
--
--      BUILD EXAMPLE2
--      Creating a dictionary file as:
--                   /usr/ora/dict.ora
--      Logminer V1 syntax.
--
--      SQL> execute dbms_logmnr_d.build('dict.ora', '/usr/ora');
--
--      BUILD EXAMPLE3
--      Gathering a dictionary into the log stream
--      Complete syntax, typed all on one line:
--
--      SQL> execute dbms_logmnr_d.build('', '',
--                                          DBMS_LOGMNR_D.STORE_IN_REDO_LOGS);
--
--      BUILD NOTES
--      The dictionary gather should be done after all dictionary
--      changes to a database and prior to the creation of any log
--      files that are to be analyzed.
--
--
--      SET_TABLESPACE
--      By default all Logminer tables are created to use the SYSAUX
--      tablespace.  All users will find it desirable to alter Logminer
--      tables to employ an alternate tablespace.  Use this routine to
--      recreate all Logminer tables in an alternate tablespace.
--
--      SET_TABLESPACE INPUTS
--      new_tablespace         - a string naming a preexistant tablespace.
--


STORE_IN_FLAT_FILE CONSTANT INTEGER := 1;
STORE_IN_REDO_LOGS CONSTANT INTEGER := 2;
MARK_SAFE_MINE_POINT  CONSTANT INTEGER := 8;


PROCEDURE  build
                (dictionary_filename IN VARCHAR2 DEFAULT '',
                 dictionary_location IN VARCHAR2 DEFAULT '',
                 options IN NUMBER DEFAULT 0);


--
--
PROCEDURE set_tablespace( new_tablespace IN VARCHAR2 );
--
--
END dbms_logmnr_d; -- End Definition of package
/


CREATE OR REPLACE PUBLIC SYNONYM dbms_logmnr_d FOR sys.dbms_logmnr_d;

The content of dbmslm.sql:

create or replace PACKAGE dbms_logmnr IS


  --------------------
  -- OVERVIEW
  -- 
  --   This package contains the procedures used by LogMiner ad-hoc query
  --   interface that allows for redo log stream analysis. 
  --   There are three procedures and two functions available to the user:
  --   dbms_logmnr.add_logfile()    : to register logfiles to be analyzed
  --   dbms_logmnr.remove_logfile() : to remove logfiles from being analyzed
  --   dbms_logmnr.start_logmnr()   : to provide window of analysis and
  --                                  meta-data information
  --   dbms_logmnr.end_logmnr()     : to end the analysis session
  --   dbms_logmnr.column_present() : whether a particular column value
  --                                  is present in a redo record
  --   dbms_logmnr.mine_value()     : extract data value from a redo record
  --


  ---------------------------
  --  PROCEDURE INFORMATION:
  --  #1 dbms_logmnr.add_logfile(): 
  --     DESCRIPTION:
  --       Registers a redo log file with LogMiner. Multiple redo logs can be
  --       registered by calling the procedure repeatedly. The redo logs 
  --       do not need to be registered in any particular order.
  --       Both archived and online redo logs can be mined.  If a successful 
  --       call to the procedure is made a call to start_logmnr() must be 
  --       made before selecting from v$logmnr_contents.
  --
  --     CONSTANTS:
  --       dbms_logmnr.NEW:  Starts a new list. Any file registered prior to
  --         this call is discarded.
  --       dbms_logmnr.ADDFILE:  Adds the redo log to the existing
  --         list. LogMiner checks to make sure that the redo log is from
  --         the same database (DB_ID and RESETLOGS_SCN) incarnation as the
  --         ones previously added.
  --
  --     EXCEPTIONS:
  --       ORA: 1284   Redo log file specified can not be opened.
  --       ORA: 1285   Error reading the header of the redo log file
  --       ORA: 1286   Redo log file specified is from a database with a
  --                   different DB_ID
  --       ORA: 1287   Redo log file specified is from a database with
  --                   different incarnation
  --       ORA: 1289   Redo log file specified is a duplicate of a previously
  --                   specified redo log. LogMiner matches redo logs by the
  --                   log sequence number. Thus two redo logs with different
  --                   names but with the same log sequence# (for instance
  --                   the online counterpart of an archived redo log has 
  --                   a different name, but attempting to register it with
  --                   LogMiner after registering the archived counterpart 
  --                   will return this exception).
  --
  --  #2 dbms_logmnr.remove_logfile()
  --     DESCRIPTION:
  --       Unregisters a redo log file from LogMiner. Multiple redo logs can be
  --       unregistered by calling the procedure repeatedly. The redo logs 
  --       do not need to be unregistered in any particular order.  If a 
  --       successful call to the procedure is made a call to start_logmnr() 
  --       must be made before selecting from v$logmnr_contents.
  --
  --     EXCEPTIONS:
  --       ORA: 1290   Attempt was made to remove a redo log that has not been
  --                   registered with LogMiner
  --
  --  #3 dbms_logmnr.start_logmnr()
  --     DESCRIPTION:
  --       Loads the data dictionary used by LogMiner to translate internal
  --       schema object identifiers to names. The redo stream does not
  --       contain names of schema objects and columns. The data dictionary
  --       extract can be provided in three ways: 
  --         (i) use Oracle's online catalog. This is only valid when the 
  --         mining of redo logs is done in the same system that generated
  --         them.
  --         (ii) use data dictionary extract taken to a flat file. 
  --         (See description of dbms_logmnr_d.build())
  --         (iii) use data dictionary extracted in the redo stream. This
  --         option can ONLY be used when LogMiner is being run on an open
  --         database, and the source and the mining database instances are
  --         Oracle9i or higher.
  --       The user can also restrict the analysis inside an SCN range or a 
  --       time range. If both SCN range and time range are specified, only
  --       the SCN range is used. 
  --       The user needs to be mindful of the fact that use of time range
  --       can be imprecise.  If a start_time or start_scn is specified, it 
  --       must be contained in a redo log added by a previous call to 
  --       dbms_logmnr.add_logfile().  If a start_time and start_scn is not 
  --       specified, LogMiner will set it based on the earliest added redo 
  --       log.  If a end_time or end_scn is specified and it is beyond the 
  --       latest added redo log, LogMiner will overwrite the end_time 
  --       and end_scn with information from the latest added redo log.  When 
  --       the CONTINUOUS_MINE option is in use the semantics of 
  --       start and end time/scn ranges may be different.  
  --       See additional documentation below.
  --
  --       CONSTANTS (used in options parameter)
  --       dbms_logmnr.NO_DICT_RESET_ONSELECT:  (will be deprecated soon)
  --       dbms_logmnr.COMMITTED_DATA_ONLY: Groups DMLs belonging to the
  --         same transaction. Transactions are shown in their commit order.
  --         Internal redo records (those related to index operations, space
  --         management etc) are filtered out. So are rolled back
  --         transactions, rollback to savepoints and in-flight transactions.
  --       dbms_logmnr.SKIP_CORRUPTION: Usually LogMiner returns an error
  --         on encountering corrupt redo records. With this option set
  --         LogMiner will skip the corrupted redo records and continue
  --         mining. LogMiner can not handle a redo log that has a corrupt
  --         header.
  --       dbms_logmnr.DDL_DICT_TRACKING: LogMiner will apply the DDL
  --         statements encountered in the redo stream to its internal
  --         dictionary. Only available with Oracle9i redo logs and later.
  --         Mining database needs to be open.
  --       dbms_logmnr.DICT_FROM_ONLINE_CATALOG: Use the online data
  --         dictionary for SQL reconstruction. Mining database must be the
  --         same one that generated the redo logs. User should expect to 
  --         see "Dictionary Version Mismatch" in SQL_REDO if the current
  --         object version is newer than the ones encountered in the redo 
  --         stream.
  --       dbms_logmnr.DICT_FROM_REDO_LOGS: Use the dictionary extract logged
  --         in the redo stream.
  --       dbms_logmnr.NO_SQL_DELIMITER: By default, the SQL_REDO and SQL_UNDO
  --         statements are delimited with a ';'. However, this is
  --         inconvenient for applications that want to open a cursor and
  --         execute the reconstructed statements. With this option set,
  --         the SQL_DELIMITER is not placed at the end of reconstructed 
  --         statements.
  --       dbms_logmnr.NO_ROWID_IN_STMT: By default, the SQL_REDO and SQL_UNDO
  --         statements for UPDATE and DELETE operations contain a 'ROWID = '
  --         in the where clause.  However, this is inconvenient for 
  --         applications that want to re-execute the SQL statement.  With 
  --         this option set, 'ROWID' is not placed at the end of reconstructed
  --         statements.  Note: The onus is on the user to ensure that 
  --         supplemental logging was enabled in the source database at the 
  --         appropriate level and that no duplicate rows exist in tables of 
  --         interest.  LogMiner Adhoc Query does NOT make any guarantee 
  --         regarding uniqueness of logical row identifiers.
  --       dbms_logmnr.PRINT_PRETTY_SQL: Useful for basic report for
  --         analysis. With large number of columns the reconstructed
  --         SQL statements become visually busy. With this option set
  --         LogMiner formats the reconstructed SQL statements for ease
  --         of reading. The reconstructed SQL statements look as follow:
  --            insert into "SCOTT"."EMP" values
  --              EMPNO: 101010,
  --              ENAME: "Valued Employee",
  --              SAL:   101010,
  --              DEPT:  NULL;
  --             update "SCOTT"."EMP" 
  --              set
  --              "EMPNO" = 101011 and
  --              "SAL"   = 101011
  --              where
  --              "EMPNO" = 101010 and
  --              "SAL"   = 101010 and
  --              ROWID   = AABBCEXFGHA;
  --       dbms_logmnr.CONTINUOUS_MINE: Need to mine in the same instance
  --         that is generating the redo logs. The user needs to register
  --         only one archived log file. LogMiner will automatically add
  --         and mine subsequent archived redo logs, and eventually 
  --         mine online logfiles.
  --       dbms_logmnr.STRING_LITERALS_IN_STMT: By default, the SQL_REDO and 
  --         SQL_UNDO statements honor a database session's NLS setting 
  --         (e.g. NLS_DATE_FORMAT, NLS_NUMERIC_CHARACTERS, etc.) when 
  --         formating the reconstructed SQL statements.  With this option, 
  --         the reconstructed SQL statements will be formatted using ANSI/ISO 
  --         string literal formats.
  --
  --    EXCEPTIONS:
  --      ORA: 1281     startScn or endSCN parameter specified is not a valid 
  --                    SCN or endScn is greater then startScn
  --      ORA: 1282     startTime parameter is greater than year 2110 or 
  --                    endTime parameter is greater than year 2110 or 
  --                    startTime parameter is less then year 1988
  --      ORA: 1283     The value specified in the Options parameter is not a 
  --                    NUMBER or is not a known LogMiner Adhoc option
  --      ORA: 1284     The dictionary file specified in the DictFileName 
  --                    parameter has a full path length greater then 256 or 
  --                    cannot be opened
  --      ORA: 1285     DictFileName parameter is not a valid VARCHAR2
  --      ORA: 1291     Redo files are missing which are needed to satisfy 
  --                    the user's requested SCN/time range.
  --                    The user can specify ALLOW_MISSING_LOGS option.
  --                    Missing logs are not allowed under any circumstance 
  --                    when DDL tracking is in use
  --      ORA: 1292     No log file has been registered with LogMiner
  --      ORA: 1293     Mounted database required for options specified 
  --                    (CONTINUOUS_MINE)
  --      ORA: 1294     Error while processing the data dictionary extract
  --      ORA: 1295     DB_ID of the data dictionary does not match that of
  --                    the redo logs
  --      ORA: 1296     Character set specified in the data dictionary does
  --                    not match (and is incompatible with) that of the 
  --                    mining database
  --      ORA: 1297     Redo version mismatch between the dictionary and
  --                    the registered redo logs
  --      ORA: 1298     More than one dictionary source was specified or 
  --                    DDL_DICT_TRACKING was requested with 
  --                    DICT_FROM_ONLINE_CATALOG
  --      ORA: 1299     Dictionary is from a different database incarnation
  --      ORA: 1300     Writable database required for options specified 
  --                    (DDL_DICT_TRACKING, DICT_FROM_REDO_LOGS, 
  --                     DICT_FROM_ONLINE_CATALOG)
  --      ORA: 1371     A logfile containing the dictionary dump to redo logs 
  --                    is missing
  --      ORA: 1286     Options specified require start time or start SCN
  --                 
  --  #4 dbms_logmnr.end_logmnr()
  --     DESCRIPTION:
  --       Ends the LogMiner session. Releases all PGA memory allocated 
  --       to stage internal data structures etc.
  --
  --     EXCEPTIONS:
  --       ORA: 1307    No LogMiner session is currently active. 
  --                    Attempt to end_logmnr() without calling
  --                    add_logfile() or start_logmnr()
  --
  --  #5 dbms_logmnr.mine_value()
  --     DESCRIPTION: 
  --       This facilitates query by data value. For instance, the user
  --       can formulate a query that says "Show me all updates to
  --       SCOTT.EMP where the SAL column is updated to twice its
  --       original value"
  --       select sql_redo from v$logmnr_contents where
  --           operation = 'UPDATE' and 
  --           owner_name = 'SCOTT' and seg_name = 'EMP' and
  --         dbms_logmnr.mine_value(redo_value, 'SCOTT.EMP.SAL') > 
  --         2* dbms_logmnr.mine_value(undo_value, 'SCOTT.EMP.SAL');
  --      The function returns NULL if the column does not exist in
  --      the redo record or if the column value is actually null.
  --      To decipher between the two different null possibilities
  --      use dbms_logmnr.column_present() function.
  --      
  --      PARAMETERS:
  --        sql_redo_undo:  which column in v$logmnr_contents to 
  --        extract data value from
  --        column_name:    fully qualified column name of the 
  --        column that needs to be extracted
  --
  --      EXCEPTIONS:
  --      ORA 1302:     Specified table or column does not exist
  --       
  --  #6 dbms_logmnr.column_present()
  --     DESCRIPTION: 
  --       Can be used to decipher null returns from mine_value function
  --       The query described above can be rewritten to filter out
  --       redo records that do not contain update to the 'SAL' 
  --       columns
  --         select sql_redo from v$logmnr_contents where
  --           operation = 'UPDATE' and 
  --           owner_name = 'SCOTT' and seg_name = 'EMP' and
  --           dbms_logmnr.mine_value(redo_value, 'SCOTT.EMP.SAL') > 
  --           2* dbms_logmnr.mine_value(undo_value, 'SCOTT.EMP.SAL') and
  --           dbms_logmnr.column_present(redo_value, 'SCOTT.EMP.SAL');
  -- 
  --      PARAMETERS:
  --        sql_redo_undo:  which column in v$logmnr_contents to 
  --        extract data value from
  --        column_name:    fully qualified column name of the 
  --        column that needs to be extracted
  --
  --      EXCEPTIONS:
  --      ORA 1302:     Specified table or column does not exist
  --       
  ---------------------------------
  
-----------------------------------
-- SUBTYPES and related CONSTANTS
--
 
--
-- Constants for add_archivelog options flag


NEW                       CONSTANT BINARY_INTEGER := 1;
REMOVEFILE                CONSTANT BINARY_INTEGER := 2;
ADDFILE                   CONSTANT BINARY_INTEGER := 3;


--
-- Constants for start_logmnr options flag
NO_DICT_RESET_ONSELECT    CONSTANT BINARY_INTEGER := 1;
COMMITTED_DATA_ONLY       CONSTANT BINARY_INTEGER := 2;
SKIP_CORRUPTION           CONSTANT BINARY_INTEGER := 4;
DDL_DICT_TRACKING         CONSTANT BINARY_INTEGER := 8;
DICT_FROM_ONLINE_CATALOG  CONSTANT BINARY_INTEGER := 16;
DICT_FROM_REDO_LOGS       CONSTANT BINARY_INTEGER := 32;
NO_SQL_DELIMITER          CONSTANT BINARY_INTEGER := 64;
PRINT_PRETTY_SQL          CONSTANT BINARY_INTEGER := 512;
CONTINUOUS_MINE           CONSTANT BINARY_INTEGER := 1024;
NO_ROWID_IN_STMT          CONSTANT BINARY_INTEGER := 2048;
STRING_LITERALS_IN_STMT   CONSTANT BINARY_INTEGER := 4096;
--
SUBTYPE Length            IS BINARY_INTEGER;
SUBTYPE ThreadId          IS BINARY_INTEGER;


--
-- Constants for STATUS column of v$logmnr_contents
-- NOTE: Make sure that new ones match the values defined
-- in the krvfsri struct in krv0.h
VALID_SQL                 CONSTANT BINARY_INTEGER := 0;
INVALID_SQL               CONSTANT BINARY_INTEGER := 2;
UNGUARANTEED_SQL          CONSTANT BINARY_INTEGER := 3;
CORRUPTED_BLK_IN_REDO     CONSTANT BINARY_INTEGER := 4;
ASSEMBLY_REQUIRED_SQL     CONSTANT BINARY_INTEGER := 5;
HOLE_IN_LOGSTREAM         CONSTANT BINARY_INTEGER := 1291;


-- Workaround for the lack of constrained subtypes


LogFileNameTemplate          VARCHAR2(256);  
SUBTYPE LogFileName          IS LogFileNameTemplate%TYPE;
LogFileDescTemplate          VARCHAR2(256);  
SUBTYPE LogFileDescription   IS LogFileDescTemplate%TYPE;




-------------
-- PROCEDURES 
--


---------------------------------------------------------------------------
---------------------------------------------------------------------------
-- Initialize LOGMINER
--
-- Supplies LOGMINER with the list of filenames and SCNs required
-- to initialize the tool.  Once this procedure completes, the server is ready
-- to process selects against the v$logmnr_contents fixed view.
--
---------------------------------------------------------------------------


PROCEDURE start_logmnr( 
     startScn           IN  NUMBER default 0 ,
     endScn             IN  NUMBER default 0,
     startTime          IN  DATE default '',
     endTime            IN  DATE default '',
     DictFileName       IN  VARCHAR2 default '',
     Options            IN  BINARY_INTEGER default 0 );


PROCEDURE add_logfile( 
     LogFileName        IN  VARCHAR2,
     Options            IN  BINARY_INTEGER default ADDFILE );


PROCEDURE end_logmnr;


FUNCTION column_present(
     sql_redo_undo      IN  NUMBER default 0,
     column_name        IN  VARCHAR2 default '') RETURN BINARY_INTEGER;


FUNCTION mine_value(
     sql_redo_undo      IN  NUMBER default 0,
     column_name        IN  VARCHAR2 default '') RETURN VARCHAR2;


PROCEDURE remove_logfile( 
     LogFileName        IN  VARCHAR2);


---------------------------------------------------------------------------


pragma TIMESTAMP('1998-05-05:11:25:00');


END;
/
grant execute on dbms_logmnr to execute_catalog_role;
create or replace public synonym dbms_logmnr for sys.dbms_logmnr;

Neither script contains much beyond comments: they create the packages, create public synonyms, and grant execute privileges.

Running them:

SQL> @?/rdbms/admin/dbmslmd.sql


Package created.


Synonym created.


SQL> @?/rdbms/admin/dbmslm.sql


Package created.


Grant succeeded.


Synonym created.

Create a dedicated tablespace for LogMiner:

SQL> create tablespace lm_tbs datafile '/opt/applog/oracle/oradata/DCM/lm_tbs01.dbf' size 500M autoextend on maxsize 2G;


Tablespace created.


SQL> exec dbms_logmnr_d.set_tablespace(new_tablespace=>'lm_tbs');


PL/SQL procedure successfully completed.

Create a directory for LogMiner:

SQL> create directory utlfile as '/opt/applog/oracle/oradata/DCM';


Directory created.


SQL> show parameter utl_


NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
create_stored_outlines               string
utl_file_dir                         string

The UTL_FILE_DIR parameter needs to be set to specify the location of the data dictionary file.

P.S. This parameter has been deprecated since 12c.

SQL> alter system set utl_file_dir='/opt/applog/oracle/oradata/DCM/' scope=spfile;


System altered.

UTL_FILE_DIR is a static parameter, so the instance must be restarted:

SQL> show parameter utl_


NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
create_stored_outlines               string
utl_file_dir                         string      /opt/applog/oracle/oradata/DCM

It is recommended to enable supplemental logging for LogMiner:

SQL> alter database add supplemental log data;


Database altered.
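To verify the setting afterwards, v$database exposes it (a sketch):

```sql
-- 'YES' means minimal supplemental logging is enabled
select supplemental_log_data_min from v$database;
```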

Create a dedicated LogMiner account and grant it privileges:

SQL> create user logminer identified by logminer;


User created.


SQL> grant connect, resource, dba to logminer;


Grant succeeded.

Log in as an ordinary user, drop a table, recreate a table with the same name, and insert some rows:

[oracle@vm-osvm77512-app admin]$ sqlplus bisal/bisal


SQL*Plus: Release 11.2.0.4.0 Production on Tue Dec 1 16:21:05 2020


Copyright (c) 1982, 2013, Oracle.  All rights reserved.


Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options


SQL> desc bisal
 Name                                      Null?    Type
 ----------------------------------------- -------- ----------------------------
 ID                                                 NUMBER
 NAME                                               VARCHAR2(1)


SQL> drop table bisal;


Table dropped.


SQL> create table bisal(id number primary key, a varchar2(1), b number, c date);


Table created.


SQL> insert into bisal values(1, 'a', 1, sysdate-1);


1 row created.


SQL> insert into bisal values(2, 'b', 2, sysdate);


1 row created.


SQL> insert into bisal values(3, 'c', 3, sysdate);


1 row created.


SQL> commit;


Commit complete.


SQL> select * from bisal;


        ID A          B C
---------- - ---------- ------------------
         1 a          1 30-NOV-20
         2 b          2 01-DEC-20
         3 c          3 01-DEC-20

Now the requirement: how do we recover the table that was just dropped?

Because the operations above include DDL, database objects have changed, so the data dictionary file needs to be recreated. Before using LogMiner to analyze redo log or archived log files, it is recommended to export the data dictionary to a text file with the dbms_logmnr_d package. The dictionary file is optional, but without it the dictionary-derived parts of the statements LogMiner reconstructs (table names, column names, etc.) and the values appear in hexadecimal form and cannot be read directly.

[oracle@vm-osvm77512-app admin]$ sqlplus logminer/logminer


SQL*Plus: Release 11.2.0.4.0 Production on Tue Dec 1 16:56:59 2020


Copyright (c) 1982, 2013, Oracle.  All rights reserved.




Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options


SQL> exec dbms_logmnr_d.build(dictionary_filename => 'dictionary.ora', dictionary_location => '/opt/applog/oracle/oradata/DCM', options => dbms_logmnr_d.store_in_flat_file);


PL/SQL procedure successfully completed.

Why export the data dictionary to a text file with dbms_logmnr_d:

1. If the tables being analyzed change, the database's data dictionary changes too, and the dictionary file must be recreated.

2. When analyzing another database's redo logs on a different database, a dictionary file generated from the source database is also required.

Confirm the online logs currently in use. As shown below, redo01.log (group 1) is the CURRENT log:

SQL> select group#, sequence#, status, first_change#, first_time from v$log order by first_change#;


    GROUP#  SEQUENCE# STATUS           FIRST_CHANGE# FIRST_TIME
---------- ---------- ---------------- ------------- ------------------
         2       2915 INACTIVE             396421406 27-NOV-20
         3       2916 INACTIVE             396547945 29-NOV-20
         1       2917 CURRENT              396658312 01-DEC-20


SQL> select sequence#, FIRST_CHANGE#, NEXT_CHANGE#,name from v$archived_log order by sequence# desc;


no rows selected

Register log files for analysis with dbms_logmnr.add_logfile. The first file is added with the dbms_logmnr.new option:

SQL> begin
  2  dbms_logmnr.add_logfile(logfilename=>'/opt/applog/oracle/oradata/DCM/redo01.log', options=>dbms_logmnr.new);  
  3  end;
  4  /


PL/SQL procedure successfully completed.

Subsequent files are added with dbms_logmnr.addfile:

SQL> begin
  2  dbms_logmnr.add_logfile(logfilename=>'/opt/applog/oracle/oradata/DCM/redo02.log', options=>dbms_logmnr.addfile);
  3  end;
  4  /


PL/SQL procedure successfully completed.

Start the log analysis. It can run either unconditionally, analyzing every log file registered in the list, or with conditions that restrict the analysis to a specified range:

SQL> exec dbms_logmnr.start_logmnr(dictfilename=>'/opt/applog/oracle/oradata/DCM/dictionary.ora');


PL/SQL procedure successfully completed.

With conditions, different parameter values can narrow the range of logs analyzed, for example a start/end time window:

START_LOGMNR Procedure
This procedure starts LogMiner by loading the dictionary that LogMiner 
will use to translate internal schema object identifiers to names.


Syntax
DBMS_LOGMNR.START_LOGMNR (
   startScn           IN NUMBER default 0,
   endScn             IN NUMBER default 0,
   startTime          IN DATE default '01-jan-1988',
   endTime            IN DATE default '31-dec-2110',
   DictFileName       IN VARCHAR2 default '',
   Options            IN BINARY_INTEGER default 0 );

For example, to analyze the full day of June 5, 2021:

EXECUTE dbms_logmnr.start_logmnr(
DictFileName => '/opt/applog/oracle/oradata/DCM/dictionary.ora', 
StartTime => to_date('2021-6-5 00:00:00','YYYY-MM-DD HH24:MI:SS'),
EndTime => to_date('2021-6-5 23:59:59','YYYY-MM-DD HH24:MI:SS'));

Once the analysis has started, everything LogMiner extracted can be retrieved from the dynamic performance view v$logmnr_contents. Filtering by username and table name locates the accidentally dropped table; executing the corresponding undo statements restores the original data:

SQL> select sql_redo, sql_undo from v$logmnr_contents where username='BISAL' and table_name='BISAL';


SQL_REDO
--------------------------------------------------------------------------------
SQL_UNDO
--------------------------------------------------------------------------------
ALTER TABLE "BISAL"."BISAL" RENAME TO "BIN$tWPQZiEzW2ngVfgWPgtqEw==$0" ;
drop table bisal AS "BIN$tWPQZiEzW2ngVfgWPgtqEw==$0" ;
create table bisal(id number primary key, a varchar2(1), b number, c date);


SQL_REDO
--------------------------------------------------------------------------------
SQL_UNDO
--------------------------------------------------------------------------------
insert into "BISAL"."BISAL"("ID","A","B","C") values ('1','a','1',TO_DATE('30-NO
V-20', 'DD-MON-RR'));
delete from "BISAL"."BISAL" where "ID" = '1' and "A" = 'a' and "B" = '1' and "C"
 = TO_DATE('30-NOV-20', 'DD-MON-RR') and ROWID = 'AAAHViAAJAAA6ZzAAA';


insert into "BISAL"."BISAL"("ID","A","B","C") values ('2','b','2',TO_DATE('01-DE
C-20', 'DD-MON-RR'));
delete from "BISAL"."BISAL" where "ID" = '2' and "A" = 'b' and "B" = '2' and "C"
 = TO_DATE('01-DEC-20', 'DD-MON-RR') and ROWID = 'AAAHViAAJAAA6ZzAAB';


SQL_REDO
--------------------------------------------------------------------------------
SQL_UNDO
--------------------------------------------------------------------------------
insert into "BISAL"."BISAL"("ID","A","B","C") values ('3','c','3',TO_DATE('01-DE
C-20', 'DD-MON-RR'));
delete from "BISAL"."BISAL" where "ID" = '3' and "A" = 'c' and "B" = '3' and "C"
 = TO_DATE('01-DEC-20', 'DD-MON-RR') and ROWID = 'AAAHViAAJAAA6ZzAAC';


6 rows selected.

SQL_REDO is the SQL statement that redoes (replays) the row change recorded in the redo record, i.e. the forward operation.

SQL_UNDO is the SQL statement that rolls back (reverses) the row change recorded in the redo record, i.e. the reverse operation.

Note, however, that the results in v$logmnr_contents are visible only in the session that ran dbms_logmnr.start_logmnr; other sessions cannot see them, because all of LogMiner's staging lives in PGA memory. To make the data visible to other sessions, or to keep it around, materialize it into a table with CTAS.
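A minimal CTAS sketch along those lines (the table name logmnr_bisal is my own choice), run in the same session before calling end_logmnr:

```sql
-- Materialize the mining results so they survive end_logmnr
-- and become visible to other sessions
create table logminer.logmnr_bisal as
select * from v$logmnr_contents
 where username = 'BISAL' and table_name = 'BISAL';
```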

P.S. The first record above shows how Oracle drops a table: the DROP first renames the table and then drops it, which is why the table can be recovered from the recycle bin.
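Which also means that, when the recycle bin is enabled, the dropped table itself can often be restored directly with FLASHBACK TABLE, without mining any logs at all (a sketch; the new name bisal_old is my own choice, needed here because a same-name table has since been created):

```sql
-- Restore the dropped table from the recycle bin under a new name
flashback table bisal to before drop rename to bisal_old;
```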

To finish the analysis, run dbms_logmnr.end_logmnr; the PGA area is then cleared:

SQL> exec dbms_logmnr.end_logmnr;


PL/SQL procedure successfully completed.

Querying the view again now raises an error, confirming that the data has been cleared:

SQL> select count(*) from v$logmnr_contents;
select count(*) from v$logmnr_contents
                     *
ERROR at line 1:
ORA-01306: dbms_logmnr.start_logmnr() must be invoked before selecting from
v$logmnr_contents

Although the procedure involves quite a few steps, it is fairly methodical. It can serve as an emergency remedy for everyday operator errors, or, by analyzing the data in the logs, support further use cases.
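For reference, the steps above can be condensed into one sketch (the archived log file name is illustrative; substitute your own online/archived log paths):

```sql
-- 1. Register the log file(s) and start the analysis using the
--    online catalog as the dictionary source.
begin
  dbms_logmnr.add_logfile(
    logfilename => '/u01/arch/1_30_1058465120.dbf',
    options     => dbms_logmnr.new);
  dbms_logmnr.start_logmnr(
    options => dbms_logmnr.dict_from_online_catalog);
end;
/

-- 2. Mine the redo/undo SQL for the operations of interest.
select sql_redo, sql_undo
  from v$logmnr_contents
 where username = 'BISAL' and table_name = 'BISAL';

-- 3. End the session, releasing the PGA memory LogMiner used.
exec dbms_logmnr.end_logmnr
```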

This article takes part in the Tencent Cloud self-media sync exposure program and is shared from the author's personal site/blog.
Originally published: 2021/06/10. For takedown requests, contact cloudcommunity@tencent.com.
