Friday, February 12, 2016

impdp hangs

I had to restore a 20GB dump on AWS, and impdp appeared to hang at the index-creation step for almost 3 hours,

and I couldn't tell what it was doing.



The session browser showed the session waiting on "wait for unread message on broadcast channel".
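
To see every session attached to the job (master and workers) and what each one is waiting on, a query along these lines helps (a sketch, assuming DBA privileges; it joins dba_datapump_sessions to v$session by session address):

-- to list the data pump master/worker sessions and their current waits
select dps.owner_name, dps.job_name, s.sid, s.serial#, s.event, s.state, s.seconds_in_wait
from dba_datapump_sessions dps, v$session s
where s.saddr = dps.saddr
order by dps.job_name, s.sid;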



dba_datapump_jobs showed the job state as "EXECUTING".



The cross-check against dba_objects (query below) looked fine too.



So, to get the status of the import process, I had to query v$session_longops. Note the SOFAR, TOTALWORK and MESSAGE columns: for my import session they kept changing, which confirmed the job was not actually frozen.




Here are the queries...

-- to get the list of import/export running
select * from dba_datapump_jobs;

-- to check the status
select j.owner_name,j.job_name,j.state,o.object_type,o.status, o.timestamp
from dba_datapump_jobs j, dba_objects o
where j.owner_name = o.owner
and j.job_name = o.object_name;

-- to check what its doing
select opname, target, sofar, totalwork, units, elapsed_seconds, message
from v$session_longops
where sofar != totalwork
order by start_time desc;
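
To confirm what the workers are actually executing (in my case, the CREATE INDEX statements), you can also pull the current SQL statement for each Data Pump session. A rough sketch (sessions that are idle at that moment simply won't show up):

-- to see the sql currently being run by each data pump session
select dps.job_name, s.sid, q.sql_text
from dba_datapump_sessions dps, v$session s, v$sql q
where s.saddr = dps.saddr
and q.sql_id = s.sql_id
and q.child_number = s.sql_child_number;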

Thursday, February 11, 2016

ORA-27046: file size is not a multiple of logical block size

I had to install a database on AWS, and while transferring a dump of around 20GB using dbms_file_transfer, it gave me the following error:

BEGIN
  DBMS_FILE_TRANSFER.PUT_FILE(
    source_directory_object      => 'DATA_PUMP_XXX',
    source_file_name             => 'xxyyzzz.DMP',
    destination_directory_object => 'DATA_PUMP_DIR',
    destination_file_name        => 'xxyyzzz.DMP',
    destination_database         => 'target_dblink'
  );
END;
/

ORA-19505: failed to identify file "D:\Clients\XXX\DataPump\xxyyzzz.DMP"
ORA-27046: file size is not a multiple of logical block size
OSD-04012: file size mismatch (OS 1829415400)
ORA-06512: at "SYS.DBMS_FILE_TRANSFER", line 60
ORA-06512: at "SYS.DBMS_FILE_TRANSFER", line 168
ORA-06512: at line 2
19505. 00000 -  "failed to identify file \"%s\""
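
For reference, the PUT_FILE call above assumes the directory objects and the database link are already in place; a minimal setup sketch (all names are placeholders matching the call above, and the destination directory must exist on the remote database):

-- on the local (source) database
CREATE DIRECTORY data_pump_xxx AS 'D:\Clients\XXX\DataPump';
CREATE DATABASE LINK target_dblink
  CONNECT TO remote_user IDENTIFIED BY remote_password
  USING 'remote_tns_alias';

-- DATA_PUMP_DIR normally already exists on the destination database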

A Google search pointed to the file size not being a multiple of the block size, but in the real world I have no control over the dump file size.

The source file was password-protected with a very long password string and transferred to the AWS jump box via FTP. The file itself was not corrupted: I could unzip it and read the contents of the log file and so on.

For the second attempt, I zipped the file with a considerably shorter password, transferred it to the jump box via FTP, and interestingly enough the same command (above) worked.

I'm not sure what went wrong the first time, so if you get this error, it's most likely not the size itself but the file getting mangled during transfer.

1) Try to transfer the file again (a quick size check is sketched after this list)
2) If it's zipped, use a different tool/format
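
Before retrying the transfer, one quick sanity check (a sketch; the directory and file name are the ones used in the PUT_FILE call above) is to read the size of the file as the database sees it and compare it against the source file's size in bytes:

DECLARE
  l_exists  BOOLEAN;
  l_length  NUMBER;
  l_blksize BINARY_INTEGER;
BEGIN
  -- report whether the dump exists and its on-disk size in bytes
  UTL_FILE.FGETATTR('DATA_PUMP_XXX', 'xxyyzzz.DMP', l_exists, l_length, l_blksize);
  DBMS_OUTPUT.PUT_LINE('exists: ' || CASE WHEN l_exists THEN 'YES' ELSE 'NO' END);
  DBMS_OUTPUT.PUT_LINE('size in bytes: ' || l_length);
END;
/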