http://asktom.oracle.com/pls/asktom/f?p=100:11:0::::P11_QUESTION_ID:1992907061984

Here is what I would do (so we can maximize shared SQL reuse -- a certain habit of mine).
create table dump_csv_parameters
( job_id          int primary key,
  query           clob,
  separator       varchar2(5),
  dir             varchar2(255),
  filename        varchar2(255),
  last_time       date,
  last_row_count  number
);
We will use this table to store the parameters for a dump_csv call. For convenience, we'll also keep the time of the last run and the last row count in here -- so we can easily verify row counts, timings, and such.

Next, we'll create a small interface routine that reads the parameters for a given job and runs dump_csv:

create or replace procedure run_dump_csv( p_job_id in number )
as
    l_cnt number;
begin
    for x in ( select * from dump_csv_parameters where job_id = p_job_id )
    loop
        l_cnt := dump_csv( dbms_lob.substr( x.query, 32000, 1 ),
                           x.separator, x.dir, x.filename );

        update dump_csv_parameters
           set last_time = sysdate, last_row_count = l_cnt
         where job_id = p_job_id;
    end loop;
end;
/
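The routine above assumes a dump_csv function along the lines of the one in the linked thread: it takes a query, a separator, a directory, and a filename, writes the result set to that file, and returns the row count. A minimal sketch of such a function (assuming UTL_FILE is allowed to write to p_dir, and that every column is fetched as a string) might look like:

create or replace function dump_csv( p_query     in varchar2,
                                     p_separator in varchar2 default ',',
                                     p_dir       in varchar2,
                                     p_filename  in varchar2 )
    return number
is
    l_output    utl_file.file_type;
    l_cursor    integer default dbms_sql.open_cursor;
    l_col_cnt   number;
    l_desc_tbl  dbms_sql.desc_tab;
    l_col_val   varchar2(4000);
    l_status    integer;
    l_sep       varchar2(5);
    l_cnt       number default 0;
begin
    l_output := utl_file.fopen( p_dir, p_filename, 'w', 32767 );

    -- parse the query and define every column as a varchar2
    dbms_sql.parse( l_cursor, p_query, dbms_sql.native );
    dbms_sql.describe_columns( l_cursor, l_col_cnt, l_desc_tbl );
    for i in 1 .. l_col_cnt loop
        dbms_sql.define_column( l_cursor, i, l_col_val, 4000 );
    end loop;

    l_status := dbms_sql.execute( l_cursor );

    -- write one separated line per fetched row
    while dbms_sql.fetch_rows( l_cursor ) > 0 loop
        l_sep := null;
        for i in 1 .. l_col_cnt loop
            dbms_sql.column_value( l_cursor, i, l_col_val );
            utl_file.put( l_output, l_sep || l_col_val );
            l_sep := p_separator;
        end loop;
        utl_file.new_line( l_output );
        l_cnt := l_cnt + 1;
    end loop;

    dbms_sql.close_cursor( l_cursor );
    utl_file.fclose( l_output );
    return l_cnt;
end;
/

This is just a sketch -- the real function should also handle exceptions (closing the cursor and file on error) and any column types you don't want implicitly converted to strings.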
And now we can actually schedule the job itself. Here we schedule it to run every night at midnight. Note how we pass the VARIABLE job -- not a VALUE for a job -- into the procedure (if you are interested in how this all works, I have a chapter in my book showing how the job queues actually do their stuff). When the job is run, it'll be passed its unique job id. If we schedule all jobs in our system this way, we'll have exactly one query in the shared pool:

begin run_dump_csv( JOB ); end;

instead of

begin dump_csv( 'select * from t1', '|', '/tmp/', 't1.dat' ); end;
begin dump_csv( 'select * from t2', '|', '/tmp/', 't2.dat' ); end;
....

declare
    l_job number;
begin
    dbms_job.submit( l_job,
                     'run_dump_csv( JOB );',
                     trunc(sysdate)+1,
                     'trunc(sysdate)+1' );

    insert into dump_csv_parameters
        ( job_id, query, separator, dir, filename )
    values
        ( l_job, 'select * from emp', '|', '/tmp/', 'emp.dat' );
    commit;
end;
/
and when this job runs, it'll create the file and update the table for us -- every night at midnight....
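And since the job updates its own bookkeeping columns, checking up on the extracts is just a query. For example, to see when each job last ran and how many rows it wrote, and to confirm the job is still on the queue:

select job_id, filename, last_time, last_row_count
  from dump_csv_parameters
 order by last_time desc;

select job, what, next_date
  from user_jobs;

If last_time stops advancing or last_row_count looks wrong, you know exactly which job_id to investigate.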