Hi All,
1) In Spark, I created a CarbonData table:
create table temp(col1 string, col2 string) STORED BY 'org.apache.carbondata.format';
2) Inserted into the table once:
insert into temp values ('test', 'test'), ('test2', 'test2');
3) Dropped the table:
drop table temp;
In the steps above, which flow creates the lock files below?
clean_files.lock
compaction.lock
droptable.lock
meta.lock
update.lock
When I repeat the above steps, occasionally some files are not deleted in HDFS after the drop table:
<size 0 bytes> clean_files.lock
<size 0 bytes> compaction.lock
<size 0 bytes> droptable.lock
<size 0 bytes> meta.lock
<size 0 bytes> update.lock
metadata
<size 190 bytes> metadata/schema
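As a workaround while investigating, the leftover zero-byte lock files can be found and removed by size. The sketch below uses the local filesystem for illustration (the path `/tmp/carbon_demo/temp` is made up); on HDFS the analogous commands would be `hdfs dfs -ls` and `hdfs dfs -rm` against the table's store path. Only delete the locks once you are sure no CarbonData job is still running against the table.

```shell
# Simulate a leftover table directory containing zero-byte lock files
# (stand-in for the HDFS table path; replace with your store location).
mkdir -p /tmp/carbon_demo/temp
touch /tmp/carbon_demo/temp/clean_files.lock \
      /tmp/carbon_demo/temp/compaction.lock \
      /tmp/carbon_demo/temp/droptable.lock \
      /tmp/carbon_demo/temp/meta.lock \
      /tmp/carbon_demo/temp/update.lock

# List the zero-byte *.lock files left behind
find /tmp/carbon_demo/temp -name '*.lock' -size 0

# Remove them (only when no job holds the locks)
find /tmp/carbon_demo/temp -name '*.lock' -size 0 -delete
```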
thanks,
sandeep