basic compression  
News Group: comp.databases.oracle.server

Initially a table was created with basic compression and loaded using 
direct-path insert. During its lifetime the table received some 
updates, some conventional (non-direct-path) inserts, deletions, etc., 
so after a while the table consists of compressed and uncompressed 
blocks living side by side.

Do you know whether it is possible to tell if a particular block is 
stored in compressed or uncompressed form when basic compression is 
used for a table? Is this information available at the block level 
through some Oracle system functions that operate on blocks?

thank you
geos
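
One possibility the thread does not touch on: from 11gR2 onward the
DBMS_COMPRESSION package has a GET_COMPRESSION_TYPE function that
reports the compression type of the block containing a given ROWID.
A minimal sketch, assuming a hypothetical table MY_TABLE; the numeric
return codes map to the package constants (COMP_NOCOMPRESS and so on),
and how basic-compressed rows are reported varies by release, so check
the DBMS_COMPRESSION documentation for your version:

    -- Hedged sketch: MY_TABLE is a placeholder; needs execute privilege
    -- on DBMS_COMPRESSION (11.2 or later).
    SET SERVEROUTPUT ON
    DECLARE
      l_type NUMBER;
    BEGIN
      FOR r IN (SELECT ROWID AS rid FROM my_table WHERE ROWNUM <= 10) LOOP
        l_type := DBMS_COMPRESSION.GET_COMPRESSION_TYPE(USER, 'MY_TABLE', r.rid);
        DBMS_OUTPUT.PUT_LINE(ROWIDTOCHAR(r.rid) ||
                             ' -> compression type code ' || l_type);
      END LOOP;
    END;
    /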

Date Posted: 15-Jan-2015, at 9:58 PM EST
From: geos
 
Re: basic compression  
News Group: comp.databases.oracle.server
On Thu, 22 Jan 2015 06:21:58 -0800, ddf wrote:

> On Friday, January 16, 2015 at 12:01:10 PM UTC-7, Mark D Powell wrote:
>> On Thursday, January 15, 2015 at 3:58:56 PM UTC-5, geos wrote:
>> > Initially a table was created with basic compression and loaded using
>> > direct-path insert. During its lifetime the table received some
>> > updates, some conventional (non-direct-path) inserts, deletions,
>> > etc., so after a while the table consists of compressed and
>> > uncompressed blocks living side by side.
>> > 
>> > Do you know whether it is possible to tell if a particular block is
>> > stored in compressed or uncompressed form when basic compression is
>> > used for a table? Is this information available at the block level
>> > through some Oracle system functions that operate on blocks?
>> > 
>> > thank you geos
>> 
>> I do not have time to read Joel's reference at the moment, so my
>> apologies if this is covered in that material, but I suspect you can
>> see this in a block dump of the table/index.  That is, I expect there
>> is a flag set that will show in the dump.  I do not have time to test
>> this either, but it should only take you a few moments to find out.
>> 
>> HTH -- Mark D Powell --
> 
> A block dump will show compressed data by virtue of the row length;
> compressed data will show 'abnormally' short lengths (as compared to
> what the length SHOULD be based on the column definitions plus
> overhead).  Actually the data isn't compressed, it's 'de-duplicated'. 
> It's an interesting mechanism, described by Jonathan in the provided
> link.  Suffice it to say the repeating values are 'cataloged' into a
> 'table' and each occurrence of a given token is replaced by its
> 'identifier' as referenced by the token 'table'.  You need to read
> Jonathan's post in its entirety; do not rely on my synopsis.  Jonathan
> also takes you through the entire process of generating and reading a
> binary block dump so be sure to read and understand that as well.
> 
> 
> David Fitzjarrell

That mechanism has been available in DB2 since version 9, so it has 
been around for a long time. Oracle's de-duplication technology is very 
similar.



-- 
Mladen Gogala
The Oracle Whisperer
http://mgogala.byethost5.com
Je suis Charlie

Date Posted: 23-Jan-2015, at 1:28 AM EST
From: Mladen Gogala
 
Re: basic compression  
News Group: comp.databases.oracle.server
On Thursday, January 22, 2015 at 1:55:42 PM UTC-8, geos wrote:
> On 22.01.2015 15:21, ddf wrote:
> >> I do not have time to read Joel's reference at the moment, so my apologies if this is covered in that material, but I suspect you can see this in a block dump of the table/index.  That is, I expect there is a flag set that will show in the dump.  I do not have time to test this either, but it should only take you a few moments to find out.
> >> HTH -- Mark D Powell --
> >
> > A block dump will show compressed data by virtue of the row length; compressed data will show 'abnormally' short lengths (as compared to what the length SHOULD be based on the column definitions plus overhead).  Actually the data isn't compressed, it's 'de-duplicated'.  It's an interesting mechanism, described by Jonathan in the provided link.  Suffice it to say the repeating values are 'cataloged' into a 'table' and each occurrence of a given token is replaced by its 'identifier' as referenced by the token 'table'.  You need to read Jonathan's post in its entirety; do not rely on my synopsis.  Jonathan also takes you through the entire process of generating and reading a binary block dump so be sure to read and understand that as well.
> > David Fitzjarrell
> 
> Thank you all for pointing me to these articles. I have started reading 
> them, and even though I won't be allowed to do a block dump, I can see 
> there is a lot of useful information. I thought that maybe there was a 
> way to tell compressed from uncompressed by executing some procedure, 
> but I also appreciate learning something new.
> 
> thank you,
> geos

I think a compression-aware option to the VSIZE function would be a reasonable enhancement to ask for.  Not that they would do it.

jg
-- 
@home.com is bogus.
http://www.crn.com/news/data-center/300075457/oracles-ellison-we-have-a-new-weapon-in-our-arsenal-and-its-price.htm
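
For reference, VSIZE today reports the byte length of the value's
internal representation as it is handed back to the session, so it says
nothing about how the row is stored on disk. A tiny illustration, with
hypothetical table and column names:

    -- VSIZE reflects the decompressed representation returned to the
    -- client, not the on-disk length (table/column names are made up).
    SELECT owner_name, VSIZE(owner_name) AS bytes_internal
    FROM   my_table
    WHERE  ROWNUM <= 5;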

Date Posted: 22-Jan-2015, at 4:50 PM EST
From: Joel Garry
 
Re: basic compression  
News Group: comp.databases.oracle.server
On 22.01.2015 15:21, ddf wrote:
>> I do not have time to read Joel's reference at the moment, so my apologies if this is covered in that material, but I suspect you can see this in a block dump of the table/index.  That is, I expect there is a flag set that will show in the dump.  I do not have time to test this either, but it should only take you a few moments to find out.
>> HTH -- Mark D Powell --
>
> A block dump will show compressed data by virtue of the row length; compressed data will show 'abnormally' short lengths (as compared to what the length SHOULD be based on the column definitions plus overhead).  Actually the data isn't compressed, it's 'de-duplicated'.  It's an interesting mechanism, described by Jonathan in the provided link.  Suffice it to say the repeating values are 'cataloged' into a 'table' and each occurrence of a given token is replaced by its 'identifier' as referenced by the token 'table'.  You need to read Jonathan's post in its entirety; do not rely on my synopsis.  Jonathan also takes you through the entire process of generating and reading a binary block dump so be sure to read and understand that as well.
> David Fitzjarrell

Thank you all for pointing me to these articles. I have started reading 
them, and even though I won't be allowed to do a block dump, I can see 
there is a lot of useful information. I thought that maybe there was a 
way to tell compressed from uncompressed by executing some procedure, 
but I also appreciate learning something new.

thank you,
geos

Date Posted: 22-Jan-2015, at 10:55 PM EST
From: geos
 
Re: basic compression  
News Group: comp.databases.oracle.server
On Friday, January 16, 2015 at 12:01:10 PM UTC-7, Mark D Powell wrote:
> On Thursday, January 15, 2015 at 3:58:56 PM UTC-5, geos wrote:
> > Initially a table was created with basic compression and loaded using 
> > direct-path insert. During its lifetime the table received some 
> > updates, some conventional (non-direct-path) inserts, deletions, etc., 
> > so after a while the table consists of compressed and uncompressed 
> > blocks living side by side.
> > 
> > Do you know whether it is possible to tell if a particular block is 
> > stored in compressed or uncompressed form when basic compression is 
> > used for a table? Is this information available at the block level 
> > through some Oracle system functions that operate on blocks?
> > 
> > thank you
> > geos
> 
> I do not have time to read Joel's reference at the moment, so my apologies if this is covered in that material, but I suspect you can see this in a block dump of the table/index.  That is, I expect there is a flag set that will show in the dump.  I do not have time to test this either, but it should only take you a few moments to find out.
> 
> HTH -- Mark D Powell --

A block dump will show compressed data by virtue of the row length; compressed data will show 'abnormally' short lengths (as compared to what the length SHOULD be based on the column definitions plus overhead).  Actually the data isn't compressed, it's 'de-duplicated'.  It's an interesting mechanism, described by Jonathan in the provided link.  Suffice it to say the repeating values are 'cataloged' into a 'table' and each occurrence of a given token is replaced by its 'identifier' as referenced by the token 'table'.  You need to read Jonathan's post in its entirety; do not rely on my synopsis.  Jonathan also takes you through the entire process of generating and reading a binary block dump so be sure to read and understand that as well.


David Fitzjarrell
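
For anyone who wants to follow Jonathan's block-dump walkthrough on
their own data, a rough sketch of how such a dump is usually produced
(placeholder table name; you need suitable privileges, and the output
goes to a trace file in the diagnostic directory):

    -- 1. Find the relative file number and block number for a row.
    SELECT DBMS_ROWID.ROWID_RELATIVE_FNO(ROWID) AS rel_fno,
           DBMS_ROWID.ROWID_BLOCK_NUMBER(ROWID) AS block_no
    FROM   my_table
    WHERE  ROWNUM = 1;

    -- 2. Dump that block to a trace file, then look for the token
    --    table and flags David and Mark describe (fill in the values
    --    from step 1).
    -- ALTER SYSTEM DUMP DATAFILE <rel_fno> BLOCK <block_no>;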

Date Posted: 22-Jan-2015, at 6:21 AM EST
From: ddf
 
Re: basic compression  
News Group: comp.databases.oracle.server
On Thursday, January 15, 2015 at 3:58:56 PM UTC-5, geos wrote:
> Initially a table was created with basic compression and loaded using 
> direct-path insert. During its lifetime the table received some 
> updates, some conventional (non-direct-path) inserts, deletions, etc., 
> so after a while the table consists of compressed and uncompressed 
> blocks living side by side.
> 
> Do you know whether it is possible to tell if a particular block is 
> stored in compressed or uncompressed form when basic compression is 
> used for a table? Is this information available at the block level 
> through some Oracle system functions that operate on blocks?
> 
> thank you
> geos

I do not have time to read Joel's reference at the moment, so my apologies if this is covered in that material, but I suspect you can see this in a block dump of the table/index.  That is, I expect there is a flag set that will show in the dump.  I do not have time to test this either, but it should only take you a few moments to find out.

HTH -- Mark D Powell --
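
To try Mark's suggestion you first need a table in the mixed state geos
describes. A minimal setup sketch with made-up names (recent releases
accept COMPRESS BASIC; plain COMPRESS also enables basic compression on
older ones):

    -- Basic compression only compresses blocks written by direct-path
    -- operations; conventional DML leaves its rows uncompressed.
    CREATE TABLE comp_demo COMPRESS BASIC AS
    SELECT * FROM all_objects WHERE 1 = 0;

    -- Direct-path load: these blocks are written compressed.
    INSERT /*+ APPEND */ INTO comp_demo SELECT * FROM all_objects;
    COMMIT;

    -- Conventional insert: these rows land in uncompressed blocks.
    INSERT INTO comp_demo SELECT * FROM all_objects WHERE ROWNUM <= 100;
    COMMIT;

Blocks from the two inserts can then be dumped and compared.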

Date Posted: 16-Jan-2015, at 11:01 AM EST
From: Mark D Powell
 
Re: basic compression  
News Group: comp.databases.oracle.server
On Thursday, January 15, 2015 at 12:58:56 PM UTC-8, geos wrote:
> Initially a table was created with basic compression and loaded using 
> direct-path insert. During its lifetime the table received some 
> updates, some conventional (non-direct-path) inserts, deletions, etc., 
> so after a while the table consists of compressed and uncompressed 
> blocks living side by side.
> 
> Do you know whether it is possible to tell if a particular block is 
> stored in compressed or uncompressed form when basic compression is 
> used for a table? Is this information available at the block level 
> through some Oracle system functions that operate on blocks?
> 
> thank you
> geos

I wouldn't know, but there might be some clues in Jonathan's series: http://allthingsoracle.com/compression-oracle-basic-table-compression/

jg
-- 
@home.com is bogus.
Brian Williams and Dick Pope.

Date Posted: 15-Jan-2015, at 3:23 PM EST
From: Joel Garry