python - Size of FITS file different before and after processing -
I have a problem with the file size after processing. I wrote a script that creates an edited image (from the raw image data I subtract the flat-field image data and the dark image data), and the code converts the float numpy array to big endian. The problem is: at the beginning I have a FITS file of 2.8 MiB with dtype >i2, but after processing the file is 11 MiB with dtype float64, and I don't know why. In IDL there is the FIX method (http://www.exelisvis.com/docs/fix.html); in Python I use imgg = imgg.astype(np.int16, copy=False), and then the image file is 2.8 MiB again, in black and white.
Any suggestions, please?
If the header of the image contains non-trivial values for the optional BSCALE and/or BZERO keywords (that is, BSCALE != 1 and/or BZERO != 0), the raw data in the file must be rescaled to physical values according to the formula:
physical_value = BZERO + BSCALE * array_value
As BZERO and BSCALE are floating point values, the resulting value must be a float as well. If the original values were 16-bit integers, the resulting values are single-precision (32-bit) floats; if the original values were 32-bit integers, the resulting values are double-precision (64-bit) floats. Subsequent arithmetic in NumPy (such as subtracting flat and dark frames) can promote the data further to float64, which at four times the size of int16 turns a 2.8 MiB image into roughly 11 MiB.
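The promotion rule above can be sketched in plain NumPy. The BSCALE/BZERO values below are hypothetical; the point is that the library picks a float type wide enough to hold the scaled result:

```python
import numpy as np

# Hypothetical header values; BSCALE/BZERO are stored as floats.
bscale, bzero = 2.0, 0.0

raw16 = np.array([0, 100, 200], dtype=np.int16)  # as stored when BITPIX = 16
raw32 = np.array([0, 100, 200], dtype=np.int32)  # as stored when BITPIX = 32

# 16-bit ints rescale to float32, 32-bit ints to float64:
phys16 = np.float32(bzero) + np.float32(bscale) * raw16.astype(np.float32)
phys32 = np.float64(bzero) + np.float64(bscale) * raw32.astype(np.float64)

print(phys16.dtype, phys32.dtype)          # float32 float64
print(phys16.itemsize // raw16.itemsize)   # each pixel doubles in size
```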
This automatic scaling can catch you off guard if you're not expecting it, because it doesn't happen until the data portion of the HDU is accessed (to allow things like updating the header without rescaling the data). For example:
>>> hdul = fits.open('scaled.fits')
>>> image = hdul['sci', 1]
>>> image.header['BITPIX']
32
>>> image.header['BSCALE']
2.0
>>> data = image.data  # read the data into memory
>>> data.dtype
dtype('float64')  # got float64 despite BITPIX = 32 (32-bit int)
>>> image.header['BITPIX']  # BITPIX is automatically updated
-64
>>> 'BSCALE' in image.header  # and the BSCALE keyword is removed
False
The reason for this is that once the user accesses the data, they may also manipulate it and perform calculations on it. If the data were forced to remain as integers, a great deal of precision would be lost. So it is best to err on the side of not losing data, at the cost of causing some confusion at first.
If the data must be returned to integers before saving, use the ImageHDU.scale method:
>>> image.scale('int32')
>>> image.header['BITPIX']
32
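What such a scale-back amounts to can be sketched in plain NumPy: pick BSCALE/BZERO so the float data spans the integer range, store the quantized integers, and let readers recover approximate physical values. The min/max mapping below is one plausible choice, not necessarily the library's exact formula:

```python
import numpy as np

# Hypothetical float image data to be stored back as int16.
physical = np.linspace(0.0, 1000.0, 5, dtype=np.float64)

# Map the data range onto the signed 16-bit range.
lo, hi = physical.min(), physical.max()
bscale = (hi - lo) / (2**16 - 1)
bzero = lo + bscale * 2**15

stored = np.round((physical - bzero) / bscale).astype(np.int16)
recovered = bzero + bscale * stored  # what a reader computes on access

print(stored.dtype)  # int16: back to 2 bytes per pixel on disk
print(np.allclose(recovered, physical, atol=bscale))  # True
```

The quantization error is at most half of BSCALE per pixel, which is the precision cost the answer above warns about.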
Alternatively, if the file was opened with mode='update' along with the scale_back=True argument, the original BSCALE and BZERO scaling is automatically re-applied to the data before saving. Usually this is not desirable, especially when converting from floating point back to unsigned integer values. But it may be useful in cases where the raw data needs to be modified in step with corresponding changes in the physical values.
To prevent rescaling from occurring at all (which is good for updating headers; even if you don't intend for the code to access the data, it's best to err on the side of caution here), use the do_not_scale_image_data argument when opening the file:
>>> hdul = fits.open('scaled.fits', do_not_scale_image_data=True)
>>> image = hdul['sci', 1]
>>> image.data.dtype
dtype('int32')
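Putting this together for the original problem, here is a hedged end-to-end sketch (assuming astropy is installed; file names and data are made up): write a 16-bit scaled image, let access promote it to float, then scale back before saving so the file on disk stays int16-sized.

```python
import os
import tempfile

import numpy as np
from astropy.io import fits

with tempfile.TemporaryDirectory() as tmp:
    path = os.path.join(tmp, 'scaled.fits')
    out = os.path.join(tmp, 'out.fits')

    # Write a small image stored as 16-bit ints with BSCALE/BZERO.
    data = np.arange(100, dtype=np.float64).reshape(10, 10)
    hdu = fits.PrimaryHDU(data)
    hdu.scale('int16', 'minmax')
    hdu.writeto(path)

    with fits.open(path) as hdul:
        image = hdul[0]
        print(image.data.dtype)        # promoted to float on access
        image.scale('int16')           # convert back before saving
        print(image.header['BITPIX'])  # 16
        hdul.writeto(out)

    # Check that the saved file really holds 16-bit integers.
    with fits.open(out, do_not_scale_image_data=True) as check:
        raw_dtype = check[0].data.dtype
    final_size = os.path.getsize(out)
```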