
High capacity data hiding scheme based on (7, 4) Hamming code.

Zekun Cao1, Zhaoxia Yin2, Honghe Hu1, Xiangping Gao1, Liangmin Wang1.   

Abstract

Aiming to embed a large amount of data while minimizing the sum of the costs of all changed pixels, a novel high capacity data hiding scheme based on the (7, 4) Hamming code is realized by a family of algorithms. First, n (n = 1, 2, 3) cover pixels are assigned to one group according to the payload. Then, the 128 binary strings of length seven are divided into eight sets according to the syndrome of each string; strings that share the same syndrome are classified into one set. Finally, a binary string from the set determined by the data to be embedded is chosen to modify some of the least significant bits of the n cover pixels. The experimental results demonstrate that the image quality of the proposed method under a high embedding payload is superior to those of the related schemes.


Keywords:  Data hiding; Embedding capacity; Hamming code; Image quality

Year:  2016        PMID: 27026872      PMCID: PMC4766181          DOI: 10.1186/s40064-016-1818-0

Source DB:  PubMed          Journal:  Springerplus        ISSN: 2193-1801


Background

Data hiding, frequently referred to interchangeably as information hiding, is the art of embedding additional data in a certain carrier (Zielińska et al. 2014). These carriers are typically digital media files transmitted on the Internet, such as images, audio, video, or text (Ker et al. 2013). Historically, the design of data hiding schemes for digital images has relied heavily on heuristic principles (Feng et al. 2015; Hong et al. 2015; Qian and Zhang 2015; Xia et al. 2014a, b). The current trend is to constrain the embedding changes to image segments with complex content. Such adaptive data hiding schemes are typically realized by first defining the cost of changing each pixel and then embedding the additional data while minimizing the sum of the costs of all changed pixels. One way to achieve this goal is to apply error correcting codes to data hiding, and many researchers have worked in this area (Chang and Chou 2008; Chen et al. 2013; Liu 2007; Ma et al. 2013; Wang 2009; Yin et al. 2010; Zhang et al. 2007; Zhu et al. 2010). Crandall originally proposed a data hiding scheme named matrix encoding (Crandall 1998) in 1998. In this scheme, k bits are embedded into 2^k − 1 cover pixels by modifying the least significant bit (LSB) of at most one pixel, so the embedding capacity reaches k/(2^k − 1) bit per pixel (bpp). Based on matrix encoding, Zhang et al. (2007) proposed the "Hamming+1" scheme in 2007. Compared with matrix encoding, it uses one more cover pixel to embed one more bit while the embedding cost remains unchanged; the embedding capacity thus increases to (k + 1)/2^k bpp. Later, Chang et al. proposed a new scheme (Chang and Chou 2008) based on the idea of classification in 2008. All binary strings are assigned to eight sets, and a binary string of length 2^k − 1 in a specific set is selected to embed k bits. It introduced a new idea for applying Hamming codes to data hiding.
However, its embedding capacity is not improved over the previous two schemes; it equals that of the matrix encoding scheme (Crandall 1998). The marked-image quality of the aforementioned schemes is good when the embedding payload is low (no more than k/(2^k − 1) or (k + 1)/2^k bpp), but it degrades sharply as the embedding payload increases. To address this problem, a new data hiding scheme based on the (7, 4) Hamming code is proposed in this paper. The marked-image quality of the proposed scheme is superior to those of the related works in Crandall (1998), Zhang et al. (2007) and Chang and Chou (2008) under a high embedding payload.

Related works

The Hamming code

An error correcting code can not only detect that errors have occurred but also locate the error positions. The Hamming code is a linear error correcting code that can detect and correct single-bit errors. The (n, n − k) Hamming code uses n code bits to transmit n − k message bits; the other k bits, used for error correction, are called parity check bits, where n = 2^k − 1 over the binary field. S = {C1, C2, …, C|S|} is a set of code words. The number of elements of S, denoted |S|, is called the cardinality of the code. For any two code words x = (x1, x2, …, xn) ∊ S and y = (y1, y2, …, yn) ∊ S, the Hamming distance is defined by d(x, y) = |{i | xi ≠ yi}|. The minimum distance of the code S is defined as dmin = min {d(x, y) | x, y ∊ S, x ≠ y}. The covering radius of the code S is r if any binary string u = (u1, u2, …, un) differs from at least one code word x = (x1, x2, …, xn) ∊ S in at most r positions. The minimum distance dmin measures the error-correcting capability, and the covering radius r measures the maximum distortion that occurs when a binary string is replaced by a proper code word. Therefore, a large minimum distance dmin is preferable for error correction, whereas a small covering radius r is preferable for steganography. The (7, 4) Hamming code is a binary code of length n = 7, with cardinality |S| = 16, minimum distance dmin = 3, and covering radius r = 1.
The (7, 4) Hamming code is now taken as an example to demonstrate how a Hamming code corrects an error bit. Suppose that the message bits are m = (1010). First, the code generator matrix G is used to form the n code bits as C = m · G. With the generator matrix
G =
1 1 1 0 0 0 0
1 0 0 1 1 0 0
0 1 0 1 0 1 0
1 1 0 1 0 0 1
the code word is C = (1011010). Next, the code word C is transmitted to a receiver via a noisy communication channel. Suppose that the received word is C′ = (1010010). Then the parity check matrix
H =
1 0 1 0 1 0 1
0 1 1 0 0 1 1
0 0 0 1 1 1 1
is used to compute the syndrome vector S = H · C′^T = (0, 0, 1)^T for checking an error. This vector is identical to the fourth column of the parity check matrix H.
Thus, an error is detected at the fourth position of C′, and C′ is corrected by C′ ⊕ e4 = (1011010) = C, where ⊕ is the exclusive-or operation, and ei, the error pattern, is a unit vector of length n with a "1" located at the i-th position. If the syndrome vector is (0, 0, 0)^T, the receiver can conclude that no error has occurred.
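The syndrome-decoding procedure described above can be sketched in a few lines. The parity check matrix below (the i-th column is the binary representation of i) is an assumed but standard layout; the paper's own matrix may order rows differently.

```python
# Parity check matrix of the (7, 4) Hamming code; the i-th column is
# the 3-bit binary representation of i (an assumed, standard layout).
H = [[1, 0, 1, 0, 1, 0, 1],
     [0, 1, 1, 0, 0, 1, 1],
     [0, 0, 0, 1, 1, 1, 1]]

def syndrome(word):
    """Compute S = H * word^T over GF(2) as a 3-tuple."""
    return tuple(sum(h * w for h, w in zip(row, word)) % 2 for row in H)

def correct(word):
    """Locate and flip a single-bit error; return the corrected word."""
    s = syndrome(word)
    if s == (0, 0, 0):
        return list(word)          # no error detected
    # the syndrome equals the i-th column of H, so flip position i
    i = next(j for j in range(7) if tuple(row[j] for row in H) == s)
    fixed = list(word)
    fixed[i] ^= 1                  # word XOR e_i
    return fixed
```

Flipping any single bit of a valid code word is undone by `correct`, since every nonzero syndrome is a column of H.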

“Matrix Encoding”

In the matrix encoding scheme, a string of k bits m = (m1, m2, …, mk) is embedded into a group of n cover pixels (p1, p2, …, pn) by adding or subtracting one to or from at most one cover pixel, where n = 2^k − 1. First, the syndrome vector is calculated by S = m^T ⊕ H · x^T, where x = (LSB(p1), LSB(p2), …, LSB(pn)) and LSB(pi) denotes the least significant bit of the i-th pixel pi. H is the parity check matrix of the (n, n − k) Hamming code, T is the transpose operation, and ⊕ is the exclusive-or operation. Next, if the computed syndrome vector is (0, 0, …, 0)^T, the group of n marked pixels is set equal to (p1, p2, …, pn); otherwise, find the i-th column of H that is equal to the syndrome vector S, and obtain the group of n marked pixels by flipping LSB(pi), i.e. x′ = x ⊕ ei, where ei is a unit vector of length n with a "1" located at the i-th position. At the receiving side, a receiver can extract the original binary string from the received group by m^T = H · x′^T.
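A minimal sketch of matrix encoding for n = 7, k = 3 follows; the column layout of H is an assumption (any (7, 4) parity check matrix with distinct nonzero columns works the same way).

```python
# Matrix encoding sketch for the (7, 4) Hamming code (n = 7, k = 3):
# 3 message bits go into the LSBs of 7 pixels, changing at most one pixel.
H = [[1, 0, 1, 0, 1, 0, 1],
     [0, 1, 1, 0, 0, 1, 1],
     [0, 0, 0, 1, 1, 1, 1]]

def embed(pixels, m):
    """Embed 3 bits m into the LSBs of 7 pixels (matrix encoding)."""
    x = [p & 1 for p in pixels]
    # syndrome s = m XOR H * x^T
    s = tuple(mi ^ sum(h * b for h, b in zip(row, x)) % 2
              for mi, row in zip(m, H))
    out = list(pixels)
    if s != (0, 0, 0):
        i = next(j for j in range(7) if tuple(row[j] for row in H) == s)
        out[i] ^= 1                # flip the LSB of the i-th pixel
    return out

def extract(pixels):
    """Recover the 3 embedded bits: m = H * LSBs^T."""
    x = [p & 1 for p in pixels]
    return tuple(sum(h * b for h, b in zip(row, x)) % 2 for row in H)
```

At most one pixel changes per group, which is why the capacity is k/(2^k − 1) = 3/7 bpp at this distortion level.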

“Hamming+1”

Zhang et al. proposed the "Hamming+1" scheme (Zhang et al. 2007) to embed a string of (k + 1) secret bits (m1, …, mk, mk+1) into a group of (n + 1) cover pixels (p1, …, pn, pn+1), where n = 2^k − 1, by modifying at most one cover pixel, according to
(m1, …, mk)^T = H · (LSB(p1), …, LSB(pn))^T,   (1)
mk+1 = (⌊p1/2⌋ + ⌊p2/2⌋ + … + ⌊pn/2⌋ + pn+1) mod 2,   (2)
where H is the parity check matrix of the (n, n − k) Hamming code and T is the transpose operation. This means that the first k secret bits are embedded into the LSBs of the first n pixels by matrix encoding, and the last secret bit is carried by the parity function (2) of all (n + 1) cover pixels. The embedding rules proposed in Zhang et al. (2007) are as follows. If Eq. (1) does not hold, then pn+1 is kept unchanged, and one cover pixel pi (1 ≤ i ≤ n) is increased or decreased by one to make Eqs. (1) and (2) hold simultaneously. If (1) holds and (2) does not, the first n pixels are kept unchanged and the last cover pixel pn+1 is randomly increased or decreased by one. At the receiving side, a receiver can extract the first k secret bits by applying the extraction rule of the matrix encoding scheme, and the last secret bit by evaluating Eq. (2).
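The "Hamming+1" rules can be sketched as follows for n = 7, k = 3. The key point is that +1 and −1 both flip a pixel's LSB, but exactly one of them also flips the parity of ⌊p/2⌋, so one direction can always satisfy Eqs. (1) and (2) simultaneously. The matrix layout is an assumed standard choice.

```python
# "Hamming+1" sketch (n = 7, k = 3): 4 secret bits go into 8 pixels,
# changing at most one pixel by +/-1.
H = [[1, 0, 1, 0, 1, 0, 1],
     [0, 1, 1, 0, 0, 1, 1],
     [0, 0, 0, 1, 1, 1, 1]]

def check_bit(pixels):
    """Parity function of Eq. (2)."""
    return (sum(p // 2 for p in pixels[:7]) + pixels[7]) % 2

def embed(pixels, m):
    """Embed 4 bits m = (m1, m2, m3, m4) into 8 pixels."""
    x = [p & 1 for p in pixels[:7]]
    s = tuple(mi ^ sum(h * b for h, b in zip(row, x)) % 2
              for mi, row in zip(m[:3], H))
    out = list(pixels)
    if s == (0, 0, 0):             # Eq. (1) already holds
        if check_bit(out) != m[3]:
            out[7] += 1 if out[7] < 255 else -1   # fix Eq. (2) only
        return out
    i = next(j for j in range(7) if tuple(row[j] for row in H) == s)
    # +1 and -1 both flip LSB(p_i); exactly one also flips the parity
    # of floor(p_i/2), so pick the one that makes Eq. (2) hold.
    for delta in (1, -1):
        out[i] = pixels[i] + delta
        if 0 <= out[i] <= 255 and check_bit(out) == m[3]:
            return out
    return out

def extract(pixels):
    """Recover the 4 embedded bits from the marked group."""
    x = [p & 1 for p in pixels[:7]]
    first = tuple(sum(h * b for h, b in zip(row, x)) % 2 for row in H)
    return first + (check_bit(pixels),)
```

This illustrates why the capacity rises to (k + 1)/2^k = 4/8 = 0.5 bpp at the same cost of one unit change per group.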

“Nearest Code”

In the nearest covering code scheme (Chang and Chou 2008), all 128 possible combinations of seven bits are classified into eight sets G0, G1, …, G7. There are 16 elements in each set Gu, 0 ≤ u ≤ 7, and every element y ∊ Gu satisfies H · y^T = (u)2, the 3-bit binary representation of u, where H is the parity check matrix of the (7, 4) Hamming code and T is the transpose operation. Given a group of 7 cover pixels, let x = (LSB(p1), …, LSB(p7)). The covering code y with the nearest Hamming distance to x is selected from Gu according to the secret bits (d1d2d3), where the subscript u is the decimal value of (d1d2d3). Then the cover pixels are modified so that their least significant bits equal y. At the receiving side, a legal receiver can extract the original secret bits from the received group of 7 pixels by computing (d1d2d3)^T = H · x′^T.
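The classification step can be sketched directly: group all 128 seven-bit strings by syndrome, then pick the nearest element of the set selected by the secret bits. Because each set is a coset of a code with covering radius 1, the nearest element differs in at most one bit. The matrix layout is again an assumption.

```python
import itertools

# Build the eight sets G_0..G_7 of the nearest-code scheme: all 128
# strings of length 7, grouped by their syndrome under H.
H = [[1, 0, 1, 0, 1, 0, 1],
     [0, 1, 1, 0, 0, 1, 1],
     [0, 0, 0, 1, 1, 1, 1]]

def syndrome_value(bits):
    """H * bits^T as an integer 0..7 (first row = most significant bit)."""
    v = 0
    for row in H:
        v = (v << 1) | (sum(h * b for h, b in zip(row, bits)) % 2)
    return v

G = {u: [] for u in range(8)}
for bits in itertools.product((0, 1), repeat=7):
    G[syndrome_value(bits)].append(bits)

def embed(lsbs, d):
    """Replace the 7 LSBs with the element of G_u nearest in Hamming
    distance, where u is the decimal value of the 3 secret bits d."""
    u = 4 * d[0] + 2 * d[1] + d[2]
    return min(G[u], key=lambda y: sum(a != b for a, b in zip(lsbs, y)))
```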

The proposed scheme

In the proposed scheme, a secret binary string of length three is mapped to an error pattern of the (7, 4) Hamming code and can then be embedded into a group of cover pixels. The number of cover pixels in a group varies with the embedding payload.

The preparations

I is the cover image, sized H × W, and marked_I is the marked image with data D = {d1, …, dL} embedded, where di ∊ {0, 1}, 1 ≤ i ≤ L. H is a parity check matrix of the (7, 4) Hamming code. A string of binary bits (b1b2…b7) is the cover of a string of three data bits, and (b1′b2′…b7′) is the marked string of (b1b2…b7). pi is the i-th pixel in the cover image, pi′ is the i-th pixel in the marked image, and pi^j represents the j-th least significant bit of pixel pi. ER, the embedding rate, is calculated as
ER = L / (H × W).   (3)
Nn is the number of groups in which n (n = 1, 2, 3) cover pixels are used to embed a three-bit string, and Nn satisfies Formula (4):
3 × (N1 + N2 + N3) = ER × H × W,
N1 + 2N2 + 3N3 ≤ H × W.   (4)
The first equation of Formula (4) states that the number of embedded bits equals the number of bits the cover image must carry at the given embedding rate, and the second requires that the cover pixels needed do not exceed the pixels the cover image can provide. To modify the cover pixels as little as possible, Formula (4) is processed into Formula (5) based on the following considerations. The top-priority option, grouping three cover pixels together to embed a three-bit string, satisfies Formula (4) when 0 ≤ ER ≤ 1. When 1 < ER ≤ 1.5, grouping two cover pixels per string also satisfies Formula (4), but some pixels of the cover image would be left unused; instead, some secret strings are embedded into groups of three cover pixels and the others into groups of two, which causes less modification to the cover image than using two-pixel groups alone. Likewise, when 1.5 < ER ≤ 3, some strings are embedded into groups of two cover pixels and the others into groups of one. Therefore, the adaptive Nn are calculated by Formula (5), which contributes to minimizing the sum of the costs of all changed pixels:
N1 = 0, N2 = 0, N3 = ER × H × W / 3, when 0 ≤ ER ≤ 1;
N1 = 0, N2 = (ER − 1) × H × W, N3 = (1 − 2ER/3) × H × W, when 1 < ER ≤ 1.5;
N1 = (2ER/3 − 1) × H × W, N2 = (1 − ER/3) × H × W, N3 = 0, when 1.5 < ER ≤ 3.   (5)
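Under the allocation just described, Formula (5) amounts to solving Formula (4) with all cover pixels used in the mixed-rate ranges. A minimal sketch (group counts may be fractional for illustrative image sizes):

```python
def group_counts(er, height, width):
    """Adaptive group counts (N1, N2, N3) for a given embedding rate ER.
    A group of n pixels carries one 3-bit string, so n = 3 alone serves
    ER <= 1, a 3/2-pixel mix serves 1 < ER <= 1.5, and a 2/1-pixel mix
    serves 1.5 < ER <= 3."""
    bits = er * height * width          # total bits to embed
    pixels = height * width             # pixels available
    if er <= 1:
        return 0, 0, bits / 3
    if er <= 1.5:
        return 0, bits - pixels, pixels - 2 * bits / 3
    return 2 * bits / 3 - pixels, pixels - bits / 3, 0
```

For the paper's 3 × 3 example with ER = 2, this yields N1 = 3, N2 = 3, N3 = 0.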

The data embedding phase

All 128 binary strings of length seven are classified into eight sets G0, G1, …, G7. There are 16 elements in every set Gu, and every element y ∊ Gu satisfies H · y^T = (u)2, where 0 ≤ u ≤ 7. The embedding algorithms are as follows.
Algorithm 1: Internal embedding algorithm
Input: a cover string (b1b2…b7), a string of 3 data bits (d1d2d3)
Output: the marked string (b1′b2′…b7′)
Step 1: Find the string w in Gu that causes the least distortion when it replaces (b1b2…b7), where u is the decimal value of (d1d2d3);
Step 2: Set (b1′b2′…b7′) = w.
Algorithm 2: External embedding algorithm
Input: I, D
Output: marked_I
Step 1: Calculate ER according to Eq. (3) and N1, N2, N3 by Formula (5);
Step 2: For each of the N1 one-pixel groups, form the cover string (b1b2…b7) from the least significant bits of the pixel, call Algorithm 1 to get (b1′b2′…b7′), and write the marked bits back to the pixel;
Step 3: For each of the N2 two-pixel groups, form the cover string from the least significant bits of the two pixels, call Algorithm 1, and write the marked bits back;
Step 4: For each of the N3 three-pixel groups, form the cover string from the least significant bits of the three pixels, call Algorithm 1, and write the marked bits back.
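Algorithm 1 can be sketched for the single-pixel case (the N1 groups). Here the 7 cover bits are taken to be the pixel's 7 least significant bits, and the replacement is the element of G_u that changes the pixel value the least; the exact bit-to-pixel mapping per group size is an assumption for illustration, and only the set-selection step is taken from the scheme itself.

```python
# Illustrative sketch of Algorithm 1 for a single-pixel group.
H = [[1, 0, 1, 0, 1, 0, 1],
     [0, 1, 1, 0, 0, 1, 1],
     [0, 0, 0, 1, 1, 1, 1]]

def syndrome_value(bits):
    v = 0
    for row in H:
        v = (v << 1) | (sum(h * b for h, b in zip(row, bits)) % 2)
    return v

# Classify the 7-bit values 0..127 into eight sets by syndrome.
G = {u: [] for u in range(8)}
for value in range(128):
    bits = [(value >> (6 - j)) & 1 for j in range(7)]
    G[syndrome_value(bits)].append(value)

def embed_pixel(pixel, d):
    """Embed 3 bits d into one pixel by rewriting its 7 LSBs with the
    element of G_u that minimizes the change in pixel value."""
    u = 4 * d[0] + 2 * d[1] + d[2]
    low = pixel & 0x7F
    best = min(G[u], key=lambda v: abs(v - low))
    return (pixel & ~0x7F) | best

def extract_pixel(pixel):
    """Recover the 3 embedded bits as the syndrome of the 7 LSBs."""
    bits = [(pixel >> (6 - j)) & 1 for j in range(7)]
    u = syndrome_value(bits)
    return ((u >> 2) & 1, (u >> 1) & 1, u & 1)
```

Choosing the minimum-value-change element, rather than the minimum Hamming-distance one, is what keeps the distortion low when higher bit planes are involved.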

Example: data embedding

An example is now given to demonstrate the embedding phase of the proposed scheme. Suppose that I is a grayscale image with H × W = 3 × 3, shown in Fig. 1, and the data to be embedded is D = {d1, d2, …, d18}.
Fig. 1

Cover image I

Step 1: Calculate ER = L / (H × W) = 18/9 = 2, then work out N1 = 3, N2 = 3, N3 = 0 by Formula (5). Step 2: From the cover image, p1 = 162. Form the cover string from the least significant bits of p1 and call Algorithm 1 to embed (d1d2d3), which gives p1′. Repeat Step 2 (N1 − 1) more times to embed (d4d5d6) into p2 = 164 and (d7d8d9) into p3 = 165. Step 3: Group (p4, p5) and form the cover string from their least significant bits; call Algorithm 1 to embed (d10d11d12), which gives (p4′, p5′). Repeat Step 3 (N2 − 1) more times to embed (d13d14d15) into (p6, p7) = (155, 164) and (d16d17d18) into (p8, p9) = (150, 152). Finally, we get the marked image marked_I shown in Fig. 2.
Fig. 2

Marked-image marked_I


The data extracting phase

Algorithm 3: Data extracting algorithm
Input: marked_I, ER
Output: D
Step 1: Calculate N1, N2, N3 by Formula (5) according to ER.
Step 2: For each of the N1 one-pixel groups, form the marked string (b1′b2′…b7′) from the least significant bits of the pixel and recover 3 data bits as the syndrome H · (b1′b2′…b7′)^T.
Step 3: For each of the N2 two-pixel groups, form the marked string from the least significant bits of the two pixels and recover 3 data bits in the same way.
Step 4: For each of the N3 three-pixel groups, form the marked string from the least significant bits of the three pixels and recover 3 data bits in the same way.

Example: data extracting

Suppose the receiver receives the marked image sized H × W = 3 × 3 shown in Fig. 2 and knows that the embedding rate (ER) is 2 bpp. Step 1: Work out N1 = 3, N2 = 3, N3 = 0 by Formula (5). Step 2: Form the marked string from the least significant bits of p1′ and compute its syndrome to recover (d1d2d3). Repeat Step 2 (N1 − 1) more times to extract the bits embedded in p2′ and p3′. Step 3: Form the marked string from the least significant bits of (p4′, p5′) and compute its syndrome to recover (d10d11d12). Repeat Step 3 (N2 − 1) more times to extract the bits embedded in (p6′, p7′) and (p8′, p9′).

Experiment results

To evaluate the performance of the proposed scheme, we simulate the "Matrix Encoding" (Crandall 1998), the "Hamming+1" (Zhang et al. 2007), the "Nearest Code" (Chang and Chou 2008) and the proposed scheme in MATLAB R2014a. Standard grayscale test images sized 512 × 512 are used in the simulations, as shown in Fig. 3.
Fig. 3

The nine test images


Preprocessing

To make the comparison objective and fair, the embedding capacities of "Matrix Encoding" (Crandall 1998), "Hamming+1" (Zhang et al. 2007) and "Nearest Code" (Chang and Chou 2008) are also enhanced by extending from the least significant bit to several LSB planes. "Matrix Encoding" is extended as follows: every 3-bit string is embedded into (p1^i, p2^i, …, p7^i), composed of the i-th least significant bits of 7 pixels (1 ≤ i ≤ 7), by the matrix encoding method; the embedding capacity of the extended "Matrix Encoding" method thus becomes 7 × 3/7 = 3 bpp. The "Hamming+1" scheme is extended as follows: every 4-bit string is embedded into the bits composed of the (2i − 1)-th and 2i-th least significant bits of 8 pixels (1 ≤ i ≤ 4), using the "Hamming+1" method; the embedding capacity of the extended "Hamming+1" scheme thus becomes 4 × 4/8 = 2 bpp. Likewise, "Nearest Code" is extended by embedding every 3-bit string into the i-th least significant bits of 7 pixels (1 ≤ i ≤ 7), making the embedding capacity of the extended "Nearest Code" method 3 bpp. To compare fairly with the related works, the same method used in obtaining Formula (5) is applied to make the extended "Matrix Encoding", "Hamming+1" and "Nearest Code" adaptive to the payload: the lower bit planes are filled first, and Ni, the number of groups of data bits embedded in the i-th plane, is chosen so that the total number of embedded bits matches the payload.

Image quality

PSNR (peak signal-to-noise ratio) is widely used to measure the quality of marked images by calculating the difference between the marked image and the cover image. It is defined as
MSE = (1 / (H × W)) × Σi Σj (I(i, j) − marked_I(i, j))²,
PSNR = 10 × log10(255² / MSE) dB.
These equations show that the smaller the difference between the marked image and the cover image, the greater the PSNR value. In general, if a marked image has a PSNR value greater than 30 dB, its distortion is hard for human eyes to detect. Tables 1, 2, 3 and 4 show the PSNR values of marked images generated by the different methods at several payloads: ER = 1 bpp, ER = 1.5 bpp, ER = 2 bpp and ER = 3 bpp. The data in the tables are the mean values of ten independent experiments, and the embedded data bits are generated randomly. The tables show that the PSNR values of the proposed scheme are higher than those of the related works, indicating that the marked-image quality of the proposed scheme is superior under the same payload.
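The PSNR definition above can be computed directly for 8-bit grayscale images:

```python
import math

def psnr(cover, marked):
    """PSNR = 10*log10(255^2 / MSE) for 8-bit images given as nested
    lists of rows; higher means smaller embedding distortion."""
    diffs = [(a - b) ** 2
             for ra, rb in zip(cover, marked)
             for a, b in zip(ra, rb)]
    mse = sum(diffs) / len(diffs)
    if mse == 0:
        return float("inf")        # identical images
    return 10 * math.log10(255 ** 2 / mse)
```

For example, two images differing by exactly 1 at every pixel have MSE = 1 and PSNR = 10·log10(255²) ≈ 48.13 dB.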
Table 1

The PSNR comparison of different methods with ER = 1 bpp

Method            Lena   Baboon  Man    Tiffany  Peppers  Boat   Jet    Sailboat  Splash
Matrix encoding   47.02  47.02   47.03  47.02    47.02    47.01  47.04  47.01     47.03
Nearest code      47.02  47.02   47.01  47.03    47.02    47.01  47.04  47.01     47.03
Hamming+1         45.14  45.14   45.01  45.10    45.14    45.14  45.14  45.14     45.13
Proposed scheme   51.14  51.14   51.14  51.15    51.14    51.14  51.15  51.14     51.14
Table 2

The PSNR comparison of different methods with ER = 1.5 bpp

Method            Lena   Baboon  Man    Tiffany  Peppers  Boat   Jet    Sailboat  Splash
Matrix encoding   39.90  39.92   39.92  39.93    39.92    39.94  39.93  39.93     39.88
Nearest code      39.90  39.91   39.92  39.93    39.91    39.94  39.93  39.93     39.88
Hamming+1         33.08  33.10   32.73  32.77    33.04    33.05  33.08  33.09     33.01
Proposed scheme   46.37  46.37   46.37  46.38    46.37    46.36  46.38  46.37     46.37
Table 3

The PSNR comparison of different methods with ER = 2 bpp

Method            Lena   Baboon  Man    Tiffany  Peppers  Boat   Jet    Sailboat  Splash
Matrix encoding   33.10  33.08   33.06  33.09    33.05    33.10  33.10  33.06     33.18
Nearest code      33.11  33.07   33.06  33.09    33.06    33.10  33.10  33.07     33.18
Hamming+1         20.62  20.85   20.25  19.78    20.63    20.70  19.98  20.27     20.54
Proposed scheme   41.61  41.60   41.62  41.57    41.60    41.58  41.67  41.59     41.62
Table 4

The PSNR comparison of different methods with ER = 3 bpp

Method            Lena   Baboon  Man    Tiffany  Peppers  Boat   Jet    Sailboat  Splash
Matrix encoding   19.80  19.77   19.63  19.87    19.89    19.70  20.07  20.09     19.82
Nearest code      19.80  19.77   19.64  19.87    19.89    19.69  20.08  20.09     19.81
Hamming+1         -      -       -      -        -        -      -      -         -
Proposed scheme   37.92  37.92   37.92  37.91    37.92    37.92  37.98  37.89     37.94
The PSNR-ER comparison results for Lena and Baboon are shown in Figs. 4 and 5. From the figures, the PSNR values of the proposed scheme are slightly lower than those of the extended "Matrix Encoding", "Hamming+1" and "Nearest Code" schemes when the embedding rate is relatively small, but as the embedding rate grows, the PSNR values of the proposed scheme become significantly higher than those of the other methods. Note that the curves of the extended "Matrix Encoding" and the extended "Nearest Code" overlap completely, because both methods embed three bits by modifying one bit; therefore, only the results of the extended "Matrix Encoding" scheme are shown in the subsequent experiments.
Fig. 4

PSNR-ER comparison of Lena

Fig. 5

PSNR-ER comparison of Baboon

Taking Lena and Baboon as examples, the marked images of the extended "Matrix Encoding", the extended "Hamming+1" and the proposed scheme under different payloads are shown in Figs. 6 and 7. There is no distinct difference between the marked images when the embedding rate is 3/7 bpp. When the embedding rate rises to 2 bpp, spots are easily visible on the marked image of the extended "Hamming+1" scheme, while hardly any spot is visible on the marked image of the proposed scheme. The same observation holds between the proposed scheme and the extended "Matrix Encoding" scheme at ER = 3 bpp.
Fig. 6

Marked-images of Lena under various payloads. a ER = 3/7 bpp. b ER = 2 bpp. c ER = 3 bpp

Fig. 7

Marked-images of Baboon under various payloads. a ER = 3/7 bpp. b ER = 2 bpp. c ER = 3 bpp


Security analysis

Security is a significant concern for data hiding. Many steganalysis methods use statistical tools to analyze the pixel value distribution of a suspicious image in order to detect secret message delivery. From this point of view, we compare the pixel histograms of the test cover images and the marked images to assess the security of the data hiding methods. Taking the smooth-content image Lena and the complex-content image Baboon as examples, the pixel histograms generated by the extended "Matrix Encoding" scheme, the extended "Hamming+1" scheme, and the proposed scheme at high payloads are shown in Fig. 8. From Fig. 8, the pixel histogram of the marked image generated by the proposed scheme is closer to that of the original image than those of the extended "Matrix Encoding" and extended "Hamming+1" schemes, which demonstrates that the security performance of the proposed scheme is better.
Fig. 8

The pixel histogram analysis comparison of Lena and Baboon. a "Matrix Encoding" of Lena with ER = 2 bpp. b "Hamming+1" of Lena with ER = 2 bpp. c "Matrix Encoding" of Lena with ER = 3 bpp. d "Matrix Encoding" of Baboon with ER = 2 bpp. e "Hamming+1" of Baboon with ER = 2 bpp. f "Matrix Encoding" of Baboon with ER = 3 bpp


Conclusions

Based on the (7, 4) Hamming code, a novel high capacity data hiding scheme is proposed. Cover pixels are matched adaptively to embed data according to the embedding payload. Compared with the related works, the image quality under a high payload is improved significantly while the visual quality under a low payload is maintained. Because pixel matching is used, a seed can also be used to match pixels and further improve security. Moreover, this method is not limited to grayscale images; it can also be applied to color images, compressed images, audio, video and other digital media. Future work includes investigating this scheme with other error correcting codes and further improving the data embedding efficiency.