2020-08-06

We generate six versions of each benchmark: the original version and the versions hardened by CFCSS, ECCA, SCFC, GTCFC, and BGCFC.

Table 1 shows the comparison results of the time and memory overhead for each version of each benchmark. Note that the results of MM and FFT hardened by CFCSS are much lower than those reported in the original paper. The ADuC841 lacks a multiplier while MM and FFT rely heavily on multiplication, so the baseline execution time and memory size of these two benchmarks are already large, whereas the MIPS processor adopted in the original paper contains a multiplier. The checking instructions inserted by CFCSS contain only simple XOR and AND operations and add little to the execution time and memory size, so the relative increase in both drops accordingly and CFCSS shows both low time and low memory overhead. SCFC incurs large time and memory overhead because the number of basic blocks N exceeds 8, the word length of the ADuC841, so a single bitmap operation has to be split into multiple operations. ECCA causes a large performance loss due to the numerous multiplications in its checking instructions, which is most obvious in IS and QS because these two benchmarks contain no multiplication themselves; the results for MM and FFT look better only because both the benchmarks and the checking instructions contain many multiplications, which pulls the ratios down. The time overhead of GTCFC is slightly larger than that of CFCSS because of slow indirect addressing and the extra cost of virtual basic blocks, but its memory overhead is significantly larger due to its linear memory space complexity. BGCFC is the only technique comparable with CFCSS in both time and memory overhead.
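The difference between the cheap CFCSS checks and the costlier SCFC bitmap checks on an 8-bit core can be illustrated with a small sketch. The C code below is only a simplified illustration, not the exact algorithms of the papers: the function names, signature values, and bitmap layout are assumptions made for this example.

    /* Simplified sketches of the per-block checking work discussed above.
       Illustrative only; the real CFCSS/SCFC instrumentation is generated
       at compile time and differs in detail. */
    #include <stdint.h>
    #include <stdio.h>

    static void Error(void) { puts("control-flow error detected"); }

    /* CFCSS-style check: one XOR to update the run-time signature G and one
       comparison with the current block's compile-time signature s_i. On an
       8-bit core this stays a few byte-wide operations, which is why CFCSS
       keeps both time and memory overhead low. */
    static uint8_t G;                          /* run-time signature */

    static void cfcss_check(uint8_t d_i, uint8_t s_i)
    {
        G ^= d_i;                              /* d_i = s_pred XOR s_i */
        if (G != s_i)
            Error();
    }

    /* SCFC-style check: legal transitions are encoded in a bitmap. With N
       basic blocks and an 8-bit word, the bitmap spans ceil(N/8) bytes, so
       one "is bit k set?" test turns into an index computation, a byte load,
       and a mask test: several instructions instead of one once N > 8. */
    #define N_BLOCKS      24                   /* example with N > 8 */
    #define BITMAP_BYTES  ((N_BLOCKS + 7) / 8)

    static void scfc_check(const uint8_t bitmap[BITMAP_BYTES], uint8_t block_id)
    {
        uint8_t byte_index = (uint8_t)(block_id >> 3);
        uint8_t bit_mask   = (uint8_t)(1u << (block_id & 7u));
        if ((bitmap[byte_index] & bit_mask) == 0)
            Error();                           /* transition not allowed */
    }

    int main(void)
    {
        static const uint8_t legal[BITMAP_BYTES] = { 0x06, 0x00, 0x00 };
        cfcss_check(0x00, 0x00);               /* signatures match: no error */
        scfc_check(legal, 5);                  /* bit 5 not set: Error() fires */
        return 0;
    }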
Table 2 shows the comparison results of the error detection rate of each software-based CFC technique. Each version of each benchmark is injected with 10,000 errors. Even though the injected errors are randomized regardless of their classification, the large number of injections guarantees that all types of CFEs appear over the course of the tests. In a real aerospace environment, SEUs also flip bits randomly, so this test is a good simulation of the real environment. The results are obtained from the counter in the CPLD. They show that BGCFC detects over 93% of all injected errors; the few undetected errors are data errors or intra-node CFEs, which BGCFC does not detect explicitly.

In the results, BGCFC, SCFC, and GTCFC achieve high error detection rates, the detection rate of ECCA is variable, and CFCSS shows the worst result. The reasons are as follows. For ECCA, the memory size of the inserted checking instructions exceeds that of the original code; because of the randomness, bit flips are far more likely to occur in the checking instructions than in the original code, so its result is unstable. CFCSS lacks checking instructions at the end of basic blocks, and its XOR operation causes an aliasing problem for basic blocks with multiple predecessors (called branch-fan-in nodes in CFCSS), so its result is poor. Theoretically, the error detection capabilities of BGCFC, SCFC, and GTCFC are the same; however, BGCFC, which is proposed in this paper, achieves the highest error detection rate, for two reasons. First, the memory overhead of BGCFC is the lowest of the three, so, because of the randomness, bit flips are less likely to land in its inserted checking instructions than in those of the others; hence, its error detection rate is the highest. Second, BGCFC fills the unused memory space with "call Error()" instructions while the others do not, which further increases the error detection rate (sketched below). SCFC performs somewhat better than GTCFC for a similar reason: its memory overhead is smaller, and its checking instructions in the middle of basic blocks help to raise the error detection rate. Therefore, we can conclude that even though the error detection rates are the same in theory, in practice the result is inversely proportional to the memory overhead.
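As a rough illustration of the unused-memory fill mentioned above, the sketch below builds a code-memory image on the host and fills the space behind the program with repeated "call Error()" patterns, assuming an 8051-style LCALL encoding (opcode 0x12 followed by the 16-bit target address, high byte first). The image size, the handler address, and the simple 3-byte fill are assumptions for this example; the actual BGCFC tooling may implement the fill differently and must also consider stray jumps that land in the middle of a 3-byte pattern.

    /* Conceptual host-side sketch: fill unused code memory with LCALL Error()
       so that a control-flow error jumping into unused space invokes the
       error handler. Sizes and addresses are illustrative assumptions. */
    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    #define FLASH_SIZE  0x2000u        /* illustrative code-memory size */
    #define ERROR_ADDR  0x1F00u        /* assumed address of Error() */

    static void fill_unused(uint8_t image[], uint32_t code_end)
    {
        const uint8_t lcall_error[3] = {
            0x12,                           /* LCALL opcode */
            (uint8_t)(ERROR_ADDR >> 8),     /* target address, high byte */
            (uint8_t)(ERROR_ADDR & 0xFFu)   /* target address, low byte */
        };
        for (uint32_t addr = code_end; addr + 3u <= FLASH_SIZE; addr += 3u)
            memcpy(&image[addr], lcall_error, sizeof lcall_error);
    }

    int main(void)
    {
        static uint8_t image[FLASH_SIZE];
        memset(image, 0xFF, sizeof image);   /* erased-flash value */
        fill_unused(image, 0x1200u);         /* assume the program ends at 0x1200 */
        printf("byte at 0x1500: 0x%02X\n", (unsigned)image[0x1500]);  /* 0x12, an LCALL opcode */
        return 0;
    }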