The optimization of communication systems that contain an iterative structure somewhere along the transmission chain, in the particular case studied here within the receiver, is a delicate process. The main intention is to match the behavioral peculiarities of the individual components so that their interaction yields a performance gain. However, optimizing the components with regard to each other usually pays off only when specific prerequisites are met, which often leads to trade-offs if the system has to work under various conditions. Hence, system designers are interested in methods that enable a straightforward match while covering a broader set of environments.
With the rise of the so-called Low-Density Parity-Check codes (LDPCC), a very powerful class of forward error correction codes has entered almost every recent wireless communication standard. Although these codes have widely known benefits, the researcher's desire for improvement (e.g., reducing the gap to capacity or reducing complexity) is without limits. This eventually leads to the application of paradigms previously established for other codes to LDPCC and to the introduction of LDPC Convolutional Codes (LDPCCC).
This thesis tackles the issue of straightforwardly matching a decoder to an equalizer in a so-called turbo receiver by utilizing a particular type of LDPCCC. This kind of code is derived from well-studied structural templates of block code variants, named protographs, through a multi-step derivation process. The steps involved are studied with respect to the parameters affecting the code and its performance. As a first step, the basic protograph is unwrapped into a composition graph that reflects the structural distinctiveness of convolutional codes, namely a banded parity-check matrix. Here, the syndrome former memory and the termination length are the tunable variables, and their effects partially counteract each other. An adaptation of the code to the equalizer is then feasible, but only within certain limits. The extrinsic information gained for small to medium amounts of a-priori information can be raised by increasing the syndrome former memory, because this introduces lower-degree nodes that provide higher reliability. In turn, enlarging the composition graph, which is referred to as the Terminated Convolutional Protograph (TCPG), lowers this gain by installing higher-degree nodes between the lower-degree nodes at the ends of the graph; technically speaking, the termination length of the TCPG increases. However, larger termination lengths help reduce the rate loss induced by the addition of the low-degree nodes. The recommended parameter set therefore depends on the intended operating point.
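To make the interplay between syndrome former memory and termination length concrete, the following minimal Python sketch unwraps a toy block protograph into a terminated convolutional base matrix and prints the resulting design rate. The edge-spreading split, the (3,6)-regular toy protograph, and all identifier names are illustrative assumptions, not the constructions used in the thesis.

    import numpy as np

    def terminate_protograph(components, L):
        """Unwrap component matrices B_0..B_ms (an edge-spreading split of the
        block protograph B) into the terminated convolutional protograph of
        termination length L: a ((L+ms)*bc) x (L*bv) base matrix."""
        ms = len(components) - 1              # syndrome former memory
        bc, bv = components[0].shape          # protograph check / variable node counts
        B_term = np.zeros(((L + ms) * bc, L * bv), dtype=int)
        for t in range(L):                    # time slot of each column block
            for i, Bi in enumerate(components):
                B_term[(t + i) * bc:(t + i + 1) * bc, t * bv:(t + 1) * bv] = Bi
        return B_term

    # Toy (3,6)-regular protograph B = [[3, 3]], split over memory ms = 1.
    B0 = np.array([[2, 1]])
    B1 = np.array([[1, 2]])
    for L in (4, 8, 16):
        rows, cols = terminate_protograph([B0, B1], L).shape
        # Design rate R_L = 1 - (L+ms)*bc / (L*bv): the low-degree termination
        # nodes cost rate, and the loss vanishes as L grows.
        print(L, 1 - rows / cols)             # 0.375, 0.4375, 0.46875

The printed rates illustrate the trade-off described above: a short termination keeps the helpful low-degree nodes dominant but costs rate, while a long termination recovers the rate of the underlying block protograph.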
In the next step, the TCPG is lifted to the final code, named Protograph-based Low-Density Parity-Check Convolutional Code (PG-LDPCCC), by duplicating the nodes and permuting the edges. Here, the size of these permutations is important since, together with the two variables mentioned before, it determines the final block size. Eventually, the system designer has a set of conditions to be satisfied and a set of tools for creating codes that span a design space, and has to choose from it. For such a code design challenge, ensemble maps are introduced as an overview tool.
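As a hedged illustration of the lifting step, the sketch below (same assumptions and naming caveats as above; the thesis's actual permutation selection is not reproduced) copies every node of a small terminated base matrix Z times and replaces each protograph edge by a circulant permutation.

    import numpy as np

    def lift_base_matrix(B, Z, rng=np.random.default_rng(0)):
        """Lift a terminated protograph base matrix: copy every node Z times and
        replace each protograph edge by a randomly shifted Z x Z circulant."""
        rows, cols = B.shape
        H = np.zeros((rows * Z, cols * Z), dtype=int)
        I = np.eye(Z, dtype=int)
        for r in range(rows):
            for c in range(cols):
                if B[r, c] == 0:
                    continue
                # distinct shifts so parallel protograph edges do not cancel mod 2
                for s in rng.choice(Z, size=B[r, c], replace=False):
                    H[r*Z:(r+1)*Z, c*Z:(c+1)*Z] ^= np.roll(I, int(s), axis=1)
        return H

    # Terminated base matrix for bc = 1, bv = 2, ms = 1, L = 3 (as built above):
    B_term = np.array([[2, 1, 0, 0, 0, 0],
                       [1, 2, 2, 1, 0, 0],
                       [0, 0, 1, 2, 2, 1],
                       [0, 0, 0, 0, 1, 2]])
    H = lift_base_matrix(B_term, Z=32)
    print(H.shape)    # (128, 192): block length n = Z * L * bv = 32 * 3 * 2

The block length n = Z * L * bv makes explicit why the lifting size, together with the syndrome former memory and the termination length, spans the design space that the ensemble maps are meant to survey.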
Furthermore, the opportunity to adapt to certain prerequisites is extended by studying puncturing methods for the special case of PG-LDPCCCs. Since the derivation process is multi-tiered, puncturing can be applied to a different code representation at each stage. In light of this multi-tiered challenge, the error probability distribution over the graph is examined, in particular its fixed-point behavior. A modification of the extrinsic transfer curve is also feasible through puncturing, but the influence of the puncturing scheme on the transfer curve remains very limited.
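For a sense of the rate bookkeeping only, the sketch below (illustrative names and parameters, not the puncturing schemes analyzed in the thesis) shows the granularity of puncturing at the TCPG level, where removing one variable-node type silences all Z lifted bits of the affected time slots.

    def punctured_rate(n, k, punctured):
        """Design rate after excluding the punctured code bits from transmission."""
        return k / (n - len(punctured))

    # Illustrative PG-LDPCCC parameters (full-rank parity checks assumed).
    Z, L, bv, bc, ms = 32, 8, 2, 1, 1
    n = Z * L * bv                      # transmitted bits of the mother code
    k = n - Z * (L + ms) * bc           # information bits
    # TCPG-level puncturing: drop variable-node type 0 in every other time slot,
    # i.e. Z lifted bits per punctured protograph node.
    punct_nodes = [t * bv for t in range(0, L, 2)]
    punct_bits = [v * Z + i for v in punct_nodes for i in range(Z)]
    print(k / n)                            # mother code rate: 0.4375
    print(punctured_rate(n, k, punct_bits)) # punctured rate:   ~0.583

Puncturing the lifted code instead permits bit-level patterns within each Z-fold copy, which is exactly where the multi-tiered derivation opens different options at different stages.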