One of the core ideas of non-coherence theory is that human rights standards offline appear at variance with their online image. Human rights standards change in the course of transposition: the online image of offline standards may be widely distorted, depending on the subject, or may exhibit only minor variance. Among the social science theories that treat values and expectations as relative once a theory is constructed on the basis of legal subjecthood, the capabilities approach appears to have some similarities with non-coherence theory. Two aspects are of main interest here. The first relates to the dependency of the normative standards applied to a legal subject upon this subject’s capabilities. The second concerns how the element of normative reasonability is constructed. The first aspect is positivist and the second is purely non-positivist. The unclear causal chains between capability, reasonability and subsequent activity are the weakest elements of the capabilities approach.
Non-coherence theory holds that in order to speak of legal certainty in the digital domain, a domain-specific concept of justice has to be developed. But this already leads to the far-reaching conclusion that different ideas of justice can exist, a phenomenon of relativity reaching to the very foundations of human rights systems. A commonality would therefore need to be found between the concepts of legal certainty and predictability. Such commonality can be found in the functions of legal certainty and predictability, where the outcome of a judicial or quasi-judicial assessment assumes the existence of standards which are constant. Pursuing this commonality leads to what is here termed the thesis of immunity to time. We observe a specific phenomenon of non-coherence, a simplification and a move towards thinness, because predictability can, by and large, be considered legal certainty without qualitative elements such as the ideas of justice or equality.
The epistemic human rights dimension concerns how knowledge about human rights issues is generated, articulated in discourse and normativity, and finally enforced in practice. According to the Aristotelian theory of epistemology, this represents the side of subjective thought about objective reality. Within the non-coherence approach, the fundamental question is whether the methods of knowledge creation, articulation and enforcement concerning human rights issues in the digital realm remain similar to those in the offline realm, or whether distortion exists. Two concerns emerge, which will be decisive for some conclusions about human rights knowledge creation in the digital domain through the non-coherence lens. The first is whether the digital domain has any interest in claiming normative validity for human rights rules in the first place; the second is whether the meaning of truth in the digital and non-digital domains shares similar features.
The academic debate which emerged after the first introduction of the Internet Balancing Formula (IBF) concerns either the formula directly, or matters closely connected to the balancing of conflicting rights by private online portals. The article reviews some positions and offers alternative views. In general, capabilities variance leads to an image of weakened or absent legitimacy of online content assessment only through the lens of the offline assessment framework, that is, when one wishes to ask the same questions and apply similar tools in both domains. Legitimacy in the online domain may, however, originate from enforced consent, or it may have a default and undefined origin, which may lead to questioning the suitability of legitimacy language online and its overall scope.
Since the digital space enables close to complete transparency, and perpetuates that transparency through data storage and not forgetting, it could be said that human rights law measures in the digital domain enable the highest possible transparency and must therefore be given a quality label. This in turn means that almost anything which the private domain puts forward as self-normativity, and any practice implemented to enhance transparency, would without hesitation be viewed as a positive feature. Digital transparency here becomes a shield against any critique of self-normativity and non-accountability. The higher the degree of supposed transparency and openness in the digital domain, the stronger the implication that many features remain hidden, because under conditions of ideal transparency the term loses its meaning and can only be defined in combination with its opposite. Complete or close to complete transparency has to mean the simultaneous existence of complete or close to complete non-transparency. Since the immanent feature of complete digital transparency is positivity on the surface, there is also non-transparent negativity beneath the surface.
The question of legitimation, according to the analysis of the Frankfurt scholars, contains an unanswered aspect pertaining to the entire normative approach: who is the final arbiter deciding whether a regulatory system can claim legitimacy? It can be logically suggested that such a final arbiter needs to be positioned outside the normative system in question. Once we agree that the natality aspect of fundamental rights is lost in the transfer from the non-digital domain, we see that the Frankfurt school’s aspiration towards general explanatory power is lost as well. The relational chain stops after the creation of the digital narrative; that is, normativity does not follow as a logical result of internal development inside the digital domain. Despite the relational chain being broken, normativity exists in and for the digital domain. If it is not a result of the development proposed by the Frankfurt school, there have to be other explanatory venues. Chief among these is the concept of self-normativity.
The chapter responds to Robert Alexy’s critique of the Internet Balancing Formula (IBF). Non-acceptance of Alexy’s critique originates from the rejection of the proposition that at any given time only one abstract weight of a particular fundamental right may exist. On the contrary, multiple abstract weights of one fundamental right can exist at any given time, for the following reasons. The first reason concerns the source of the abstract weight of a fundamental right, that is, whether it is determined through explication or originates from the ideal dimension of human rights. The second reason relates to the relativity and interdependence of the terms abstract weight and intensity of interference. The third reason for disagreement concerns the general aspect of human rights development. At the present stage of the IBF, the aspect of empathy should be applied in stalemate or otherwise difficult cases. For the purposes of the IBF, the element of empathy can be labelled the irrationality thesis.
Network society theory stipulates that the concrete content of judicial concepts and legal norms is derived through the network. This means that the network-given meaning may or may not coincide with the original meaning attributed to these concepts and norms at the time of their genesis, when they were coined. The main objection against applying the network approach to digital human rights is the loss of the claim to legitimacy in the course of regulation. This objection appears in different versions, which at first sight are polarised and irreconcilable. The first version claims that the legitimacy argument is lost in networks; the second claims, to the contrary, that networks are capable of providing human rights legitimacy. Fukuyama has suggested trust as a characterising feature of networks. The transposition of the element of trust from offline to online networks deprives trust of its original semantic meaning, since trust would then not be something which has to be earned but something which is given. Non-coherence theory explains that this is what happens when concepts are automatically taken from the offline realm and planted in the online one.
There are two fundamentally related perceptions, religious in nature, of the e-state from the perspective of fundamental rights. The first hails e-state positivity by suggesting that public power draws closer to the people because this framework provides more efficient, transparent and neutral procedures and outcomes. Here fundamental rights would appear to be better safeguarded, since e-solutions minimise arbitrariness. The opposite perception suggests obscurity and distrust despite the apparent efficiency and neutrality. Four caveats appear in relation to the image of the e-state through the lens of non-coherence theory: the dominance of human rights rhetoric, the questionability of the success of the e-state, the expectation of horizontality, and the potential road to a police state.
The idea of non-coherence theory originates from the distorted image of well-established human rights in digital settings. This distorted image appears in various ontological and epistemic aspects. It reveals an absence of clarity on whether human rights rules and principles, the possibility of their realisation, and the related obligations and remedies against violations as established in the offline world (human rights law and practice as we know it) continue to exist online with or without variance. If variance exists, what are its degree and consequences? Does it amount to distortion, and if so, does its degree call into question the feasibility, or limit the scope, of the transposability of offline human rights law and practice to the online domain? The ambition of the non-coherence theory of digital human rights therefore lies, among many other characteristics, in providing a conceptual framework for understanding the implications of the co-existence of multiple internet governance models.
The idea that human rights offline and online are the same belongs to the sphere of human rights religious reasoning. As such, it is a representation of faith located far above the reach of explicative justification. Religious justifications as a rule remain immune to any contrary argumentation. The correctness of the sameness statement seems to have remained unquestioned since its appearance. The doubt about generality allows us to say that the statement of the sameness of human rights in the digital and non-digital domains is true at the highest level of generality and abstractness. At that level human rights in these domains are coherent, and non-coherence becomes more and more apparent in the image of digital human rights as we progress towards increasing concreteness. At some point non-coherence disappears, when human rights specific to the digital domain emerge which are not a reflection of a comparable right offline.
Objections against digital self-normativity primarily concern whether a moral dimension is embedded in the normative function of algorithms, and the increase in predictive power connected to the automatic implementation of norms. These matters concern secondary-level rules of implementation and practice but are often thought to reflect the moral dimension of digital primary norms. There appears to be no comparable continuum between self-made private rules and the international or domestic legal instruments governing digital human rights. I term this absence the idealism abyss; that is, the idealistic nature inherent in human rights articulated by positive legal instruments does not carry over uninterrupted into the self-normativity of digital agents. Once the self-normativity of digital private enterprises becomes justified, the idealism abyss leads to the necessity of self-constitutionality. In this case, primary and secondary self-regulation form one logical structure. The rejection of the idealism abyss yields an image in which self-made secondary norms rely on primary- (constitutional-) level norms originating from the non-digital realm, but whose content may have changed in the course of transposition.
In this chapter, we present an overview of current knowledge about learners’ use and understanding of connectives. In the first part of the chapter, we will see that connectives are notoriously difficult for second language learners to master, because they require an array of complex competences. Learners must know how to use them appropriately in various genres and registers, have a fine-grained understanding of the meaning differences between connectives used to convey similar coherence relations, and also automatize this knowledge so that it is activated automatically during discourse processing, and not only when they consciously retrieve usage rules. In the second part of the chapter, we review the important body of studies that have empirically assessed the causes of learners’ difficulties with connectives, and conclude with some recommendations for teaching. We conclude that research on the second language acquisition of connectives contributes to answering important questions, such as what makes connectives difficult to master, and how they are used across languages.