Commentary on Jean-Guy Meunier’s presentation ‘Can Semiotics Become Computational?’

By Michael Mair, May 2023


It is many weeks now since Jean-Guy Meunier’s exposition on semiotics and computation. I was awed by his erudition and by the systematic way he approached the subject, particularly his focus on modelling, which has become the key concept bridging computer science and its applications. His question is, ‘under which conditions can semiotics become computable?’ I note that the webinar was delivered just as the world was waking up to the new possibilities of artificial intelligence (AI) enabled by Large Language Models (LLMs), particularly OpenAI’s GPT-4. Semioticians, like everyone who works with text, are significantly challenged by them. Coming to terms with this new reality has become an urgent task for semiotics and linguistics in general.

The Barrier of Meaning

Jean-Guy reminds us that the meaning of a sign is not an intrinsic property of the physical entity itself, which is merely a ‘carrier’ of it; hence the ‘wall or barrier of meaning’. Jean-Guy asks, ‘how is meaning identified? What is its nature? Why does it exist?’ Is it derived from the data? He suggests that ‘data-driven’ methodology is often a misnomer, because the concepts applied come from a standardized set sourced from an ‘epistemic community’ specialized in that kind of research, and therefore risk being self-fulfilling. This circularity in data definitions is overcome by Large Language Models (LLMs): they identify from the data itself the ‘features’ which become the components of their own models, and do not rely on human intentional conceptual modelling.

Large Language Models

It was the release of the paper ‘Attention Is All You Need’ (1) in 2017, which first defined the ‘transformer’ neural net architecture, that led to the rapid advance of LLMs. The transformer architecture now underpins the majority of LLMs, including OpenAI’s GPT-4, Google’s Bard, and Microsoft’s Bing Chat. Its key features include a ‘feed-forward’ structure, a ‘self-attention’ mechanism, and ‘positional encodings’ (2). The efficacy of the transformer architecture in both natural language processing and general reasoning has led many to believe that the achievement of Artificial General Intelligence (AGI) is no longer fanciful (3). Trained on billions of items of text and image data, these models learn compressed representations of the real world, as well as of the processes and situations that produced the data. LLMs represent the words and concepts (‘tokens’) learned from training data as high-dimensional vectors, which capture and encode the semantic properties of words and images. Jean-Guy showed us analyses of pictures and texts from Magritte and Peirce as examples of how to form a numeric grid out of ‘features’ defined by a human ‘intentional agent’. The LLM, by contrast, models phenomena without human intentional input.
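The ‘self-attention’ mechanism mentioned above can be illustrated with a toy sketch. This is a deliberately simplified Python version: a real transformer learns separate query, key and value projection matrices and stacks many such layers, whereas here the token vectors are used directly.

```python
import numpy as np

def softmax(x, axis=-1):
    # numerically stable softmax: rows become probability distributions
    e = np.exp(x - np.max(x, axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(X):
    """Scaled dot-product self-attention over a sequence of token vectors.

    X: (seq_len, d) matrix of token embeddings. For clarity this toy
    version uses the embeddings themselves as queries, keys and values.
    """
    d = X.shape[-1]
    scores = X @ X.T / np.sqrt(d)       # pairwise similarity of tokens
    weights = softmax(scores, axis=-1)  # each row sums to 1
    return weights @ X                  # context-weighted mixture of tokens

# three 4-dimensional 'token' vectors
X = np.array([[1.0, 0.0, 0.0, 0.0],
              [0.0, 1.0, 0.0, 0.0],
              [1.0, 1.0, 0.0, 0.0]])
out = self_attention(X)
print(out.shape)  # (3, 4): each token is re-expressed in its context
```

Each output row mixes every input token, weighted by similarity; this is how ‘attention’ lets each word in a sentence be interpreted in the light of all the others.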

The Bitter Lesson

In ‘The Bitter Lesson’, an article by Rich Sutton (4), it is argued that human analytic categories can impede computation in natural language processing (NLP). The success of letting LLMs do their own unsupervised learning, without pre-programmed theoretical structures, undermines human theoretical endeavours. It seems that we may have been missing something obvious, hiding in plain sight, which the LLMs have found out on their own by parsing billions of unstructured text data. They now appear much more fluent in ‘generative semantics’ (5) than we are. Stephen Wolfram (6), in his essay ‘What Is ChatGPT Doing … and Why Does It Work?’, comments: ‘There’s more structure in human language than we thought there was, and we didn’t know it was there.’ Judged by their success in understanding and producing natural language, LLMs are on their way to discovering much of the hidden structure behind human ‘meanings’, although precisely what they have discovered remains a ‘black box’.

Projection and Relevance 

Jean-Guy recognizes a fundamental dilemma in the way we think about modeling, between interpretive and naturalist paradigms, which has resonated through the argument from ancient times. To what extent do we project our models onto the world, and to what extent are our models derived from it? 

It was fascinating to hear from Jean-Guy about the concept of ‘projection’ as developed by the Bourbaki group and in the later writing of C.S. Peirce. These authors suggested that concepts are projected onto the world. OpenAI’s Ilya Sutskever describes the billions of fragments of text used to train LLMs as ‘projecting’ onto the knowledge base of these systems, with language acting like a ‘lens’. This learning process is inverted when the LLM projects answers back onto the world. The LLM is a gigantic model, which is then used to generate answers to questions put to the AI device, seemingly by seeking out ‘relevance’.

Relevance has always been about model building, and the word itself evokes appropriateness, or the fitting nature of something in relation to a ‘whole’, that whole being a model. ‘Relevance Theory’ from Sperber and Wilson (7) posits that there is a relevance-maximizing device working as a ‘black box’ in the brain, which seeks out the simplest path to relevance for input streams. Para-linguistic phenomena, such as shrugs or winks, are interpreted in ‘threads’ of relevance. The LLMs at present do not take multi-modal nonverbal input, which is important for achieving meaning in spontaneous human interaction. When questioned about the limitations of working with a single modality, Ilya Sutskever pointed out that the LLM parses billions of text fragments to build up its knowledge base, and that these fragments derive from the behavior and artifacts of human beings. In real life, multimodality is essential for ‘in person’ communication, and much of the ‘meaning’ of human interaction depends on implicature controlled by the architecture of speech melody (8), which may not involve words at all. As yet, the LLM has not mastered the interactive melody of the text, but that may not be the case for long, especially as future versions may be able to accept multimodal inputs, including video (9). We may get a ‘sweet-talking machine’.
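As a loose illustration of how an LLM-style system might ‘seek out relevance’ with vectors (this is not Sperber and Wilson’s cognitive mechanism, and the embedding values below are made up for the example), candidate replies can be ranked by cosine similarity to a query vector:

```python
import numpy as np

def cosine(u, v):
    # cosine similarity: 1.0 means the vectors point the same way
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

# hypothetical embedding vectors for a query and two candidate replies
query = np.array([0.9, 0.1, 0.0])
candidates = {
    "on-topic reply":  np.array([0.8, 0.2, 0.1]),
    "off-topic reply": np.array([0.0, 0.1, 0.9]),
}

# rank candidates by similarity to the query: a crude proxy for 'relevance'
ranked = sorted(candidates, key=lambda k: cosine(query, candidates[k]),
                reverse=True)
print(ranked[0])  # on-topic reply
```

The point of the sketch is only that ‘relevance’ becomes a computable quantity once meanings are encoded as vectors, however crude this particular measure is.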

Medical Semiotics

One example of computable semiotics is in medicine. The Fast Healthcare Interoperability Resources standard (FHIR) (10) has the biggest uptake in the Western world as a common currency for health care. Underpinning FHIR is the REST protocol (Representational State Transfer), which provides a universal way of working across the internet using APIs (Application Programming Interfaces). The similarly ubiquitous JavaScript Object Notation (JSON) works globally as the format for carrying results over the Hypertext Transfer Protocol (HTTP).
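In practice a FHIR ‘read’ is just a REST convention over HTTP: GET [base]/[type]/[id], returning JSON. A minimal sketch, with a hypothetical server address:

```python
# Sketch of how a FHIR 'read' interaction maps onto plain REST/HTTP
# conventions. The base URL is hypothetical; a real one is
# deployment-specific.
BASE_URL = "https://fhir.example.org/r4"

def fhir_read_request(resource_type, resource_id):
    """Compose the HTTP request for a FHIR read: GET [base]/[type]/[id],
    asking for a FHIR-flavoured JSON response."""
    return {
        "method": "GET",
        "url": f"{BASE_URL}/{resource_type}/{resource_id}",
        "headers": {"Accept": "application/fhir+json"},
    }

req = fhir_read_request("Patient", "12345")
print(req["url"])  # https://fhir.example.org/r4/Patient/12345
```

Nothing here is exotic: the same GET-a-URL pattern serves web pages and health records alike, which is exactly why FHIR rides on it.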

REST, JSON, HTTP and APIs are almost synonymous with the modern internet. The FHIR API enables participants to share access to healthcare ‘resources’, which are standardized into classes such as ‘Patient Details’, ‘Observations’ and ‘Diseases’; these are then further profiled to fit ‘use cases’ in medical practice. There has been a proliferation of ‘profiles’ for these resources. After twenty years of active and enthusiastic standards work, there is still no generally accepted way of achieving interoperability of clinical concepts, and thus of exchanging medical data between rival systems, despite FHIR.
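To make the idea of a ‘resource’ concrete, here is a minimal, illustrative Patient resource as it would travel over the wire as JSON. The field names follow the published FHIR Patient resource, but the values are invented, and real deployments constrain such resources much further with profiles:

```python
import json

# A minimal, illustrative FHIR Patient resource (R4-style fields,
# invented values).
patient = {
    "resourceType": "Patient",
    "id": "example",
    "name": [{"family": "Doe", "given": ["Jane"]}],
    "birthDate": "1980-01-01",
}

body = json.dumps(patient)        # the JSON text sent over HTTP
restored = json.loads(body)       # what the receiving system parses
print(restored["resourceType"])   # Patient
```

The standardization problem discussed above lives not in this plumbing, which works, but in agreeing what the fields and codes inside such resources are to mean.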

Jean-Guy’s ‘meaning barrier’ problem applies here. The body has no names; the names and the naming are in the service of our sign systems. The demarcations between medical categories are, in the final analysis, projected onto the biological continuum. Subtly different ontologies and a proliferation of ‘profiles’ for resources continue to bedevil interoperability between rival systems. AI in the form of LLMs is ideally placed to help in this regard, since LLMs do not depend on humans pre-programming their theoretical paradigms, but instead derive their structures from the data. The advent of these models may facilitate free text in medical records and counter practice becoming entirely ‘template bound’, which has led to physician stress and functional inefficiency. The use of LLMs in medicine is only just beginning. The big companies and national and international agencies are playing out a battle for control of the means of production in a trillion-dollar industry, especially now that the LLM genie is out of the bag.


My overall reaction to Jean-Guy’s eloquent exposition is one of gratitude for the guidance through the jungle of the history of semiotics and the different approaches taken to make the subject computable. Semiotics has now come of age. The LLM techniques now have enough ‘parameters’ and background information to jump spontaneously to ‘emergent levels’ such as syntax and generative semantics, and so get ever better at their task of relevant text construction. Hopefully we can control these systems and recruit the extraordinary abilities of computable semiotics and AI to better manage the planet’s resources.

Michael Mair

Timaru New Zealand


“M-theory posits the universe consists of multiple dimensions – up to eleven, according to some formulations. However, humans perceive only three spatial dimensions plus time because the additional dimensions are ‘compactified’ or too small to be observable, or because we exist on a 3-brane (a subset of the larger dimensions).” I also asked Stephen whether dimensions, variables and parameters were synonyms in modelling work, and he said ‘yes, and you can have as many as you like’.


  1. Vaswani, A. et al., ‘Attention Is All You Need’, Google, 2017.
  2. ibid.
  3. Ilya Sutskever, Chief Scientist at OpenAI.
  4. Rich Sutton, ‘The Bitter Lesson’.
  5. George Lakoff, ‘Toward Generative Semantics’.
  6. Stephen Wolfram, ‘What Is ChatGPT Doing … and Why Does It Work?’
  7. Sperber and Wilson, ‘Relevance Theory’.
  8. M.W. Mair, ‘Commentary on sense making in the human organism’.
  9. Stephen Wolfram.
  10. ‘Emotional Intelligence in an LLM’.
  11. Stephen Wolfram.

