EPTCS 128
Proceedings
Machines, Computations and Universality 2013
Zürich, Switzerland, 9/09/2013 – 11/09/2013
Edited by: Turlough Neary and Matthew Cook
This volume contains the papers presented at the 6th conference on Machines, Computations and Universality (MCU 2013). MCU 2013 was held in Zürich, Switzerland, September 9–11, 2013. The MCU series began under the name Machines et Calculs Universels / Universal Machines and Computations, and was organised by Maurice Margenstern in Paris in 1995. Subsequent editions of the conference were held in Metz, France (MCU 1998); Chişinău, Moldova (MCU 2001); Saint Petersburg, Russia (MCU 2004); and Orléans, France (MCU 2007).
Since its inception MCU has been concerned with gaining a deeper understanding of computation through the study of models of general purpose computation. This volume continues in this tradition and includes new simple universal models of computation, and other results that clarify the relationships between models.
In addition to the contributed papers and invited talks, the program for MCU 2013 included two new features designed to help spread new ideas and foster collaborative efforts: informal presentations and open problem sessions. The informal presentation track allowed researchers to present recent work on topics of relevance for the MCU community. The open problem sessions gave participants the opportunity to share ideas for new research directions and to initiate research collaborations.
We are grateful to the EPTCS staff, in particular Rob van Glabbeek, for all their help in preparing this volume. We thank all the authors who submitted to MCU 2013; the invited speakers Liesbeth De Mol, Jarkko Kari, Kenichi Morita, Erik Winfree and Damien Woods; the program committee for their sterling work and insightful comments; and the referees who assisted in the evaluation of papers. We would like to thank the MCU steering committee, with a special thank you to Maurice Margenstern and Jérôme Durand-Lose for their invaluable advice. Finally, we would like to thank our sponsors, with a special mention to Rodney Douglas, and also the administrative staff at the Institute of Neuroinformatics for their help in organising this event, in particular Kathrin Aguilar Ruiz-Hofacker, David Lawrence, Simone Schumacher and Frank Hitzemann, who were always ready to provide advice and assistance.
Turlough Neary
Matthew Cook
Zürich, September 2013
Program committee
Andrew Adamatzky  University of the West of England, UK 
Matthew Cook  University of Zürich and ETH Zürich (co-chair) 
Erzsébet Csuhaj-Varjú  Eötvös Loránd University, Hungary 
Jérôme Durand-Lose  University of Orléans, France 
Rudolf Freund  University of Vienna, Austria 
Gabriel Istrate  Institute eAustria, Romania 
Jarkko Kari  University of Turku, Finland 
Lila Kari  University of Western Ontario, Canada 
Kamala Krithivasan  Indian Institute of Technology, India 
Maurice Margenstern  University of Lorraine, France 
Turlough Neary  University of Zürich and ETH Zürich (co-chair) 
Matthew Patitz  University of Arkansas, USA 
Igor Potapov  University of Liverpool, UK 
Klaus Sutner  Carnegie Mellon University, USA 
Sergey Verlan  University of Paris Est, France 
Damien Woods  California Institute of Technology, USA 
Referees
Chaim Goodman-Strauss, Jacob Hendricks, Pierre-Etienne Meunier, Amir Simjour
Organising committee
Matthew Cook  University of Zürich and ETH Zürich 
Turlough Neary  University of Zürich and ETH Zürich 
Steering committee
Maurice Margenstern  University of Lorraine, France (chair) 
Jérôme Durand-Lose  University of Orléans, France (vice-chair) 
Erzsébet Csuhaj-Varjú  Eötvös Loránd University, Hungary 
Natasha Jonoska  University of South Florida, USA 
Kenichi Morita  Hiroshima University, Japan 
Gheorghe Păun  The Romanian Academy, Romania 
Arto Salomaa  University of Turku, Finland 
K. G. Subramanian  University of Science, Malaysia 
Sponsoring institutions
Institute of Neuroinformatics (University of Zürich and ETH Zürich)
Swiss National Science Foundation
University of Zürich
ETH Zürich
Explorations in computer-assisted computer science
Liesbeth De Mol
Centre for Logic and Philosophy of Science
Ghent University
elizabeth [dot] demol [at] ugent [dot] be
The aim of this talk is to reflect on the relations, frictions and interactions between theory and (computer) experiments in the context of theoretical computer science. I will draw inspiration from the historiography and philosophy of mathematics where one can observe an increasing interest in what one could call "explorative mathematics". This will be connected to the history of computer science where, as I will argue, one can find a similar school of thought throughout its short history. By way of a case study, I will present some old and new theoretical and experimental results on tag systems to illustrate the difficulties and opportunities involved with "explorative computer science".
Universal Pattern Generation by Cellular Automata
Jarkko Kari
Department of Mathematics and Statistics
University of Turku, Finland
jkari [at] utu [dot] fi
A one-dimensional cellular automaton is a discrete dynamical system where a sequence of symbols evolves synchronously according to a local update rule. We discuss simple update rules that make the automaton perform multiplications of numbers by a constant. If the constant and the number base are selected suitably, the automaton becomes a universal pattern generator: all finite strings over its state alphabet appear from a finite seed. In particular we consider the automata that multiply by the constants 3 and 3/2 in base 6. We discuss the connections of these automata to some difficult open questions in number theory, and we pose several further questions concerning pattern generation in cellular automata.
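The multiply-by-3 rule can be made concrete. The sketch below is our own illustration, not taken from the talk: for a base-6 digit d we have 3·d = 6·c + r with carry c = (3·d) // 6 and remainder r = (3·d) % 6, so the carry into each position is determined by the single digit to its right, and since r + c ≤ 5 no further carry propagates. The update is therefore a local rule reading only a cell and its right neighbour.

```python
# Sketch (our illustration, not from the talk): one synchronous step of
# a base-6 cellular automaton that multiplies the represented number by 3.
# The carry out of a position depends on that position's digit alone, so
# the rule is local with a one-sided radius-1 neighborhood.

def multiply_by_3_step(digits):
    """One CA step: base-6 digits (most significant first) of n -> of 3*n."""
    padded = [0] + digits                    # room for a carry out of the top
    out = []
    for i in range(len(padded)):
        right = padded[i + 1] if i + 1 < len(padded) else 0
        out.append((3 * padded[i]) % 6 + (3 * right) // 6)
    return out[1:] if out[0] == 0 else out   # drop an unused leading zero

def to_base6(n):
    digits = []
    while n:
        digits.append(n % 6)
        n //= 6
    return digits[::-1] or [0]

assert multiply_by_3_step(to_base6(7)) == to_base6(21)
assert multiply_by_3_step(to_base6(1000)) == to_base6(3000)
```

Iterating this step from a finite seed yields the successive rows of the space-time pattern whose richness the talk investigates.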
References

Jarkko Kari (2012): Universal pattern generation by cellular automata. Theoretical Computer Science 429, pp. 180–184, doi:10.1016/j.tcs.2011.12.037.
Ludicrously fast universal computation and construction with reconfigurable molecular robots
Erik Winfree
California Institute of Technology
Pasadena, California
winfree [at] caltech [dot] edu
I will review recent theoretical results in the 'nubots' model for
reconfigurable molecular robots, which combines features of
asynchronous cellular automata and L-systems. Like cellular automata,
units contain finite state, are arranged on a two-dimensional lattice,
and take actions in parallel based on their local neighborhood. Like
L-systems, it is possible for units to duplicate (akin to cell
division) and therefore the number of units can grow exponentially
with time. The key innovation is a movement rule, inspired by
molecular motor proteins, that allows one unit to move relative to
another, and potentially pull with it a collection of other units to
which it has formed bonds. Thus, the model allows long-range
signaling via mechanical motion of large bonded groups of units. This
is asymptotically unrealistic since reasonable mechanical signals
shouldn't travel faster than the speed of sound, much as the standard
boolean circuit model is asymptotically unrealistic since it allows
electronic signals to travel faster than light. Nonetheless, the
nubots model provides insight into the complexity of 'biological'
tasks such as the growth, from a single unit, of a large and complex
object. A central result is that a wide class of
algorithmically-described shapes can be grown in time polylogarithmic
in the number of units in the final shape. This is exponentially
faster than would be possible in cellular automata models. Joint work
with Damien Woods, Ho-Lin Chen, Scott Goodfriend, Nadine Dabby, and
Peng Yin.
arXiv:1301.2626 [cs.DS]
(An Outline of) A Fault-Tolerant Turing Machine
Ilir Çapuni
Department of Computer Engineering
Epoka University
Tirana, Albania
ilir [at] bu [dot] edu
We consider computations of a Turing machine under noise that causes violations of the transition function; we call these violations faults.
In this paper we review the construction of a Turing machine, given in [1], that can simulate any fault-free machine even when it is subjected to noise causing bursts of faults of size $ \leq \beta$ that are at least $O(\beta^2)$ steps apart from each other.
Further, we outline how this construction can be used as a building block for a machine that can simulate any other machine when it is subjected to noise causing faults that occur independently of each other with some small probability, as presented in [2].
This paper is an exposition of a part of joint work with Peter Gács.
Isolated bursts of faults
We start by giving a brief overview of a machine $ M_1 $ that simulates machine $ M_2 $, and
can withstand isolated bursts of faults.
Each tape cell of the simulated machine $ M_{2} $ will be represented by a block of
size $ Q $ of the simulating machine $ M_{1} $, called a colony.
Each step of $ M_{2} $ will be simulated by a computation of $ M_{1} $ called
a work period.
During this time, the head of $ M_{1} $ makes a number of sweeps over the
current colony, decodes the represented cell symbol and state,
computes the new state, and transfers the necessary information to the
neighbor colony that corresponds to the new position of the head of $ M_{2} $.
In order to protect information from the propagation of errors,
the tape of $ M_{1} $ is subdivided into tracks: each track corresponds to a
field of a cell symbol of $ M_{1} $ viewed as a data record.
Each stage of computation will be repeated three times.
The results will be stored in separate tracks, and a final cellbycell majority vote
will recover the result of the work period from them.
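This triple-redundancy scheme can be sketched as follows; the snippet is a minimal illustration in Python, and the helper names are ours, not part of the construction. Since an isolated burst can corrupt at most one of the three attempts, a cell-by-cell majority vote recovers the intended result.

```python
# Minimal illustration (helper names are ours): each stage is computed
# three times and its results written to three tracks; an isolated burst
# corrupts at most one attempt, so a cell-by-cell majority vote over the
# three tracks recovers the intended result of the work period.

def majority_vote(track_a, track_b, track_c):
    """Cell-by-cell majority over three equally long tracks."""
    result = []
    for a, b, c in zip(track_a, track_b, track_c):
        result.append(a if a == b or a == c else b)  # two of three agree
    return result

clean = list("decoded-cell-and-state")
corrupted = clean.copy()
corrupted[4:10] = "######"        # a burst garbles one attempt only

assert majority_vote(corrupted, clean, clean) == clean
assert majority_vote(clean, corrupted, clean) == clean
assert majority_vote(clean, clean, corrupted) == clean
```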
All this organization is controlled by a few key fields, and
the technically most challenging part of the construction is the protection of this
control information from bursts.
For example, a burst can reverse the head in the middle of a sweep.
Our goal is that such structural disruptions be discovered locally, so
we cannot allow the head to go far from the place where it was turned back.
Therefore the head's movement will not be straight even during a single
sweep: it will make frequent short switchbacks (zigzags).
If, for example, a turn-back is detected, this triggers an alarm and
starts a recovery procedure.
It is a significant challenge that the recovery procedure
itself can be interrupted (or started) by a burst.
Hierarchical construction
In order to build a machine that can resist faults occurring independently of each other with some small probability, we take the approach suggested in [4], and implemented in [3] for the case of one-dimensional cellular automata.
We will build a
hierarchy of simulations:
machine $ M_1 $ simulates machine $ M_2 $ which
simulates machine $ M_3 $, and so on.
For simplicity we assume all these machines have the same program,
and all simulations have the same blocksize.
One cell of machine $ M_3 $ is simulated by one colony of machine $ M_2 $.
Correspondingly, one cell of $ M_2 $ is simulated by
one colony of machine $ M_1 $.
So one cell of $ M_3 $ is simulated by $ Q^2 $ cells of $ M_1 $.
Further, one step of machine $ M_3 $ is simulated by one
work period of $ M_2 $ of, say, $ O(Q^{2}) $ steps.
One step of $ M_2 $ is simulated by one work period of $ M_1 $,
so one step of $ M_3 $ is simulated by $ O(Q^{4}) $ steps of $ M_1 $.
By construction, machine $ M_{1} $ can withstand
bursts of faults whose size is $ \le \beta $ for some constant parameter $ \beta $, provided
they are separated by some $ O(Q^{2}) $ fault-free steps.
Machines $ M_2 $, $ M_3 $, $ \dots $ have the same program, so it
would be natural to expect that machine
$ M_1 $ can also withstand some additional, larger bursts
of size $ \le \beta Q $, provided these are separated by, say, at least $ O(Q^{4}) $ steps.
But a new obstacle arises.
On the first level, damage caused by a big burst spans several colonies.
The repair mechanism of machine $ M_1 $ outlined above
is too local to recover from such extensive damage.
Leaving such damage unrepaired cannot be allowed, since then the whole hierarchy would stop working.
So we add a new mechanism to $ M_{1} $ that, more modestly,
will just try to restore a large enough portion of the
tape, so it can go on with the simulation of $ M_2 $, even if all
original information was lost.
For this, $ M_{1} $ may need to rewrite an area as large as a few colonies.
This will enable the low-level recovery procedure of
machine $ M_{2} $ to eventually restore a higher-level order.
A tricky issue is "forced self-simulation": while we are constructing machine $ M_1 $
we want to give it the feature that it will simulate a machine $ M_{2} $ that
works just like $ M_{1} $.
The "forced" feature means that this simulation should
work without any written program (that could be corrupted).
This will be achieved by a construction similar to the proof of Kleene's fixed-point theorem.
We first fix a (simple) programming language to express the transition
function of a Turing machine.
Then, we write an interpreter for it in this same language.
The program of the transition function of $ M_{2} $
in this language is a string that will be
"hardwired" into the transition function of $ M_{1} $,
so that $ M_{1} $, at the start of each work period, can write
it on a working track of the base colony.
Then the work period will interpret it,
applying it to the data found there, resulting
in the simulation of $ M_{2} $.
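The classic programming analogue of this fixed-point construction is a quine: the string plays the role of the hardwired program, and the print statement plays the role of the interpreter applying that program to a copy of itself. The sketch below is only an analogy to Kleene's theorem, not part of the machine construction itself.

```python
# Analogy only (not the actual machine construction): Kleene's fixed-point
# theorem is what lets a program act on its own text without reading that
# text from anywhere. In this classic quine, the string s is the
# "hardwired program" and print(s % s) is the "interpreter" applying s
# to a copy of itself.
import contextlib
import io

s = 's = %r\nprint(s %% s)'
quine_output = s % s              # the text the two-line program prints

# Running the two-line program reproduces its own source text exactly.
buffer = io.StringIO()
with contextlib.redirect_stdout(buffer):
    exec(quine_output)
assert buffer.getvalue().rstrip("\n") == quine_output
```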
In this way, an infinite sequence of simulations arises, in order
to withstand larger and larger but sparser and sparser bursts of faults.
It can be proven that with the appropriate choice of parameters, with probability 1,
eventually nothing is left over from the set of the faulty times, if we first remove bursts of size
$\beta_1$ that are $V_1$ apart from each other, and then continue with
the removal of bursts of size $\beta_2$ that are $V_2$ apart from each other, and so on.
Since $ M_1 $ uses the universal interpreter, which in turn
simulates the same program, it is natural to ask
how machine $ M_1 $ simulates a given Turing machine $ G $ that does the
actual useful computation.
For this task, we set aside a separate track
on each machine $ M_i $, on which some arbitrary other Turing machine can be
simulated.
References

[1] Ilir Çapuni & Peter Gács (2013): A Turing Machine Resisting Isolated Bursts of Faults. Chicago Journal of Theoretical Computer Science 2013, doi:10.4086/cjtcs.2013.003.
[2] Ilir Çapuni (2012): A Fault-Tolerant Turing Machine. Ph.D. thesis, Boston University, Boston, MA.
[3] Peter Gács (2001): Reliable Cellular Automata with Self-Organization. Journal of Statistical Physics 103(1/2), pp. 45–267, doi:10.1023/A:1004823720305. See also arXiv:math/0003117 [math.PR] and the proceedings of STOC '97.
[4] G. L. Kurdyumov (1978): An Example of a Nonergodic Homogeneous One-Dimensional Random Medium with Positive Transition Probabilities. Soviet Mathematical Doklady 19(1), pp. 211–214.
Formal Language Questions for Eulerian Trails
Henning Fernau  Abteilung Informatikwissenschaften, Fachbereich IV, Universität Trier, Germany  fernau@uni-trier.de 
Meenakshi Paramasivan  Abteilung Informatikwissenschaften, Fachbereich IV, Universität Trier, Germany  meena_maths@yahoo.com 
We consider several formal languages associated to multigraphs that allow for Eulerian trails. We place these languages into the Chomsky hierarchy and show several normal form properties.
1 Introduction
We consider undirected multigraphs. A walk is an alternating sequence of vertices and edges, beginning and ending with a vertex, where the vertices that precede and follow an edge are the end vertices of that edge. A walk is closed if its first and last vertices are the same, and open if they are different. A trail is a walk in which all the edges are distinct. Closed trails are called tours. A trail is Eulerian if it uses all edges precisely once. A multigraph admits an Eulerian trail if and only if it is connected (apart from isolated vertices) and either every vertex has even degree (in that case, an Eulerian tour is admitted) or all but two vertices have even degree (in that case, the two vertices of odd degree are the end vertices of that trail).
If we draw all vertices of a multigraph G = (V, E) on a horizontal line, then we can associate different integers to vertices by a mapping φ : V → ℤ, called a pseudo-linear drawing, such that u is to the left of v in the drawing iff φ(u) < φ(v). If G = (V, E) admits an Eulerian trail, we can describe the edge set of G by a sequence of drawing commands like →•^{n}, meaning that we have to connect the current vertex u to the vertex v satisfying φ(v) = φ(u) + n, with v then becoming the current vertex, i.e., describing a right move by n steps, or ←•^{n}, describing a left move by n steps. Hence, G, φ and a start point x ∈ V on the trail uniquely specify a word w(G, φ, x) over the alphabet 𝛴 = {→, ←, •}, called an Euler trace. Conversely, every word in ET = 𝛴^{+} \ ({•}𝛴^{∗}) describes a multigraph G_{w} that admits an Eulerian trail. For instance, w = →•→•←•• describes a triangle, i.e., G_{w} = C_{3}. Concepts similar to ET have been studied in [1, 2]. Obviously, ET ⊆ 𝛴^{∗} is a regular language. Notice that, as φ(x) is not specified with w ∈ ET, there are infinitely many pseudo-linear drawings φ such that w = w(G, φ, x), but all are obtained from one another by shifting along the horizontal line and hence describe the same multigraph. For the standard pseudo-linear drawing φ_{w} of w ∈ ET, we assume that the start vertex x of the Eulerian trail of the specified multigraph G_{w} obeys φ_{w}(x) = 0. We will be interested in certain subsets of ET that describe interesting properties of Euler traces.
2 Normal forms
The word w = →•• describes a (multi)graph on two vertices x, y with one edge xy. Arbitrarily setting φ(x) = 0, we obtain φ(y) = 2. However, there is no vertex z with φ(z) = 1. We will call any integer n a hole for φ if there exist vertices u, v with φ(u) < n < φ(v) but no vertex z with φ(z) = n. So, in our example, 1 is a hole. It is obvious that every multigraph with an Eulerian trail has a pseudo-linear drawing without any holes. As having holes or not is invariant under shifts, it makes sense to call w ∈ ET hole-free if the standard pseudo-linear drawing φ_{w} does not contain any holes. Define ET_{holefree} = {w ∈ ET : w is hole-free}. From our description, it follows that every multigraph G with an Eulerian trail can be described by some w ∈ ET_{holefree}.
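The encoding and the hole test are easy to make concrete. The sketch below uses our own helper names and writes '>', '<' and '.' as ASCII stand-ins for →, ← and •; it decodes an Euler trace into the vertex positions of its standard pseudo-linear drawing (start vertex at 0) and checks hole-freeness.

```python
# Sketch (our own helpers; '>', '<', '.' stand in for the arrow and
# bullet symbols): decode an Euler trace into the positions visited by
# the trail in the standard pseudo-linear drawing, and test whether the
# drawing is hole-free, i.e. the positions form a contiguous range.
import re

def positions(word):
    """Positions of the vertices visited by the encoded trail."""
    visited = {0}
    current = 0
    for arrow, marks in re.findall(r'([><])(\.+)', word):
        step = len(marks)                      # unary move length
        current += step if arrow == '>' else -step
        visited.add(current)
    return visited

def hole_free(word):
    """True iff the visited positions form a contiguous range of integers."""
    pos = positions(word)
    return pos == set(range(min(pos), max(pos) + 1))

# '>.>.<..' is the triangle C3: the trail 0 -> 1 -> 2 -> 0.
assert positions('>.>.<..') == {0, 1, 2}
assert hole_free('>.>.<..')
# '>..' is a single edge from 0 to 2, so position 1 is a hole.
assert not hole_free('>..')
```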
Theorem 1. There is a polynomial-time algorithm that, given w ∈ ET, computes some w′ ∈ ET_{holefree} with G_{w} = G_{w′}.
Theorem 2. ET \ ET_{holefree} is a non-regular language that can be described by some one-counter automaton, while ET_{holefree} is not context-free.
For the first part, notice that a one-counter automaton can guess the hole position and then verify its guess. For the second part, consider L = ET_{holefree} ∩ ({→}{•}^{+}{←}{•}^{+}({→}{•})^{+}). A rational transduction can transfer {a^{m} b^{m} c^{m} : m ≥ 1} to L = {→•^{n} ←•^{m} (→•)^{l} : n > 1 ∧ m ∈ {n − 1, n, n + 1} ∧ l ∈ {m − 1, m, m + 1}}, which shows that L is not context-free.
Another observation is that any multigraph G with an Eulerian trail has a pseudo-linear drawing φ that never puts any vertex to the left of the start vertex x. This property is also invariant under shifting, so that it is likewise a property of words w ∈ ET (via φ_{w}). This defines the languages ET_{leftstart} and ET_{holefree,leftstart}, where the latter collects all hole-free words from ET_{leftstart}.
Theorem 3. There is a polynomial-time algorithm that, given w ∈ ET, computes some w′ ∈ ET_{holefree,leftstart} with G_{w} = G_{w′}.
Theorem 4. Both ET_{leftstart} and ET \ ET_{leftstart} are context-free (in fact, one-counter languages). Neither ET_{holefree,leftstart} nor ET \ ET_{holefree,leftstart} is context-free.
3 Euler tours
We can likewise define the set ET^{∘} of descriptions of multigraphs admitting some Euler tour. With the transduction 𝜏 that replaces any sequence of • following → (until the end of the word or some ←) by ↑ and the remaining • by ↓, we have w ∈ ET^{∘} iff w ∈ ET and 𝜏(w) contains as many ↑ as ↓ symbols.
Theorem 5. ET^{∘} and ET \ ET^{∘} are context-free (in fact, one-counter languages).
Although the mentioned normal forms also exist for Euler tours, and they can be produced in polynomial time, none of the corresponding three languages nor their complements is context-free.
4 Conclusions
We have started looking into the complexity of Eulerian multigraphs from a formal language perspective. This research direction allows for many similar questions, for instance, classifying ET_{graph} (not context-free, but the complement is), ET_{loopfree} (regular) or also ET_{2regular} = {w ∈ ET : ∀v ∈ V(G_{w}) : d(v) = 2}. Notice that ET \ ET_{2regular} is a one-counter language, while ET_{2regular} ∩ {→•}^{+} ⋅ {←} ⋅ {•}^{+} ⋅ {→} ⋅ {•}^{+} ⋅ {←} ⋅ {•}^{+} = {(→•)^{n} ←•^{m} →•^{l} ←•^{k} : n > 0, m > n, l > m, k + m = n + l} is not context-free.
References
[1] L. Jeganathan, K. Krithivasan & R. Rama (2010): Graph Splicing Systems. In Z. Majkic, M. Sipser, R. Radha & D. Wei, editors: International Conference on Theoretical and Mathematical Foundations of Computer Science, TMFCS 2008, Curran Associates, Inc., pp. 72–79.
[2] M. Paramasivan & N. G. David (2011): Shuffle Operations on Euler Graphs. Mapana Journal of Sciences 10(1), pp. 63–78. Available at http://journals.christuniversity.in/index.php/mapana/issue/view/4.
Array Automata on Cayley Grids
Rudolf Freund  Technische Universität Wien, Institut für Computersprachen, Favoritenstr. 9, A-1040 Wien, Austria  rudi [at] emcc [dot] at 
Marion Oswald  Technische Universität Wien, Institut für Computersprachen, Favoritenstr. 9, A-1040 Wien, Austria  marion [at] emcc [dot] at 
As a natural extension of string languages (see [6]), arrays on the $d$-dimensional grid $\mathbb{Z}^{d}$ have been introduced, and the correspondence between array grammars generating and array automata accepting $k$-connected arrays has been investigated. For any $d\geq 1$, a $d$-dimensional array $\mathcal{A}$ over an alphabet $V$ is a function $\mathcal{A}:\mathbb{Z}^{d}\rightarrow V\cup \left\{ \#\right\}$, where $shape\left( \mathcal{A}\right) =\left\{ v\in \mathbb{Z}^{d}\mid \mathcal{A}\left( v\right) \neq \#\right\}$ is finite and $\#\notin V$ is called the blank symbol. $V^{\ast d}$ denotes the set of all $d$-dimensional arrays over $V$. For any array $\mathcal{A}\in V^{\ast d}$ and any $v\in \mathbb{Z}^{d}$ we define $\tau _{v}\left( \mathcal{A}\right)$ to be the $d$-dimensional array translated by $v$, i.e., $\left( \tau _{v}\left( \mathcal{A}\right) \right) \left( w\right) =\mathcal{A}\left( w-v\right)$ for all $w\in \mathbb{Z}^{d}$. Usually (e.g., see [2, 5, 7]) arrays are regarded as equivalence classes of arrays with respect to linear translations, i.e., only the relative positions of the symbols different from $\#$ in $\mathbb{Z}^{d}$ are taken into account: the equivalence class $\left[ \mathcal{A}\right]$ of an array $\mathcal{A}\in V^{\ast d}$ is defined by $\left[ \mathcal{A}\right] =\{\mathcal{B}\in V^{\ast d}\mid \mathcal{B}=\tau _{v}\left( \mathcal{A}\right)$ for some $v\in \mathbb{Z}^{d}\}$. The set of all equivalence classes of $d$-dimensional arrays over $V$ with respect to linear translations is denoted by $\left[ V^{\ast d}\right]$.
Following the ideas of E. Csuhaj-Varjú and V. Mitrana ([3]), we started to investigate arrays on Cayley grids of finitely generated groups. For a group $G$ generated by the elements $e_{1},...,e_{m}$, we consider the corresponding Cayley graph $C\left( G\right)$ and arrays over an alphabet $V$ on $C\left( G\right)$, with the nodes of $C\left( G\right)$ getting assigned elements of $V\cup \left\{ \#\right\}$. In that sense, with $N\left( C\left( G\right) \right)$ denoting the set of nodes of $C\left( G\right)$, a finite array on $C\left( G\right)$ is a function $\mathcal{A}:N\left( C\left( G\right) \right) \rightarrow V\cup \left\{ \#\right\}$
with $shape\left( \mathcal{A}\right) =\left\{ v\in N\left( C\left( G\right) \right) \mid \mathcal{A} \left( v\right) \neq \#\right\}$ being finite. $V^{\ast }\left\langle C\left(G\right)\right\rangle$ ($\left[ V^{\ast }\left\langle C\left(G\right) \right\rangle \right]$) denotes the set of (equivalence classes of) arrays over $V$ on $C\left( G\right)$. An array $\mathcal{A}\in V^{\ast}\left\langle C\left( G\right) \right\rangle$ is called $k$-connected if for any two elements $v$ and $w$ in $shape\left( \mathcal{A}\right)$ there is a path $v=w_{1}...w_{n}=w$ in $C\left( G\right)$ with $\left\{w_{1},...,w_{n}\right\}\subseteq shape\left(\mathcal{A}\right)$ such that for the distance $d\left( w_{i},w_{i-1}\right)$ between $w_{i}$ and $w_{i-1}$ we have $d\left( w_{i},w_{i-1}\right) \leq k$ for all $1<i\leq n$.
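For the familiar grid $\mathbb{Z}^{2}$ with the unit vectors as generators, $k$-connectedness can be checked directly, since the graph distance in this Cayley graph is the Manhattan distance. The following sketch is our own illustration, not part of the paper:

```python
# Our illustration for the Cayley grid of Z^2 with unit-vector generators,
# where the graph distance between nodes is the Manhattan distance: a
# finite shape is k-connected iff any two of its points are linked by a
# path of shape points whose consecutive hops have distance at most k.
from collections import deque

def manhattan(u, v):
    return abs(u[0] - v[0]) + abs(u[1] - v[1])

def is_k_connected(shape, k):
    """True iff the finite set `shape` of Z^2 points is k-connected."""
    if not shape:
        return True
    start = next(iter(shape))
    seen = {start}
    frontier = deque([start])
    while frontier:
        u = frontier.popleft()
        for v in shape - seen:           # shape points not yet reached
            if manhattan(u, v) <= k:     # a hop of length at most k
                seen.add(v)
                frontier.append(v)
    return seen == shape                 # every shape point was reached

shape = {(0, 0), (2, 0), (2, 2)}
assert not is_k_connected(shape, 1)      # all gaps have distance 2
assert is_k_connected(shape, 2)
```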
$\mathbb{Z}$ is a special case of an Abelian group, generated by $\left(1\right)$ and its inverse $\left(-1\right)$; $\mathbb{Z}^{d}$ is an Abelian group generated by the unit vectors $\left( 0,...,1,...,0\right)$ and their inverses. It is well known that every finitely generated Abelian group is a direct sum of a torsion group and a free Abelian group, where the torsion group may be written as a direct sum of finitely many groups of the form $\mathbb{Z}/p^{k}\mathbb{Z}$ for $p$ being a prime, and the free Abelian group is a direct sum of finitely many copies of $\mathbb{Z}$. As a first example of arrays on a Cayley grid of a non-Abelian group, arrays on the hexagonal grid were considered ([1]).
A derivation step in an array grammar generating a language of arrays on the Cayley grid $C\left( G\right) $ consists of replacing a finite subpattern by
another subpattern on $C\left( G\right)$ of the same shape. Let $\mathcal{L}_{gen}\left( C\left( G\right),ARB\right)$ denote the family of array
languages of $k$-connected arrays on $C\left( G\right)$ generated by array grammars with arbitrary rules, and let $ENUM\left(C\left(G\right)\right)$
be the family of recursively enumerable array languages of $k$-connected arrays on $C\left( G\right)$ (i.e., the arrays are encoded as strings by
using a suitable encoding for the positions in $C\left( G\right)$).
On the other hand, array automata accepting $k$-connected arrays on $C\left(G\right)$ can be defined by generalizing Turing machines working on a
one-dimensional tape for accepting strings (e.g., see
[4]). As in the string case, we may consider deterministic and nondeterministic variants of array automata; the families of array languages of arrays on $C\left( G\right)$ accepted by nondeterministic and deterministic array automata are denoted by $\mathcal{L}_{acc}\left(C\left( G\right) ,ATM\right)$ and $\mathcal{L}_{acc}\left( C\left( G\right),detATM\right)$, respectively.
Due to the characterization of finitely generated Abelian groups, we may state the following result:
Theorem 1 For any finitely generated Abelian group $G$,
\begin{equation*}
\begin{array}[b]{lllll}
\mathcal{L}_{gen}\left( C\left( G\right) ,ARB\right) & = & ENUM\left(
C\left( G\right) \right) & & \\
& = & \mathcal{L}_{acc}\left( C\left( G\right) ,ATM\right) & = & \mathcal{L}
_{acc}\left( C\left( G\right) ,detATM\right).
\end{array}
\end{equation*}
For non-Abelian groups, at least the equalities
\begin{equation*}
\begin{array}[b]{lll}
\mathcal{L}_{gen}\left( C\left( G\right) ,ARB\right) & = & ENUM\left(
C\left( G\right) \right) \\
& = & \mathcal{L}_{acc}\left( C\left( G\right) ,ATM\right)
\end{array}
\end{equation*}
hold as long as the conditions restricting the free group are (Turing)
decidable and all paths in the Cayley graph $C(G)$ are (Turing)
computable.
There are interesting subclasses of non-Abelian groups $G$ where each
element has finite order. As a special variant, there may even exist a
special integer $n$ such that $x^{n}=e$ for every $x$ in $G$, with $e$ being
the neutral element of $G$ and $n$ being called the exponent of $G$. As we
know from algebra, there are infinite groups having this property, e.g., some of
the famous Burnside groups $B\left( m,n\right) $: the free Burnside group of
rank $m$ and exponent $n$, denoted by $B\left( m,n\right) $, is the largest
group with $m$ distinguished generators $x_{1},...,x_{m}$ in which $x^{n}=e$
holds for every element $x$ of the group. For specific $m$ and $n$,
$B\left( m,n\right) $ is infinite. For such an infinite, finitely generated group
with exponent $n$, in contrast to the string case or the usual
case of $\mathbb{Z}^{d}$ or even the hexagonal grid, no infinite line within
the Cayley grid can be determined; hence, it seems to be impossible to obtain
arbitrary workspace for intermediate computations in a deterministic way.
Following this informal argument, we conjecture that the deterministic
variant of array automata working on Cayley graphs of such groups has weaker
accepting power than the nondeterministic variant:
Conjecture 2 For any infinite free Burnside group $B\left( m,n\right) $,
\begin{equation*}
\begin{array}[b]{lllll}
\mathcal{L}_{gen}\left( C\left( B\left( m,n\right) \right) ,ARB\right) & =
& ENUM\left( C\left( B\left( m,n\right) \right) \right) & & \\
& = & \mathcal{L}_{acc}\left( C\left( B\left( m,n\right) \right) ,ATM\right)
& \supset & \mathcal{L}_{acc}\left( C\left( B\left( m,n\right) \right)
,detATM\right).
\end{array}
\end{equation*}
It is not even clear whether there exists a free Burnside group $B\left(
m,n\right) $ such that $\mathcal{L}_{acc}\left( C\left( B\left( m,n\right)
\right) ,detATM\right) $ contains non-recursive array languages.
References
[1] K. Aizawa & A. Nakamura (1989): Grammars on the hexagonal array. In P. S.-P. Wang, editor: Array Grammars, Patterns and Recognizers, Series in Computer Science 18, World Scientific Publ., pp. 469–477, doi:10.1142/9789814368308_0012.
[2] C. R. Cook & P. S.-P. Wang (1978): A Chomsky hierarchy of isotonic array grammars and languages. Computer Graphics and Image Processing 8, pp. 144–152, doi:10.1016/S0146-664X(78)80022-7.
[3] E. Csuhaj-Varjú & V. Mitrana: Array grammars on Cayley Grids. Private communication.
[4] K. Krithivasan & R. Siromoney (1974): Array automata and operations on array languages. Computer Graphics and Image Processing 4(1), pp. 3–30, doi:10.1080/00207167408803078.
[5] A. Rosenfeld, editor (1979): Picture Languages. Academic Press, Reading, MA.
[6] G. Rozenberg & A. Salomaa, editors (1997): Handbook of Formal Languages (3 volumes). Springer, Berlin.
[7] P. S.-P. Wang (1980): Some new results on isotonic array grammars. Information Processing Letters 10, pp. 129–131, doi:10.1016/0020-0190(80)90063-0.
Fractal Dimension of Space-Time Diagrams and the Runtime Complexity of Small Turing Machines
Joost J. Joosten Universitat de Barcelona Barcelona, Spain jjoosten [at] ub [dot] edu 
Fernando Soler-Toscano Universidad de Sevilla Sevilla, Spain fsoler [at] us [dot] es 
Hector Zenil Karolinska Institute Stockholm, Sweden hectorz [at] labores [dot] eu 
Complexity measures are designed to capture complex behaviour and to quantify how complex that particular behaviour is. If a certain phenomenon is genuinely complex, it does not suddenly become simple merely by translating it to a different setting or framework with a different complexity value. It is in this sense that we expect different complexity measures, from possibly entirely different fields, to be related to each other.
In this work we look at small one-way infinite Turing machines (TMs) with two and three states and a binary tape alphabet. For any particular such machine τ and any particular input x we consider the space-time diagram of τ(x): the collection of consecutive tape configurations of the computation τ(x). We look at the spatial representation σ_{0} of the memory when τ starts on input x. Next we look at σ_{1}, the spatial representation of the memory after one step of the computation, and so forth (see Fig. 1).
Figure 1: Example of space-time diagrams for the two Busy Beavers in (3,2) on the first three inputs (TMs 599 063 and 666 364 in Wolfram's enumeration). Their computations have the largest runtime, space and box sequences.
It is on these spatial objects (see Fig. 1), and in particular on the limit x→∞, that we compute or estimate the fractal dimension d(τ), defined as follows.
Definition 1 (Box dimension of a Turing machine)
Let τ be a TM that converges on infinitely many input
values x. In case τ(x)↓, let N(τ, x) denote the
number of black cells in the spacetime diagram of τ on input
x and let t(τ, x) denote the number of steps needed for
τ to halt on x. We will define the Box dimension of a TM τ and denote it by d(τ). In case t(τ,x) is constant from some x onwards, we define d(τ) : = 2. Otherwise, we define
d(τ) := lim inf_{x→∞} log(N(τ,x)) / log(t(τ,x)).
It turns out that there is a very strong relation between the fractal dimension d(τ) of a TM τ and its runtime complexity. In particular, we found in small spaces that a TM runs in at most linear time if and only if its fractal dimension is 2, and that its dimension is 1 if and only if it runs in superpolynomial time and uses polynomial space. We prove an upper and a lower bound for the dimension of Turing machines, and we verify experimentally (see Table 1) that if a TM runs in time O(x^{n}), the corresponding dimension is (n+1)/n. We also put forward an Upper Bound Conjecture to the effect that the proven upper bound is in fact always attained. For special cases this can be proved, and under some additional assumptions it can also be proven in general. In the experiment we test whether, in our test space, the additional assumptions are also necessary, and they turn out to be so.
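To make Definition 1 concrete, the ratio log N / log t can be estimated numerically. The following sketch is our own illustrative code, not the authors' experimental setup: a toy one-way machine that merely scans over its unary input 1^x runs in linear time, so by the result above its estimated dimension should approach 2.

```python
# Sketch (illustrative, not the authors' code): estimate the box dimension
# of Definition 1 for a toy one-way-infinite TM that scans right over its
# unary input and halts at the first blank cell. Runtime is linear in x,
# so the ratio log N / log t should approach 2 as x grows.
import math

def run_scanner(x):
    """Run the scanner TM on input 1^x; return (N, t): the number of
    black cells in the spacetime diagram and the number of steps."""
    tape = [1] * x + [0] * 2          # truncated one-way infinite tape
    head, t, N = 0, 0, sum(tape)      # row 0 of the spacetime diagram
    while tape[head] == 1:            # single state: move right over 1s
        head += 1
        t += 1
        N += sum(tape)                # the tape is never modified here
    return N, t

for x in (10, 100, 1000):
    N, t = run_scanner(x)
    print(x, math.log(N) / math.log(t))   # approaches 2
```

Here N(τ,x) = x(x+1) and t(τ,x) = x, so log N / log t tends to 2, matching the linear-time case of the dichotomy described above.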
Thus, we have a translation from the computational framework to the geometrical framework. Fractal dimension is clearly related to degrees of freedom, and as such to an amount of information storage. We find the results presented here notable because they relate two completely different complexity measures: the geometrical fractal dimension and the runtime complexity of a computation.
The paper forms part of a research programme where the authors have
exhaustively mined and thoroughly investigated the behavior of small
Turing machines related to different measures of complexity [1,2,3,4,5,6].
Boxes     Runtime   Space   Machines
O(n^{3})  O(n^{2})  O(n)    3358
O(n^{4})  O(n^{3})  O(n)    6
o(P)      o(P)      o(P)    14

Table 1: Distribution over their complexity classes of the TMs in the space of 3 states and 2 symbols for which we had to compute the corresponding dimension. o(P) denotes the class of superpolynomial functions.
References
[1] Jean-Paul Delahaye & Hector Zenil (2012): Numerical evaluation of algorithmic complexity for short strings: A glance into the innermost structure of randomness. Applied Mathematics and Computation 219(1), pp. 63–77, doi:10.1016/j.amc.2011.10.006. Available at http://arxiv.org/abs/1101.4795.
[2] Joost J. Joosten, Fernando Soler-Toscano & Hector Zenil (2011): Program-size versus Time Complexity. Slowdown and Speedup Phenomena in the Microcosmos of Small Turing Machines. International Journal of Unconventional Computing 7(5), pp. 353–387. Available at http://arxiv.org/abs/1102.5389v1.
[3] Joost J. Joosten, Hector Zenil & Fernando Soler-Toscano (2011): Fractal Dimension as an Indication of the Terminating Runtime of Discrete Dynamical Systems (abstract). In S. Thurner & M. Szell, editors: ECCS'11, Vienna, Austria, p. 214. Available at http://www.eccs2011.eu/eccs11bookofabstracts.pdf.
[4] Fernando Soler-Toscano, Hector Zenil, Jean-Paul Delahaye & Nicolas Gauvrit (2012): Calculating Kolmogorov Complexity from the Output Frequency Distributions of Small Turing Machines. arXiv:1211.1302 [cs.IT]. Available at http://arxiv.org/abs/1211.1302.
[5] Fernando Soler-Toscano, Hector Zenil, Jean-Paul Delahaye & Nicolas Gauvrit (2013): Correspondence and Independence of Numerical Evaluations of Algorithmic Information Measures. Computability 2, doi:10.3233/COM-13019. Available at http://arxiv.org/abs/1211.4891.
[6] Hector Zenil, Fernando Soler-Toscano & Joost J. Joosten (2012): Empirical Encounters with Computational Irreducibility and Unpredictability. Minds and Machines 22(3), pp. 149–165, doi:10.1007/s11023-011-9262-y. Available at http://arxiv.org/abs/1104.3421.
Causal sets for geometrical Gandy machines
and Gandy machines over multisets
Adam Obtułowicz
Institute of Mathematics, Polish Academy of Sciences
Śniadeckich 8, P.O.B. 21, 00956 Warsaw, Poland
email: A.Obtulowicz@impan.pl
A new approach to computational complexity, going beyond the known complexity measures of consumed time and space of computation, is proposed. The approach focuses on the chaotic behaviour and randomness aspects of computational processes and is based on a representation of these processes by causal sets. Abstract systems allowing some synchronized parallelism of computation are investigated within the approach, where the computational processes, realized in discrete time, are represented by causal sets as in [2].
The representation of computational processes by causal sets is aimed at abstracting away those features of computational processes which do not have a spatial nature, so that the abstraction could make visible some new aspects of the processes, such as chaotic behaviour or a fractal shape.
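The causal-set representation can be illustrated on a simple model of computation. The following sketch is our own illustration in the spirit of Bolognesi's causal sets from simple models of computation [2], not code from the paper: the 1-cells of a rule-110 cellular automaton run are taken as spacetime events, ordered by the discrete light cone.

```python
# Sketch (our own illustration, in the spirit of [2]): extract a causal set
# from a rule-110 cellular automaton run. Events are the 1-cells (t, x) of
# the spacetime diagram; e1 = (t1, x1) causally precedes e2 = (t2, x2) iff
# t1 < t2 and |x2 - x1| <= t2 - t1 (the discrete light cone).

RULE110 = {(1, 1, 0), (1, 0, 1), (0, 1, 1), (0, 1, 0), (0, 0, 1)}

def step(row):
    """One synchronous update of elementary CA rule 110 (periodic bounds)."""
    n = len(row)
    return [1 if (row[(i - 1) % n], row[i], row[(i + 1) % n]) in RULE110
            else 0 for i in range(n)]

def causal_pairs(rows):
    """All causally related pairs of 1-cell events in the diagram."""
    events = [(t, x) for t, row in enumerate(rows)
              for x, v in enumerate(row) if v == 1]
    return {(e1, e2) for e1 in events for e2 in events
            if e1[0] < e2[0] and abs(e2[1] - e1[1]) <= e2[0] - e1[0]}

rows = [[0] * 9 + [1] + [0] * 9]      # single seed cell
for _ in range(6):
    rows.append(step(rows[-1]))
prec = causal_pairs(rows)

# the light-cone order is transitive, so prec is a genuine (strict) causal order
assert all((a, c) in prec for (a, b) in prec for (b2, c) in prec if b == b2)
```

The resulting pair (events, prec) is a finite causal set; geometric properties of such sets, e.g. their dimension, can then be studied independently of the non-spatial details of the computation.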
The aspects of chaotic behaviour and fractal shape, inspired by the research area of the dynamics of nonlinear systems [11] and the unpredictability of the behaviour of these systems¹, could suggest an answer to the following question formulated in [13]: Is the concept of randomness, founded on the concept of the absence of computable regularities, the only adequate and consistent one? In which direction, if any, should one look for alternatives?
The answers may have an impact on the design of pseudorandom number generators applied in statistics, cryptography, and Monte Carlo methods.
Thus the proposed approach comprises measuring the complexity of computational processes by the use of graph dimensions [6] and network fractal dimensions [10], in parallel to the measuring of the complexity of random strings by Hausdorff dimension in [4].
The proposed approach concerns investigations of abstract computing devices of two types:
- geometrical Gandy machines,
- Gandy machines over multisets, where these machines are counterparts of some uniform families of P systems (the underlying systems of membrane computing [9]), as in [5].
The geometrical Gandy machines are modifications of the known Gandy mechanisms [3], obtained by assuming that the sets of machine instantaneous descriptions² are skeletal sets, similarly to [7], with respect to the permutations of urelements, where the urelements are n-tuples of rational numbers; hence the adjective "geometrical".
It has been pointed out in [12] that Gandy mechanisms "conform to basic principles of physics"; see also [1]. The geometrical Gandy machines are aimed at respecting a claim (contained in the open problem in [12]) for Gandy mechanisms to be "more consistent with local causation in physics"; in other words, to be more realistic than (possibly) imaginary constructs within the theory of hereditarily finite sets, and hence less "technically somewhat complicated".
The Gandy machines over multisets are geometrical Gandy machines where the urelements appearing in the machine instantaneous descriptions are in addition labelled by multisets, and the machine rules of local causation respect the processing of multisets in the manner of P system evolution rules. These machines are aimed at presenting some uniform families of P systems [5] in terms of a single machine with a finite number of schemes of evolution rules.
The assumption that the sets of machine instantaneous descriptions are skeletal, together with the features of the machine local causation rules, provides a natural construction of the causal sets representing computational processes. These causal sets are here subsets of spacetime, i.e. sets of (n+1)-tuples of rational numbers if the urelements are n-tuples (forming phase space), and the causality relations are determined by the applications of the machine local causation rules.
The examples of Gandy-Păun-Rozenberg machines, including generalized machines, in [7] and [8] give rise to examples of geometrical Gandy machines after some slight modifications.
Some models of computation in [2] may give rise to examples of geometrical Gandy machines whose computational processes are represented by causal sets of fractal shape.
References

[1] P. Arrighi & G. Dowek (2011): The physical Church-Turing thesis and principles of quantum theory. arXiv:1102.1612.
[2] T. Bolognesi (2010): Causal sets from simple models of computation. Int. Journal of Unconventional Computing 6, pp. 489–524.
[3] R. Gandy (1980): Church's thesis and principles for mechanisms. In: The Kleene Symposium, North-Holland, Amsterdam, pp. 123–148, doi:10.1016/S0049-237X(08)71257-6.
[4] J. H. Lutz (2003): The dimensions of individual strings and sequences. Information and Computation 187, pp. 49–79, doi:10.1016/S0890-5401(03)00187-1.
[5] N. Murphy & D. Woods (2011): The computational power of membrane systems under tight uniformity conditions. Nat. Computing 10, pp. 613–632, doi:10.1007/s11047-010-9244-7.
[6] T. Nowotny & M. Requardt (1998): Dimension theory of graphs and networks. J. Phys. A: Math. Gen. 31, pp. 2447–2463, doi:10.1088/0305-4470/31/10/018.
[7] A. Obtułowicz (2011): Randomized Gandy-Păun-Rozenberg machines. In: Membrane Computing, Lecture Notes in Computer Science 6501, Springer, Berlin, pp. 305–324, doi:10.1007/978-3-642-18123-8_24.
[8] A. Obtułowicz (2012): Generalized Gandy-Păun-Rozenberg machines for tile systems and cellular automata. In: Conference on Membrane Computing 2011, Lecture Notes in Computer Science 7184, Springer, Berlin, pp. 314–322, doi:10.1007/978-3-642-28024-5_21.
[9] G. Păun, G. Rozenberg & A. Salomaa (2009): The Oxford Handbook of Membrane Computing. Oxford Univ. Press, Oxford.
[10] H. D. Rozenfeld, L. K. Gallos, Ch. Song & H. A. Makse (2009): Fractal and transfractal scale-free networks. In: Encyclopedia of Complexity and System Science, Springer, Berlin, pp. 3924–3943, doi:10.1007/978-0-387-30440-3_231.
[11] S. H. Strogatz (1994): Nonlinear Dynamics and Chaos. Perseus Books Publ., LLC.
[12] K. Sutner (1998): Computation theory of cellular automata. In: MFCS'98 Satellite Workshop on Cellular Automata, Brno, Czech Republic, August 25–27, 1998.
[13] S. B. Volchan (2002): What is a random sequence? Amer. Math. Monthly 109, pp. 46–63, doi:10.2307/2695767.
¹ Unpredictability due to sensitive dependence on initial conditions, an important feature of deterministic transient chaos [11], often having fractal shape.
² A machine instantaneous description is here a hereditarily finite set which describes the potential 'wireless' intercommunication between the urelements appearing in this set.
Machine Spaces: Axioms and Metrics
Jörg Zimmermann Institute of Computer Science University of Bonn, Germany jz [at] iai [dot] uni-bonn [dot] de 
Armin B. Cremers Institute of Computer Science University of Bonn, Germany abc [at] iai [dot] uni-bonn [dot] de 
1 Time Axioms
We use the axiomatic method in order to investigate the notion of an abstract machine and its implied
set of computable functions and complexity classes. Time is the driver of the computational processes, so we want to isolate the
abstract properties of time which are relevant from a computational point of view. Here we model time
as a totally ordered monoid, thus allowing discrete, continuous, and transfinite time structures.
All time elements can be compared and be combined by an associative
operation, denoted by ‘+’, which has a neutral element 0^{(t)}. Furthermore, the order
structure and the algebraic structure are compatible:
(Compatibility) ∀ t_{1}, t_{2}, t_{3}: t_{1} ≤ t_{2}
⇒ t_{1}+t_{3} ≤ t_{2}+t_{3} and t_{3}+t_{1} ≤
t_{3}+t_{2}.
Compatibility is required in both directions because the monoid operation need not be commutative (as is the case, for example, for the addition of ordinal numbers).
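As a small concrete illustration (our own toy encoding, not part of the text), ordinals below ω² can be represented as pairs (a, b) standing for ω·a + b, ordered lexicographically. Ordinal addition then gives a totally ordered monoid that satisfies both compatibility inequalities while failing commutativity:

```python
# Sketch (assumed toy encoding): ordinals below omega^2 as pairs (a, b)
# meaning omega*a + b, with the lexicographic order (Python's default tuple
# order). Ordinal addition is order-compatible on both sides but not
# commutative, which is why compatibility is axiomatized in both directions.

def o_add(x, y):
    a1, b1 = x
    a2, b2 = y
    # adding an ordinal with a limit part absorbs the finite tail on the left
    return (a1 + a2, b2) if a2 > 0 else (a1, b1 + b2)

omega, one = (1, 0), (0, 1)
assert o_add(one, omega) == omega       # 1 + omega = omega
assert o_add(omega, one) == (1, 1)      # omega + 1 > omega: not commutative

# both compatibility laws hold on a small sample of time elements
sample = [(a, b) for a in range(3) for b in range(3)]
assert all(o_add(t1, t3) <= o_add(t2, t3) and o_add(t3, t1) <= o_add(t3, t2)
           for t1 in sample for t2 in sample if t1 <= t2 for t3 in sample)
```

The neutral element here is (0, 0), playing the role of 0^{(t)}.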
2 Machine Axioms
Machines operate on a state space Σ, about which we want to say nothing
specific here. Further we introduce the input space I, output space O, and
program space P. An initializer is a mapping init from P
× I to Σ
and an output operator is a mapping out from Σ to O. A machine with respect to a time structure T and a state space Σ is a mapping M from Σ × T to Σ, denoted by M_{t}(s).
Finally, there is a subset HALT of Σ. States in HALT will be used
to signal termination of a computation. Let TERM_{M}(s) denote the set of time
points t with M_{t}(s) ∈ HALT. Now we require the following axioms:
(Start) ∀ s ∈ Σ: M_{0(t)}(s) = s
(i.e., M_{0(t)} = id_{Σ}),
(Action) ∀ t_{1}, t_{2} ∈ T: M_{t1+t2}
= M_{t2} ∘ M_{t1}.
These two axioms state that the time monoid is operating on the state space via
machine M.
(Stop) ∀ s ∈ Σ, t_{1}, t_{2} ∈ T:
t_{1} ∈ TERM_{M}(s) and t_{1} ≤ t_{2} ⇒
M_{t1}(s) = M_{t2}(s).
That is, after reaching a termination state, nothing changes anymore, i.e.,
termination states are fixpoints of the machine dynamics.
(WellTermination) ∀ s ∈ Σ: TERM_{M}(s) ≠ ∅
⇒ ∃ t_{1} ∈ TERM_{M}(s) ∀ t_{2}
∈ TERM_{M}(s): t_{1} ≤ t_{2}.
Welltermination requires that if a machine terminates on s, i.e., reaches
HALT for some point
in time, then there is a first point in time when this happens, or, equivalently,
the set of time points taking s to HALT has a least element, if it is not
empty. If TERM_{M}(s) is nonempty and t^{*} its least element,
then define M(s) = out(M_{t*}(s)), otherwise M(s) is left undefined.
Definition 2.1. A function f: I → O is implemented by
p ∈ P on M iff f(x) = M(init(p, x)) for all x ∈ I.
Functions f which are implementable on a machine M are called M-computable.
[p]_{M} denotes the (partial) function implemented by p on M.
3 A Metric on Machine Spaces
We now want to introduce a generalized metric on machine spaces which represents
some kind of distance between different machines by relating the resources
needed by the machines to solve the same problems. Here we focus on time
complexity as our resource measure:
Definition 3.1. time_{p}^{M}(x) = min(TERM_{M}(init(p,x))).
Definition 3.2. τ: T → T is an admissible time transfer
function (attf) from M_{1} to M_{2} iff τ is monotone and
∀ p_{1} ∈ P_{1} ∃ p_{2} ∈ P_{2}:
[p_{1}]_{M1} = [p_{2}]_{M2} and
∀ x ∈ I: time_{p2}^{M2}
(x) ≤
τ(time_{p1}^{M1}
(x)).
Two machines M_{1} and M_{2} are time-compatible if they operate on the same time structure, input space and output space. Note that the state spaces and the program spaces need not be the same.
We now define a generalized metric Δ^{(t)} which takes two time-compatible machines and maps them to an element of ℘(T^{T}), i.e., the distance is represented by a set of functions on the time structure T:
Definition 3.3. Δ^{(t)}(M_{1}, M_{2}) =
{τ  τ is an attf from M_{1} to M_{2}}.
We use the whole set of attfs, because there seems to be no obvious way to
single one out. Additionally, one can combine and compare sets of functions
much like single functions. However, future research may identify a canonical
element in the set of attfs, maybe by introducing additional axioms.
Definition 3.4. Let α, β ⊆ T^{T}. Then define their
composition as α ∘ β = {τ_{1} ∘ τ_{2}  τ_{1} ∈
α, τ_{2} ∈ β}.
Definition 3.5. α ≤ β iff ∀ τ_{2} ∈ β
∃ τ_{1} ∈ α: τ_{1} ≤ τ_{2}.
With these definitions the sets of attfs become a directedly ordered monoid (dom).
A directed order is a partial order where two elements always have an
upper bound, and directed monoids can be used as ranges for generalized
metrics, allowing many standard constructions of topology [2].
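Definitions 3.4 and 3.5 can be sketched directly. In the following toy code (our own illustration, not the authors' implementation), the time structure is T = ℕ and the universal quantifier over time points is approximated by a finite sample:

```python
# Sketch (our own illustration): composition and comparison of sets of time
# transfer functions over T = N, following Definitions 3.4 and 3.5. "For all
# t in T" is approximated by a finite sample of time points.

def compose(alpha, beta):
    """alpha ∘ beta = { tau1 ∘ tau2 | tau1 in alpha, tau2 in beta }."""
    return [lambda t, f=f, g=g: f(g(t)) for f in alpha for g in beta]

def leq(alpha, beta, sample=range(50)):
    """alpha <= beta iff every tau2 in beta dominates some tau1 in alpha."""
    return all(any(all(t1(t) <= t2(t) for t in sample) for t1 in alpha)
               for t2 in beta)

double = [lambda t: 2 * t]        # a hypothetical attf set {t -> 2t}
square = [lambda t: t * t + 1]    # a hypothetical attf set {t -> t^2 + 1}

assert leq(double, square)        # 2t <= t^2 + 1 for all t in N
assert not leq(square, double)    # t^2 + 1 > 2t at t = 0, so no dominator
comp = compose(double, square)    # contains t -> 2 * (t^2 + 1)
assert comp[0](3) == 20
```

The order `leq` is only a pseudo-order on sets of functions, matching the pseudometric character of Δ^{(t)} noted below.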
Our metric can be classified as a domvalued directed pseudometric,
satisfying the following triangle inequality:
Δ^{(t)}(M_{1}, M_{3}) ≤
Δ^{(t)}(M_{2}, M_{3}) ∘ Δ^{(t)}(M_{1}, M_{2}).
This triangle inequality enables the notion of ε-balls in machine space, which, for example, can be used to explore questions regarding model risk and robustness in statistics.
The proposed ideas and concepts are inspired by work started and continued in
[1], [4], [5], and [3].
References
[1] R. Gandy (1980): Church's Thesis and principles for mechanisms. In J. Barwise, H. J. Keisler & K. Kunen, editors: The Kleene Symposium, North-Holland, pp. 123–148, doi:10.1016/S0049-237X(08)71257-6.
[2] R. Kopperman (1988): All topologies come from generalized metrics. The American Mathematical Monthly 95(2), pp. 89–97, doi:10.2307/2323060.
[3] T. Neary & D. Woods (2012): The Complexity of Small Universal Turing Machines: A Survey. In: SOFSEM 2012, LNCS 7147, Springer, pp. 385–405, doi:10.1007/978-3-642-27660-6_32.
[4] M. B. Pour-El & I. Richards (1982): Noncomputability in models of physical phenomena. International Journal of Theoretical Physics 21(6-7), pp. 553–555, doi:10.1007/BF02650184.
[5] W. Sieg (2008): Church Without Dogma: Axioms for Computability. In S. B. Cooper, B. Löwe & A. Sorbi, editors: New Computational Paradigms, Springer, pp. 139–152, doi:10.1007/978-0-387-68546-5_7.