Theoretical Computer Science
Faculty of Mathematics and Computer Science
Jagiellonian University
Algorithmics Research Group
Theoretical Computer Science Seminar
Wednesday: 16:15 - 18:00, room 0174
This is our main seminar. We study problems in graph theory, partially ordered sets, online algorithms, and computational complexity.
This is also the place where we present our most recent results.
table edited by: Paweł Idziak
Paweł Zegartowski
Cache-Oblivious Hashing
The hash table, especially its external-memory version, is one of the most important index structures in large databases. Assuming a truly random hash function, it is known that in a standard external hash table with block size b, searching for a particular key takes only an expected average of t_q = 1 + 1/2^{Ω(b)} disk accesses for any load factor α bounded away from 1. However, such near-perfect performance is achieved only when b is known and the hash table is tuned particularly for working with such a blocking. In this paper we study whether it is possible to build a cache-oblivious hash table that works well with any blocking. Such a hash table would automatically perform well across all levels of the memory hierarchy and would not need any hardware-specific tuning, an important feature in autonomous databases.

We first show that linear probing, a classical collision-resolution strategy for hash tables, can easily be made cache-oblivious, but it only achieves t_q = 1 + Θ(α/b) even if a truly random hash function is used. Then we demonstrate that the block probing algorithm (Pagh et al. in SIAM Rev. 53(3):547–558, 2011) achieves t_q = 1 + 1/2^{Ω(b)}, thus matching the cache-aware bound, if the following two conditions hold: (a) b is a power of 2; and (b) every block starts at a memory address divisible by b. Note that the two conditions hold on a real machine, although they are not stated in the cache-oblivious model. Interestingly, we also show that neither condition is dispensable: if either of them is removed, the best obtainable bound is t_q = 1 + O(α/b), which is exactly what linear probing achieves.

  • Rasmus Pagh, Zhewei Wei, Ke Yi, Qin Zhang, Cache-Oblivious Hashing, Algorithmica (2014) 69:864-883
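As an illustration of the probing strategy discussed above, here is a minimal linear-probing table in Python. This is only a toy sketch (no deletions or resizing, and Python's built-in hash stands in for the truly random hash function assumed in the abstract), but it shows why blocked memory serves linear probing well: a lookup scans a contiguous run of cells.

```python
# A minimal linear-probing hash table.  On a miss, the probe sequence scans
# a contiguous run of cells, which is why a lookup touches an expected
# 1 + O(alpha/b) blocks when memory is divided into blocks of b cells.
class LinearProbingTable:
    def __init__(self, capacity):
        self.slots = [None] * capacity  # no resizing: keep load factor < 1

    def _probe(self, key):
        i = hash(key) % len(self.slots)
        while self.slots[i] is not None and self.slots[i][0] != key:
            i = (i + 1) % len(self.slots)  # scan the next contiguous cell
        return i

    def insert(self, key, value):
        self.slots[self._probe(key)] = (key, value)

    def lookup(self, key):
        entry = self.slots[self._probe(key)]
        return entry[1] if entry is not None else None

t = LinearProbingTable(8)
t.insert("a", 1)
t.insert("b", 2)
print(t.lookup("a"))  # 1
```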
  • 03.06.2015,
    Łukasz Lachowski
    An Algorithmic Characterization of Polynomial Functions over Z_{p^n}
    In this paper we consider polynomial representability of functions defined over Z_{p^n}, where p is a prime and n is a positive integer. Our aim is to provide an algorithmic characterization that (i) answers the decision problem: determine whether a given function over Z_{p^n} is polynomially representable, and (ii) finds the polynomial if it is. The previous characterizations, given by Kempner (Trans. Am. Math. Soc. 22(2):240–266, 1921) and Carlitz (Acta Arith. 9(1):67–78, 1964), are existential in nature and only lead to an exhaustive search method, i.e., an algorithm with complexity exponential in the size of the input. Our characterization leads to an algorithm whose running time is linear in the size of the input. We also extend our result to the multivariate case.
  • Ashwin Guha, Ambedkar Dukkipati, An Algorithmic Characterization of Polynomial Functions over Z_{p^n}, Algorithmica (2015) 71:201-218
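To make the contrast with the exhaustive-search baseline concrete, here is a naive exponential-time representability test (our own illustrative sketch, not the paper's linear-time algorithm). It relies on the fact that every polynomial function mod m can be induced by a polynomial of degree below m, since m | m! makes the falling factorial x(x−1)···(x−m+1) vanish mod m as a function.

```python
from itertools import product

def is_poly_representable(f, m, max_deg=None):
    """Return coefficients (constant term first) of a polynomial inducing
    f: Z_m -> Z_m, or None if no polynomial does.

    Tries every coefficient vector with m choices per coefficient, so the
    running time is about m**m -- the exponential baseline the paper beats.
    """
    if max_deg is None:
        max_deg = m  # degree < m suffices: the length-m falling factorial
                     # is divisible by m! and hence vanishes mod m
    for coeffs in product(range(m), repeat=max_deg):
        if all(sum(c * pow(x, i, m) for i, c in enumerate(coeffs)) % m
               == f(x) for x in range(m)):
            return coeffs
    return None

print(is_poly_representable(lambda x: (x * x) % 4, 4))         # (0, 0, 1, 0)
print(is_poly_representable(lambda x: 1 if x == 2 else 0, 4))  # None
```

The second call returns None: the indicator of {2} is not polynomial mod 4, since any integer polynomial satisfies g(0) ≡ g(2) (mod 2).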
  • 27.05.2015,
    Lech Duraj
    Quadratic-time hardness of longest common subsequence
    Łukasz Majcher
    List Coloring in the Absence of a Linear Forest
    The k-COLORING problem is to decide whether a graph can be colored with at most k colors such that no two adjacent vertices receive the same color. The LIST k-COLORING problem requires in addition that every vertex u receive a color from a given set L(u) ⊆ {1,...,k}. Let P_n denote the path on n vertices, and let G+H and rH denote the disjoint union of two graphs G and H and of r copies of H, respectively. For any two fixed integers k and r, we show that LIST k-COLORING can be solved in polynomial time for graphs with no induced rP_1 + P_5, thereby extending the result of Hoàng, Kamiński, Lozin, Sawada and Shu for graphs with no induced P_5. Our result is tight; we prove that for any graph H that is a supergraph of P_1 + P_5 with at least 5 edges, already LIST 5-COLORING is NP-complete for graphs with no induced H.
  • Jean-François Couturier, Petr A. Golovach, Dieter Kratsch, Daniël Paulusma, List Coloring in the Absence of a Linear Forest, Algorithmica (2015) 71:21–35
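To make the problem statement concrete, here is a brute-force LIST k-COLORING solver for small instances (our own sketch; the polynomial-time algorithm in the paper is far more involved).

```python
# Brute-force LIST coloring: try every assignment drawn from the lists and
# keep the first one that is proper.  Vertices are 0..n-1; lists[u] = L(u).
from itertools import product

def list_coloring(n, edges, lists):
    """Return a valid coloring as a tuple of colors, or None."""
    for colors in product(*(sorted(lists[u]) for u in range(n))):
        if all(colors[u] != colors[v] for u, v in edges):
            return colors
    return None

# A triangle where every vertex may only use {1, 2} is not list-colorable,
# but giving one vertex a third color makes it so.
triangle = [(0, 1), (1, 2), (0, 2)]
print(list_coloring(3, triangle, [{1, 2}] * 3))          # None
print(list_coloring(3, triangle, [{1, 2}, {1, 2}, {3}])) # (1, 2, 3)
```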
  • 13.05.2015,
    Krzysztof Kulig
    Metrical Service Systems with Multiple Servers
    The problem of metrical service systems with multiple servers ((k,l)-MSSMS), proposed by Feuerstein (LATIN’98: Theoretical Informatics, Third Latin American Symposium, 1998), is to service requests, each of which is an l-point subset of a metric space, using k servers in an online manner, minimizing the distance traveled by the servers. We prove that Feuerstein’s deterministic algorithm for (k,l)-MSSMS actually achieves an improved competitive ratio of k\cdot\left(\binom{k+l}{l}-1\right) on uniform metrics.
  • Ashish Chiplunkar, Sundar Vishwanathan, Metrical Service Systems with Multiple Servers, Algorithmica (2015) 71:219–231
  • 06.05.2015,
    Maciej Solon
    Minimum Fill-in of Sparse Graphs: Kernelization and Approximation
    The MINIMUM FILL-IN problem is to decide if a graph can be triangulated by adding at most k edges. The problem has important applications in numerical algebra, in particular in sparse matrix computations. We develop kernelization algorithms for the problem on several classes of sparse graphs. We obtain linear kernels on planar graphs, and kernels of size O(k^{3/2}) in graphs excluding some fixed graph as a minor and in graphs of bounded degeneracy. As a byproduct of our results, we obtain approximation algorithms with approximation ratios O(log k) on planar graphs and O(√k·log k) on H-minor-free graphs. These results significantly improve the previously known kernelization and approximation results for MINIMUM FILL-IN on sparse graphs.  
  • Fedor V. Fomin, Geevarghese Philip, Yngve Villanger, Minimum Fill-in of Sparse Graphs: Kernelization and Approximation, Algorithmica (2015) 71:1–20
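For intuition, here is a brute-force MINIMUM FILL-IN solver for tiny graphs (an illustrative sketch with names of our choosing, not the paper's kernelization). Chordality is tested by repeatedly deleting simplicial vertices (vertices whose remaining neighborhood is a clique), which succeeds exactly on chordal graphs.

```python
from itertools import combinations

def is_chordal(n, edges):
    adj = {u: set() for u in range(n)}
    for u, v in edges:
        adj[u].add(v); adj[v].add(u)
    left = set(range(n))
    while left:
        # find a simplicial vertex among the remaining ones
        simplicial = next((u for u in left
                           if all(w in adj[v] for v in adj[u] & left
                                  for w in adj[u] & left if v < w)), None)
        if simplicial is None:
            return False
        left.remove(simplicial)
    return True

def min_fill_in(n, edges):
    """Smallest number of added edges that makes the graph chordal."""
    present = set(map(frozenset, edges))
    missing = [e for e in combinations(range(n), 2)
               if frozenset(e) not in present]
    for k in range(len(missing) + 1):
        for extra in combinations(missing, k):
            if is_chordal(n, edges + list(extra)):
                return k

# The 4-cycle needs exactly one chord:
print(min_fill_in(4, [(0, 1), (1, 2), (2, 3), (3, 0)]))  # 1
```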
  • 29.04.2015,
    Agnieszka Łupińska
    Strong Conflict-Free Coloring for Intervals
    We consider the k-strong conflict-free (k-SCF) coloring of a set of points on a line with respect to a family of intervals: each point on the line must be assigned a color so that the coloring is conflict-free in the following sense: in every interval I of the family there are at least k colors each appearing exactly once in I. We first present a polynomial-time approximation algorithm for the general problem; the algorithm has approximation ratio 2 when k=1 and 5−2/k when k≥2. In the special case of a family that contains all possible intervals on the given set of points, we show that a 2-approximation algorithm exists, for any k≥1. We also provide, in case k=O(polylog(n)), a quasipolynomial-time algorithm to decide the existence of a k-SCF coloring that uses at most q colors.
  • Panagiotis Cheilaris, Luisa Gargano, Adele A. Rescigno, Shakhar Smorodinsky, Strong Conflict-Free Coloring for Intervals, Algorithmica (2014) 70:732-749
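A short checker for the k-SCF property makes the definition concrete (our own sketch; for intervals with fewer than k points we require only min(k, |I|) uniquely occurring colors, which is the usual convention and an assumption on our part).

```python
# Verify the k-strong conflict-free property: every interval of the family
# must contain at least k colors that appear exactly once in it.
from collections import Counter

def is_k_scf(coloring, intervals, k):
    """coloring: colors of points 0..n-1; intervals: (lo, hi), inclusive."""
    for lo, hi in intervals:
        counts = Counter(coloring[lo:hi + 1])
        unique = sum(1 for c in counts.values() if c == 1)
        if unique < min(k, hi - lo + 1):
            return False
    return True

# 1,2,1,2 is 1-SCF for all length-2 windows, but not 2-SCF for the whole line:
print(is_k_scf([1, 2, 1, 2], [(0, 1), (1, 2), (2, 3)], 1))  # True
print(is_k_scf([1, 2, 1, 2], [(0, 3)], 2))                  # False
```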
  • 22.04.2015,
    Marcin Regdos
    An O(n^4) Time Algorithm to Compute the Bisection Width of Solid Grid Graphs
    The bisection problem asks for a partition of the n vertices of a graph into two sets of size at most ⌈n/2⌉, so that the number of edges connecting the sets is minimized. A grid graph is a finite connected subgraph of the infinite two-dimensional grid. It is called solid if it has no holes. Papadimitriou and Sideri (Theory Comput. Syst. 29:97–110, 1996) gave an O(n^5) time algorithm to solve the bisection problem on solid grid graphs. We propose a novel approach that exploits structural properties of optimal cuts within a dynamic program. We show that our new technique leads to an O(n^4) time algorithm.
  • Andreas Emil Feldmann, Peter Widmayer, An O(n^4) Time Algorithm to Compute the Bisection Width of Solid Grid Graphs, Algorithmica (2015) 71:181-200
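As a reference point for the O(n^4) algorithm above, a brute-force bisection computation on small graphs is immediate (our own sketch, exponential in n):

```python
# Exhaustive bisection width: try every balanced vertex split and count
# the crossing edges.
from itertools import combinations

def bisection_width(n, edges):
    best = len(edges)
    for side in combinations(range(n), n // 2):
        s = set(side)  # the other side gets the remaining ceil(n/2) vertices
        cut = sum(1 for u, v in edges if (u in s) != (v in s))
        best = min(best, cut)
    return best

# The 2x3 solid grid has bisection width 3 (e.g. cut off the top row):
grid = [(0, 1), (1, 2), (3, 4), (4, 5), (0, 3), (1, 4), (2, 5)]
print(bisection_width(6, grid))  # 3
```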
  • 15.04.2015,
    Maciej Bendkowski
    Contention Resolution under Selfishness
    In many communications settings, such as wired and wireless local-area networks, when multiple users attempt to access a communication channel at the same time, a conflict results and none of the communications are successful. Contention resolution is the study of distributed transmission and retransmission protocols designed to maximize notions of utility such as channel utilization in the face of blocking communications.

    An additional issue to be considered in the design of such protocols is that selfish users may have an incentive to deviate from the prescribed behavior, if another transmission strategy increases their utility. The work of Fiat et al. (in SODA’07, pp. 179–188, SIAM, Philadelphia, 2007) addresses this issue by constructing an asymptotically optimal incentive-compatible protocol. However, their protocol assumes the cost of any single transmission is zero, and the protocol completely collapses under non-zero transmission costs.

    In this paper we treat the case of non-zero transmission cost c. We present asymptotically optimal contention resolution protocols that are robust to selfish users, in two different channel feedback models. Our main result is in the Collision Multiplicity Feedback model, where after each time slot, the number of attempted transmissions is returned as feedback to the users. In this setting, we give a protocol that has expected cost Θ(n + c·log n) and is in o(1)-equilibrium, where n is the number of users.

  • George Christodoulou, Katrina Ligett, Evangelia Pyrga, Contention Resolution under Selfishness, Algorithmica (2014) 70:675–693
  • 08.04.2015,
    Patryk Mikos
    Randomized Online Graph Coloring
    Lech Duraj, Grzegorz Gutowski, Jakub Kozik
    Chip games and paintability
    We present a natural family of chip games with strong ties to paintability, on-line 2-coloring of hypergraphs, and Maker-Breaker games. We solve some of these games and, as a result, obtain interesting results in the aforementioned areas. One of them is that the difference between the paintability and the choosability of a graph can be arbitrarily large.
    Jarosław Duda
    Asymmetric Numeral Systems: adding fractional bits to Huffman coder
    Entropy coding is an integral part of most data compression systems. Two approaches have dominated so far: Huffman coding, which is fast but approximates probabilities with powers of 1/2 (suboptimal compression ratio), and arithmetic coding, which uses nearly accurate probabilities at the cost of being an order of magnitude slower (more expensive). I will talk about a new approach: Asymmetric Numeral Systems (ANS), which, while using nearly accurate probabilities, has turned out to allow even faster implementations than Huffman coding. Consequently, several recent compressors have already switched to ANS.
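The core idea of ANS can be sketched in a few lines: the whole message is carried in a single integer x, and encoding a symbol s with frequency f_s out of a total M costs about log2(M/f_s) bits, i.e. nearly exact probabilities. The toy below (our own sketch) uses Python big integers in place of the renormalized fixed-width state of production coders.

```python
# A toy range-ANS coder.  State update: x' = (x // f_s)*M + cum_s + x % f_s;
# decoding inverts this exactly, symbol by symbol, in reverse order.
def build_tables(freqs):
    M = sum(freqs.values())
    cum, c = {}, 0
    for s in sorted(freqs):
        cum[s] = c
        c += freqs[s]
    # sym_of[slot] = the symbol owning slot 'slot' in [0, M)
    sym_of = [s for s in sorted(freqs) for _ in range(freqs[s])]
    return M, cum, sym_of

def encode(message, freqs):
    M, cum, _ = build_tables(freqs)
    x = 1
    for s in reversed(message):  # encode back to front
        x = (x // freqs[s]) * M + cum[s] + x % freqs[s]
    return x

def decode(x, n, freqs):
    M, cum, sym_of = build_tables(freqs)
    out = []
    for _ in range(n):
        s = sym_of[x % M]
        x = freqs[s] * (x // M) + x % M - cum[s]
        out.append(s)
    return "".join(out)

msg = "abracadabra"
freqs = {"a": 5, "b": 2, "c": 1, "d": 1, "r": 2}
print(decode(encode(msg, freqs), len(msg), freqs) == msg)  # True
```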
    Piotr Danilewski (Universität des Saarlandes)
    AnyDSL - a host for any language
    In a multi-domain project, there is no single programming language that can be used to program everything. Instead, a combination of general-purpose and domain-specific languages (DSLs) is used. Unfortunately, many domains lack a good, representative DSL, due to domain diversity and the difficulty of creating a new compiler. Moreover, communication across the languages is limited, often requiring the data to be serialized and reducing the quality of optimization and compile-time verification.

    In our talk we present our approach to solving these problems by introducing a new metamorphic language - AnyDSL. The parsing and execution of AnyDSL can be interleaved, and the latter can influence the former - e.g., by introducing a new grammar with which parsing should resume. Regardless of the language the source is written in, all code is translated into a low-level, functional representation in continuation-passing style (AIR). AIR serves as a "meeting point", permitting inter-DSL communication. AIR can be interpreted or compiled to LLVM bitcode and then to machine code.

    New grammars are defined also within AnyDSL. Unlike typical parser generators, grammar productions and actions are given as functions. AIR supports dynamic staging - a flexible way to define partial evaluation strategies. With it the overhead of having multiple layers of languages can be resolved early. It also allows the DSL designer to specify domain specific optimizations. After all those transformations, AIR can be compiled to machine code that is efficient, with performance comparable to programs written in general-purpose languages.


    Michał Zając
    Improved Explicit Data Structures in the Bitprobe Model
    Buhrman et al. [SICOMP 2002] studied the membership problem in the bitprobe model, presenting both randomized and deterministic schemes for storing a set of size n from a universe of size m such that membership queries on the set can be answered using t bit probes. Since then, there have been several papers focusing on deterministic schemes, especially for the first non-trivial case when n=2. The most recent, due to Radhakrishnan, Shah, and Shannigrahi [ESA 2010], describes non-explicit schemes (existential results) for t≥3 using probabilistic arguments. We describe a fully explicit scheme for n=2 that matches their space bound of Θ(m^{2/5}) bits for t=3 and, furthermore, improves upon it for t>3, answering their open problem. Our structure (consisting of query and storage algorithms) manipulates blocks of bits of the query element in a novel way that may be of independent interest. We also describe recursive schemes for n≥3 that improve upon all previous fully explicit schemes for a wide range of parameters.  
  • Moshe Lewenstein, J. Ian Munro, Patrick K. Nicholson and Venkatesh Raman, Improved Explicit Data Structures in the Bitprobe Model, ESA 2014, LNCS 8737, pp. 630–641, 2014
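For contrast with the compressed schemes above, the trivial baseline in the bit-probe model is the characteristic vector: m bits of space and a single probe per query. The schemes in the paper spend a few (adaptively chosen) probes to get far below m bits. A minimal sketch:

```python
# The 1-probe baseline for set membership: store the characteristic vector
# of S over a universe of size m; a query reads exactly one bit.
def store(universe_size, S):
    bits = [0] * universe_size
    for x in S:
        bits[x] = 1
    return bits

def query(bits, x):
    return bits[x] == 1  # exactly one bit probe

bits = store(10, {2, 7})
print(query(bits, 7), query(bits, 3))  # True False
```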
  • 21.01.2015,
    Andrzej Głuszyński
    Data Structures for Storing Small Sets in the Bitprobe Model
    We study the following set membership problem in the bit probe model: given a set S from a finite universe U, represent it in memory so that membership queries of the form “Is x in S?” can be answered with a small number of bitprobes. We obtain explicit schemes that come close to the information theoretic lower bound of Buhrman et al. [STOC 2000, SICOMP 2002] and improve the results of Radhakrishnan et al. [ESA 2001] when the size of sets and the number of probes is small.

    We show that any scheme that stores sets of size two from a universe of size m and answers membership queries using two bitprobes requires space Ω(m^{4/7}). The previous best lower bound (shown by Buhrman et al. using information theoretic arguments) was Ω(√m). The same lower bound applies for larger sets using standard padding arguments. This is the first instance where the information theoretic lower bound is found to be not tight for adaptive schemes.

    We show that any non-adaptive three probe scheme for storing sets of size two from a universe of size m requires Ω(√m) bits of memory. This extends a result of Alon and Feige [SODA 2009] to small sets.  

  • Jaikumar Radhakrishnan, Smit Shah and Saswata Shannigrahi, Data Structures for Storing Small Sets in the Bitprobe Model, ESA 2010, Part II, LNCS 6347, pp. 159–170, 2010.
  • 14.01.2015,
    Andrzej Dorobisz
    Scheduling parallel jobs to minimize the makespan
    We consider the NP-hard problem of scheduling parallel jobs with release dates on identical parallel machines to minimize the makespan. A parallel job requires simultaneously a prespecified, job-dependent number of machines when being processed. We prove that the makespan of any nonpreemptive list-schedule is within a factor of 2 of the optimal preemptive makespan. This gives the best-known approximation algorithms for both the preemptive and the nonpreemptive variant of the problem. We also show that no list-scheduling algorithm can achieve a better performance guarantee than 2 for the nonpreemptive problem, no matter which priority list is chosen.

    List-scheduling also works in the online setting where jobs arrive over time and the length of a job becomes known only when it completes; it therefore yields a deterministic online algorithm with competitive ratio 2 as well. In addition, we consider a different online model in which jobs arrive one by one and need to be scheduled before the next job becomes known. We show that no list-scheduling algorithm has a constant competitive ratio. Still, we present the first online algorithm for scheduling parallel jobs with a constant competitive ratio in this context. We also prove a new information-theoretic lower bound of 2.25 for the competitive ratio of any deterministic online algorithm for this model. Moreover, we show that 6/5 is a lower bound for the competitive ratio of any deterministic online algorithm for the preemptive version of the model with jobs arriving over time.

  • Berit Johannes, Scheduling parallel jobs to minimize the makespan, Journal of Scheduling, 9 (2006), 433–452
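The greedy rule analyzed above is easy to simulate. The sketch below (our own toy version, with all jobs available at time 0) implements list scheduling for rigid parallel jobs: whenever machines free up, start the first job in the list that fits on the idle machines.

```python
# List scheduling of rigid parallel jobs on m identical machines.
# jobs: list of (machines_needed, processing_time); returns the makespan.
import heapq

def list_schedule(m, jobs):
    t, free = 0.0, m
    running = []                       # min-heap of (finish_time, machines)
    pending = list(jobs)
    while pending or running:
        started = True
        while started:                 # start every job that currently fits
            started = False
            for j, (need, p) in enumerate(pending):
                if need <= free:       # greedy: first job in the list
                    free -= need
                    heapq.heappush(running, (t + p, need))
                    pending.pop(j)
                    started = True
                    break
        t, done = heapq.heappop(running)   # advance to the next completion
        free += done
        while running and running[0][0] == t:
            free += heapq.heappop(running)[1]
    return t

# Two 2-machine jobs and two 1-machine jobs on 3 machines:
print(list_schedule(3, [(2, 2.0), (1, 3.0), (2, 2.0), (1, 1.0)]))  # 4.0
```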
  • 07.01.2015,
    Łukasz Kapica
    On an on-line scheduling problem for parallel jobs
    The non-preemptive on-line scheduling of parallel jobs is addressed. In particular we assume that the release dates and the processing times of the jobs are unknown. It is already known that for this problem Garey and Graham’s list scheduling algorithm achieves the competitive factor 2−1/m for the makespan if m identical machines are available and if each job requires only a single machine for processing. Here, we show that the same factor also holds in the case of parallel jobs.  
  • Edwin Naroska, Uwe Schwiegelshohn, On an on-line scheduling problem for parallel jobs, Information Processing Letters, 81(2002), 297–304.
  • 17.12.2014,
    Bartosz Walczak
    Minors and dimension
    Streib and Trotter proved in 2012 that posets with bounded height and with planar cover graphs have bounded dimension. Later, Joret et al. proved that the dimension is bounded for posets with bounded height whose cover graphs have bounded tree-width. Recently, I proved that posets of bounded height whose cover graphs exclude a fixed (topological) minor have bounded dimension. This generalizes both the aforementioned results and verifies a conjecture of Joret et al. In this talk, I will introduce the problems of bounding the dimension of posets with sparse cover graphs and the main structural theorems used in the proof of the latter result: the Robertson-Seymour and Grohe-Marx structural decomposition theorems. I will also briefly describe the idea of the proof.  
    Tomasz Kołodziejski
    Opaque sets or how to find a pipe
    We'll tackle the problem of finding the smallest set in a given class that meets every line intersecting a given convex set. Such a set is known as a barrier. Particularly interesting barrier classes are connected sets, polylines, and arbitrary segment barriers. The algorithmic approach yields various approximation constants around 1.6. Little is known about exact barriers, even for simple figures. We will present algorithms and proofs, most of which require only basic planar geometry and a little calculus (the Cauchy surface-area formula will be stated without proof).
    Adam Gągol
    Very new results on graph sharing games
    Adam Polak
    On treewidth parametrization of nonpreemptive multicoloring problem
    In the multicoloring problem we are given a graph in which every vertex has some nonnegative integer demand. We have to assign to each vertex a set of colors of the size of the demand of this vertex, in such a way that the sets of any two neighboring vertices are disjoint. In the nonpreemptive version of the problem each set of colors has to be an interval of the natural numbers. The goal is either to minimize the sum of the assigned colors, or to minimize the number of different colors used. In this talk we will discuss the fixed-parameter tractability of both these problems when parametrized by the treewidth of the input graph and the maximum demand, by the treewidth and the number of different demands, and by the treewidth alone.
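The nonpreemptive version can be phrased as assigning each vertex u a start s_u, so that u occupies the color interval [s_u, s_u + demand_u) and adjacent intervals are disjoint. A brute-force minimizer for the number-of-colors objective on tiny graphs (our own illustrative sketch, not the talk's FPT algorithm):

```python
# Brute-force nonpreemptive multicoloring: try all start times and keep the
# smallest makespan (largest color used) over conflict-free assignments.
from itertools import product

def min_multicolor_span(n, edges, demand):
    bound = sum(demand)             # a sequential schedule always works
    best = bound
    for starts in product(range(bound), repeat=n):
        # intervals [s_u, s_u+d_u) of adjacent vertices must not overlap
        if any(starts[u] < starts[v] + demand[v] and
               starts[v] < starts[u] + demand[u] for u, v in edges):
            continue
        best = min(best, max(starts[u] + demand[u] for u in range(n)))
    return best

# A path on 3 vertices with demands 2,1,2: the endpoints may share colors.
print(min_multicolor_span(3, [(0, 1), (1, 2)], [2, 1, 2]))  # 3
```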
    Grzegorz Gutowski
    Open Problem Session
    A few interesting and promising open problems, including, but not limited to:
    * Coloring triangle-free graphs,
    * Complexity of graph classes defined by forbidden ordered subgraphs,
    * Reconstructing random strings from random substrings,
    * Scheduling multiprocessor jobs,
    * Storing small sets in just a few bits,
    * Colorful homomorphisms of planar graphs,
    * Domination games.  
    Radosław Smyrek
    Shortest Path Problems on a Polyhedral Surface (by Atlas F. Cook IV, Carola Wenk)
    We describe algorithms to compute edge sequences, a shortest path map, and the Fréchet distance for a convex polyhedral surface. Distances on the surface are measured by the length of a Euclidean shortest path. We describe how the star unfolding changes as a source point slides continuously along an edge of the convex polyhedral surface. We describe alternative algorithms to the edge sequence algorithm of Agarwal et al. (SIAM J. Comput. 26(6):1689–1713, 1997) for a convex polyhedral surface. Our approach uses persistent trees, star unfoldings, and kinetic Voronoi diagrams. We also show that the core of the star unfolding can overlap itself when the polyhedral surface is non-convex.  
  • Atlas F. Cook IV, Carola Wenk, Shortest Path Problems on a Polyhedral Surface, Algorithmica (2014) 69:58–77
  • 04.06.2014,
    Gabriel Fortin
    On Cutwidth Parameterized by Vertex Cover (by Marek Cygan et al.)
    We study the CUTWIDTH problem, where the input is a graph G, and the objective is to find a linear layout of the vertices that minimizes the maximum number of edges intersected by any vertical line inserted between two consecutive vertices. We give an algorithm for CUTWIDTH with running time O(2^k n^O(1)). Here k is the size of a minimum vertex cover of the input graph G, and n is the number of vertices in G. Our algorithm gives an O(2^{n/2} n^O(1)) time algorithm for CUTWIDTH on bipartite graphs as a corollary. This is the first non-trivial exact exponential-time algorithm for CUTWIDTH on a graph class where the problem remains NP-complete. Additionally, we show that CUTWIDTH parameterized by the size of the minimum vertex cover of the input graph does not admit a polynomial kernel unless NP ⊆ coNP/poly. Our kernelization lower bound contrasts with the recent results of Bodlaender et al. (ICALP, Springer, Berlin, 2011; SWAT, Springer, Berlin, 2012) that both TREEWIDTH and PATHWIDTH parameterized by vertex cover do admit polynomial kernels.
  • Marek Cygan, Daniel Lokshtanov, Marcin Pilipczuk, Michał Pilipczuk, Saket Saurabh, On Cutwidth Parameterized by Vertex Cover, Algorithmica (2014) 68:940–953
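The objective is easy to state in code: for a layout, the width of the gap after position i−1 is the number of edges with one endpoint on each side. A brute-force cutwidth for small graphs (our own sketch; the paper's algorithm avoids the n! factor):

```python
# Exhaustive cutwidth: try every linear layout, count the edges crossing
# each gap between consecutive positions, and take the best layout.
from itertools import permutations

def cutwidth(n, edges):
    best = len(edges)
    for order in permutations(range(n)):
        pos = {v: i for i, v in enumerate(order)}
        # gap i separates positions < i from positions >= i
        width = max(sum(1 for u, v in edges
                        if min(pos[u], pos[v]) < i <= max(pos[u], pos[v]))
                    for i in range(1, n))
        best = min(best, width)
    return best

# The star K_{1,3} has cutwidth 2: wherever the center sits, some gap is
# crossed by two of its three edges.
print(cutwidth(4, [(0, 1), (0, 2), (0, 3)]))  # 2
```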
  • 28.05.2014,
    Krzysztof Pasek
    Online Square Packing with Gravity (by S.P.Fekete, T.Kamphans, N.Schweer)
    We analyze the problem of packing squares in an online fashion: given a semi-infinite strip of width 1 and an unknown sequence of squares of side length in [0, 1] that arrive from above, one at a time, the objective is to pack these items as they arrive, minimizing the resulting height. Just like in the classical game of Tetris, each square must be moved along a collision-free path to its final destination. In addition, we account for gravity in both motion (squares must never move up) and position (any final destination must be supported from below). A similar problem has been considered before; the best previous result is by Azar and Epstein, who gave a 4-competitive algorithm in a setting without gravity (i.e., with the possibility of letting squares “hang in the air”) based on ideas of shelf packing: squares are assigned to different horizontal levels, allowing an analysis that is reminiscent of some bin-packing arguments. We apply a geometric analysis to establish a competitive factor of 3.5 for the bottom-left heuristic and present a 34/13 ≈ 2.6154-competitive algorithm.
  • Sándor P. Fekete, Tom Kamphans, Nils Schweer, Online Square Packing with Gravity, Algorithmica (2014) 68:1019–1044
  • 21.05.2014,
    Szymon Borak
    Competitive-reachability for special classes of graphs
    The reachability r(D) of a directed graph D is the number of ordered pairs of distinct vertices (x, y) with a directed path from x to y. Two players, the maximizer and the minimizer, play the following game on a graph G: they orient the edges of G alternately until all edges of G have been oriented. The maximizer attempts to maximize, and the minimizer to minimize, the reachability of the resulting digraph. If both players play optimally, the resulting reachability is fixed; this value is called the competitive-reachability of G. We determine the competitive-reachability for outerplanar graphs and some other special classes of graphs.
    Grzegorz Gutowski, Jakub Kozik
    Lower bound for on-line graph coloring of bipartite graphs
    In this talk we propose a strategy for Presenter in the on-line graph coloring game. The strategy constructs bipartite graphs and forces any on-line coloring algorithm to use at least 2 log n − 10 colors, where n is the number of vertices in the constructed graph. This is best possible up to an additive constant.
  • 16.04.2014,
    Arkadiusz Olek
    Better Approximation Algorithms for the Maximum Internal Spanning Tree Problem, (by M.Knauer, J.Spoerhase)
    We examine the problem of determining a spanning tree of a given graph such that the number of internal nodes is maximum. The best approximation algorithm known so far for this problem is due to Prieto and Sloper and has a ratio of 2. For graphs without pendant nodes, Salamon has lowered this factor to 7/4 by means of local search. However, the approximation behavior of his algorithm on general graphs has remained open. In this paper we show that a simplified and faster version of Salamon’s algorithm yields a 5/3-approximation even on general graphs. In addition, we investigate a node-weighted variant of the problem, for which Salamon achieved a ratio of 2·Δ(G)−3. Extending Salamon’s approach, we obtain a factor of 3+ε for any ε>0. We complement our results with worst-case instances showing that our analyses are tight.
  • Martin Knauer, Joachim Spoerhase, Better Approximation Algorithms for the Maximum Internal Spanning Tree Problem, Algorithmica, DOI 10.1007/s00453-013-9827-7
  • 09.04.2014,
    Adam Polak
    A Generalization of the Convex Kakeya Problem, (by Ahn et al.)
    Given a set of line segments in the plane, not necessarily finite, what is a convex region of smallest area that contains a translate of each input segment? This question can be seen as a generalization of Kakeya’s problem of finding a convex region of smallest area such that a needle can be rotated through 360 degrees within this region. We show that there is always an optimal region that is a triangle, and we give an optimal Θ(n log n)-time algorithm to compute such a triangle for a given set of n segments. We also show that, if the goal is to minimize the perimeter of the region instead of its area, then placing the segments with their midpoint at the origin and taking their convex hull results in an optimal solution. Finally, we show that for any compact convex figure G, the smallest enclosing disk of G is a smallest-perimeter region containing a translate of every rotated copy of G.  
  • Hee-Kap Ahn, SangWon Bae, Otfried Cheong, Joachim Gudmundsson, Takeshi Tokuyama, Antoine Vigneron, A Generalization of the Convex Kakeya Problem, Algorithmica, DOI 10.1007/s00453-013-9831-y
  • 26.03.2014,
    Jakub Adamek
    A Universal Randomized Packet Scheduling Algorithm (by Łukasz Jeż)
    We give a memoryless scale-invariant randomized algorithm REMIX for Packet Scheduling that is e/(e−1)-competitive against an adaptive adversary. REMIX unifies most of the previously known randomized algorithms, and its general analysis yields improved performance guarantees for several restricted variants, including the s-bounded instances. In particular, REMIX attains the optimal competitive ratio of 4/3 on 2-bounded instances.

    Our results are applicable to a more general problem, called Item Collection, in which only the relative order between packets’ deadlines is known. REMIX is the optimal memoryless randomized algorithm against an adaptive adversary for that problem.

  • Łukasz Jeż, A Universal Randomized Packet Scheduling Algorithm, Algorithmica (2013) 67:498–515, DOI 10.1007/s00453-012-9700-0
  • 19.03.2014,
    Michał Dyrek
    Balanced Partitions of Trees and Applications (by A.E.Feldmann, L.Foschini)
    We study the problem of finding the minimum number of edges that, when cut, form a partition of the vertices into k sets of equal size. This is called the k-BALANCED PARTITIONING problem. The problem is known to be inapproximable within any finite factor on general graphs, while little is known about restricted graph classes.

    We show that the k-BALANCED PARTITIONING problem remains APX-hard even when restricted to unweighted tree instances with constant maximum degree. If instead the diameter of the tree is constant we prove that the problem is NP-hard to approximate within n^c, for any constant c<1.

    If vertex sets are allowed to deviate from being equal-sized by a factor of at most 1+ε, we show that solutions can be computed on weighted trees with cut cost no worse than the minimum attainable when requiring equal-sized sets. This result is then extended to general graphs via decompositions into trees and improves the previously best approximation ratio from O(log^{3/2}(n)/ε^2) [Andreev and Räcke in Theory Comput. Syst. 39(6):929–939, 2006] to O(log n). This also settles the open problem of whether an algorithm exists for which the number of edges cut is independent of ε.  

  • Andreas Emil Feldmann, Luca Foschini, Balanced Partitions of Trees and Applications, Algorithmica DOI 10.1007/s00453-013-9802-3
  • 05.03.2014,
    Adam Gągol
    Natural proofs (by A. Razborov, S. Rudich)
    The notion of a natural proof is introduced. We argue that the known proofs of lower bounds on the complexity of explicit Boolean functions in nonmonotone models fall within our definition of natural. We show, based on a hardness assumption, that natural proofs cannot prove superpolynomial lower bounds for general circuits. Without the hardness assumption, we are able to show that they cannot prove exponential lower bounds (for general circuits) for the discrete logarithm problem. We show that the weaker class of AC^0-natural proofs, which is sufficient to prove the parity lower bounds of Furst, Saxe, and Sipser, of Yao, and of Håstad, is inherently incapable of proving the bounds of Razborov and Smolensky. We give some formal evidence that natural proofs are indeed natural by showing that every formal complexity measure which can prove superpolynomial lower bounds for a single function can do so for almost all functions, which is one of the two requirements of a natural proof in our sense.
  • Alexander A. Razborov, Steven Rudich, Natural proofs, Journal of Computer and System Sciences, 55(1997), 24-35
  • 22.01.2014,
    Maciej Solon
    Scheduling with an Orthogonal Resource Constraint
    We address a scheduling problem that arises in highly parallelized environments like modern multi-core CPU/GPU computer architectures where simultaneously active jobs share a common limited resource, e.g., memory cache. The scheduler must ensure that the demand for the common resource never exceeds the available capacity. This introduces an orthogonal constraint to the classical minimum makespan scheduling problem. Such a constraint also arises in other contexts where a common resource is shared across machines. We study the non-preemptive case of this problem and present a (2+ε)-approximation algorithm which relies on the interplay of several classical and modern techniques in scheduling like grouping, job classification, and the use of configuration LPs. This improves upon the previous bound of 3 that can be obtained by list scheduling approaches, and gets close to the (3/2−ε)-inapproximability bound. If the number of machines or the number of different resource requirements is bounded by a constant, we obtain a polynomial time approximation scheme.  
  • Martin Niemeier, Andreas Wiese, Scheduling with an Orthogonal Resource Constraint, Algorithmica, DOI 10.1007/s00453-013-9829-5
  • 15.01.2014,
    Michał Sapalski
    Linear-Time Algorithms for Tree Root Problems
    Let T be a tree on a set V of nodes. The p-th power T^p of T is the graph on V such that any two nodes u and w of V are adjacent in T^p if and only if the distance of u and w in T is at most p. Given an n-node m-edge graph G and a positive integer p, the p-th tree root problem asks for a tree T, if any, such that G = T^p. Given an n-node m-edge graph G, the tree root problem asks for a positive integer p and a tree T, if any, such that G = T^p. Kearney and Corneil gave the best previously known algorithms for both problems. Their algorithm for the former (respectively, latter) problem runs in O(n^3) (respectively, O(n^4)) time. In this paper, we give O(n+m)-time algorithms for both problems.  
  • Maw-Shang Chang, Ming-Tat Ko, Hsueh-I Lu, Linear-Time Algorithms for Tree Root Problems, Algorithmica, DOI 10.1007/s00453-013-9815-y
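The p-th power operation that both root problems invert is easy to state directly. Below is a small brute-force illustration of the definition (a depth-bounded BFS from every node), not the paper's linear-time algorithm:

```python
from collections import deque

def tree_power(adj, p):
    """Return the edge set of T^p: u ~ w iff dist_T(u, w) <= p."""
    edges = set()
    for s in range(len(adj)):
        dist = {s: 0}            # BFS from s, stopping at depth p
        q = deque([s])
        while q:
            u = q.popleft()
            if dist[u] == p:
                continue
            for w in adj[u]:
                if w not in dist:
                    dist[w] = dist[u] + 1
                    q.append(w)
        edges |= {(min(s, w), max(s, w)) for w, d in dist.items() if d > 0}
    return edges

# The square of the path 0-1-2-3 adds the chords {0,2} and {1,3}.
path = [[1], [0, 2], [1, 3], [2]]
print(sorted(tree_power(path, 2)))  # [(0,1), (0,2), (1,2), (1,3), (2,3)]
```

Solving the root problem means recovering `path` (and p) from only that output graph.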
  • 08.01.2014,
    Andrzej Dorobisz
    Linear Recognition and Embedding of Fibonacci Cubes
    Fibonacci strings are binary strings that contain no two consecutive 1s. The Fibonacci cube Γ_h is the subgraph of the h-cube induced by the Fibonacci strings. These graphs are applicable as interconnection networks and in theoretical chemistry, and lead to the Fibonacci dimension of a graph. We derive a new characterization of Fibonacci cubes. The characterization is the basis for an algorithm which recognizes these graphs in linear time. Moreover, a graph which was recognized as a Fibonacci cube can be embedded into a hypercube using Fibonacci strings within the same time bound.  
  • Aleksander Vesel, Linear Recognition and Embedding of Fibonacci Cubes, Algorithmica, DOI 10.1007/s00453-013-9839-3
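As a quick illustration of the definition (not the paper's linear-time recognition algorithm), Γ_h can be built directly from the Fibonacci strings; the vertex counts follow the Fibonacci numbers:

```python
from itertools import product

def fibonacci_strings(h):
    """All length-h binary strings with no two consecutive 1s."""
    return [''.join(s) for s in product('01', repeat=h)
            if '11' not in ''.join(s)]

def fibonacci_cube_edges(h):
    """Edges of Gamma_h: pairs of Fibonacci strings at Hamming distance 1."""
    verts = fibonacci_strings(h)
    return [(u, v) for i, u in enumerate(verts) for v in verts[i + 1:]
            if sum(a != b for a, b in zip(u, v)) == 1]

# |V(Gamma_h)| = F_{h+2}: 2, 3, 5, 8, 13, ...
print([len(fibonacci_strings(h)) for h in range(1, 6)])  # [2, 3, 5, 8, 13]
```

This exponential-size construction is what the characterization in the paper avoids: it recognizes a given graph as a Fibonacci cube in time linear in the size of the graph itself.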
  • 18.12.2013,
    Bartłomiej Ryniec
    Preprocess, Set, Query!
    Thorup and Zwick (J. ACM 52(1):1–24, 2005 and STOC’01) in their seminal work introduced the notion of distance oracles. Given an n-vertex weighted undirected graph with m edges, they show that for any integer k ≥ 1 it is possible to preprocess the graph in Õ(m n^{1/k}) time and generate a compact data structure of size O(k n^{1+1/k}). For each pair of vertices, it is then possible to retrieve an estimated distance with multiplicative stretch 2k−1 in O(k) time. For k=2 this gives an oracle of O(n^{1.5}) size that produces in constant time estimated distances with stretch 3.
    Recently, Patrascu and Roditty (In: Proc. of 51st FOCS, 2010) broke the theoretical status-quo in the field of distance oracles and obtained a distance oracle for sparse unweighted graphs of O(n^{5/3}) size that produces in constant time estimated distances with stretch 2.
    In this paper we show that it is possible to break the stretch 2 barrier at the price of non-constant query time in unweighted undirected graphs. We present a data structure that produces estimated distances with 1+ε stretch. The size of the data structure is O(n m^{1−ε'}) and the query time is Õ(m^{1−ε'}). Using it for sparse unweighted graphs we can get a data structure of size O(n^{1.87}) that can supply in O(n^{0.87}) time estimated distances with multiplicative stretch 1.75.  
  • Ely Porat, Liam Roditty, Preprocess, Set, Query! Algorithmica (2013) 67:516–528
  • 11.12.2013,
    Agnieszka Dymel
    Online Coloring of Bipartite Graphs with and without Advice
    In the online version of the well-known graph coloring problem, the vertices appear one after the other together with the edges to the already known vertices and have to be irrevocably colored immediately after their appearance. We consider this problem on bipartite, i.e., two-colorable graphs. We prove that at least ⌊1.13746·log_2(n) − 0.49887⌋ colors are necessary for any deterministic online algorithm to be able to color any given bipartite graph on n vertices, thus improving on the previously known lower bound of ⌊log_2(n)⌋+1 for sufficiently large n.
    Recently, the advice complexity was introduced as a method for a fine-grained analysis of the hardness of online problems. We apply this method to the online coloring problem and prove (almost) tight linear upper and lower bounds on the advice complexity of coloring a bipartite graph online optimally or using 3 colors. Moreover, we prove that O(√n) advice bits are sufficient for coloring any bipartite graph on n vertices with at most ⌈log_2(n)⌉ colors.  
  • Maria Paola Bianchi, Hans-Joachim Böckenhauer, Juraj Hromkovic, Lucia Keller, Online Coloring of Bipartite Graphs with and without Advice Algorithmica, DOI 10.1007/s00453-013-9819-7
  • 04.12.2013,
    Sebastian Syta
    Online Unweighted Knapsack Problem with Removal Cost
    We study the online unweighted knapsack problem with removal cost. The input is a sequence of items u_1,u_2,...,u_n, each of which has a size and a value, where the value of each item is assumed to be equal to the size. Given the i-th item u_i, we either put u_i into the knapsack or reject it with no cost. When u_i is put into the knapsack, some items in the knapsack are removed with removal cost if the sum of the size of u_i and the total size in the current knapsack exceeds the capacity of the knapsack. Here the removal cost means a cancellation charge or disposal fee. Our goal is to maximize the profit, i.e., the sum of the values of the items in the final knapsack minus the total removal cost incurred.
    We consider two kinds of removal cost: unit and proportional cost. For both models, we provide their competitive ratios. Namely, we construct optimal online algorithms and prove that they are best possible.  
  • Xin Han, Yasushi Kawase, Kazuhisa Makino, Online Unweighted Knapsack Problem with Removal Cost, Algorithmica DOI 10.1007/s00453-013-9822-z
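To make the model concrete, here is a naive greedy heuristic for the unit removal-cost setting (an illustration of the problem only, not one of the optimal online algorithms constructed in the paper): accept an arriving item, evicting smallest items first, only when the net gain is positive.

```python
def greedy_knapsack_with_removal(items, capacity, removal_cost):
    """Naive online heuristic for the unit removal-cost model.
    Each item's value equals its size; removal_cost is charged per
    evicted item.  NOT the paper's algorithm -- a sketch of the setting."""
    knapsack, total_removal = [], 0.0
    for x in items:
        keep = sorted(knapsack)          # smallest first = eviction order
        evicted = []
        while keep and sum(keep) + x > capacity:
            evicted.append(keep.pop(0))
        if sum(keep) + x <= capacity:
            gain = x - sum(evicted) - removal_cost * len(evicted)
            if gain > 0:                 # accept only if eviction pays off
                knapsack = keep + [x]
                total_removal += removal_cost * len(evicted)
    return sum(knapsack) - total_removal

print(greedy_knapsack_with_removal([0.3, 0.4, 0.9], 1.0, 0.1))
print(greedy_knapsack_with_removal([0.5, 0.6], 1.0, 0.05))
```

In the first run the heuristic keeps the two small items, since swapping them for 0.9 would be wiped out by two removal charges; in the second, paying one charge to replace 0.5 with 0.6 is worthwhile.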
  • 27.11.2013,
    Michał Bejda
    Data Structures on Event Graphs
    We investigate the behavior of data structures when the input and operations are generated by an event graph. This model is inspired by Markov chains. We are given a fixed graph G, whose nodes are annotated with operations of the type insert, delete, and query. The algorithm responds to the requests as it encounters them during a (random or adversarial) walk in G. We study the limit behavior of such a walk and give an efficient algorithm for recognizing which structures can be generated. We also give a near-optimal algorithm for successor searching if the event graph is a cycle and the walk is adversarial. For a random walk, the algorithm becomes optimal.  
  • Bernard Chazelle, Wolfgang Mulzer, Data Structures on Event Graphs, Algorithmica DOI 10.1007/s00453-013-9838-4
  • 20.11.2013,
    Damian Krolik
    Parameterized Analysis of Paging and List Update Algorithms
    It is well-established that input sequences for paging and list update have locality of reference. In this paper we analyze the performance of algorithms for these problems in terms of the amount of locality in the input sequence. We define a measure for locality that is based on Denning’s working set model and express the performance of well known algorithms in terms of this parameter. This explicitly introduces parameterized-style analysis to online algorithms. The idea is that rather than normalizing the performance of an online algorithm by an (optimal) offline algorithm, we explicitly express the behavior of the algorithm in terms of two more natural parameters: the size of the cache and Denning’s working set measure. This technique creates a performance hierarchy of paging algorithms which better reflects their experimentally observed relative strengths. It also reflects the intuition that a larger cache leads to a better performance. We also apply the parameterized analysis framework to list update and show that certain randomized algorithms which are superior to MTF in the classical model are not so in the parameterized case, which matches experimental results.  
  • Reza Dorrigiv, Martin R. Ehmsen, Alejandro López-Ortiz, Parameterized Analysis of Paging and List Update Algorithms, Algorithmica, DOI 10.1007/s00453-013-9800-5
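The link between locality and performance is easy to observe empirically. A minimal LRU simulation (standard textbook paging, not the paper's parameterized analysis) shows a high-locality sequence incurring few faults while a cyclic scan faults on every request:

```python
def lru_faults(seq, k):
    """Simulate LRU paging with cache size k; return the number of faults."""
    cache, faults = [], 0
    for p in seq:
        if p in cache:
            cache.remove(p)              # will be re-appended as most recent
        else:
            faults += 1
            if len(cache) == k:
                cache.pop(0)             # evict least recently used (front)
        cache.append(p)
    return faults

# High locality: 2 faults.  Cyclic scan of 4 pages with k=3: 8 faults.
print(lru_faults([1, 2, 1, 2, 1, 2], 2),
      lru_faults([1, 2, 3, 4, 1, 2, 3, 4], 3))
```

Denning's working-set measure quantifies exactly this difference in the input, which is what the parameterized bounds are expressed in terms of.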
  • 13.11.2013,
    Maciej Bendkowski
    Analyses of Cardinal Auctions
    We study cardinal auctions for selling multiple copies of a good, in which bidders specify not only their bid or how much they are ready to pay for the good, but also a cardinality constraint on the number of copies that will be sold via the auction. We perform the first known Price-of-Anarchy-type analyses with a detailed comparison of the classical Vickrey-Clarke-Groves (VCG) auction and one based on the minimum pay property (MPP), which is similar to the Generalized Second Price auction commonly used in sponsored search. Without cardinality constraints, MPP has the same efficiency (total value to bidders) and at least as much revenue (total income to the auctioneer) as VCG; this also holds for certain other generalizations of MPP (e.g., prefix constrained auctions, as we show here). In contrast, our main results are that, with cardinality constraints,
    (a) equilibrium efficiency of MPP is 1/2 of that of VCG and this factor is tight,
    (b) in equilibrium MPP may collect as little as 1/2 the revenue of VCG.  
  • Mangesh Gupte, Darja Krushevskaja, S. Muthukrishnan, Analyses of Cardinal Auctions, Algorithmica DOI 10.1007/s00453-013-9832-x
  • 06.11.2013,
    Karol Różycki
    Oblivious Algorithms for the Maximum Directed Cut Problem
    We introduce a special family of randomized algorithms for Max DICUT that we call oblivious algorithms. Let the bias of a vertex be the ratio between the total weight of its outgoing edges and the total weight of all its edges. An oblivious algorithm selects at random in which side of the cut to place a vertex v, with probability that only depends on the bias of v, independently of other vertices. The reader may observe that the algorithm that ignores the bias and chooses each side with probability 1/2 has an approximation ratio of 1/4, whereas no oblivious algorithm can have an approximation ratio better than 1/2 (with an even directed cycle serving as a negative example). We attempt to characterize the best approximation ratio achievable by oblivious algorithms, and present results that are nearly tight. The paper also discusses natural extensions of the notion of oblivious algorithms, and extensions to the more general problem of Max 2-AND.  
  • Uriel Feige, Shlomo Jozeph, Oblivious Algorithms for the Maximum Directed Cut Problem, Algorithmica DOI 10.1007/s00453-013-9806-z
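Since an oblivious algorithm places each vertex independently, its expected cut value factorizes over the edges and can be computed exactly. The sketch below (an illustration of the model, using the uniform selection function) reproduces the even-directed-cycle example: the optimum cuts 2 edges, but the expected oblivious cut is 1.

```python
def bias(v, edges):
    """Ratio of the weight of v's outgoing edges to all its incident edges."""
    out_w = sum(w for (a, b, w) in edges if a == v)
    tot = sum(w for (a, b, w) in edges if v in (a, b))
    return out_w / tot if tot else 0.5

def expected_dicut(edges, select_prob):
    """Expected cut weight when each vertex joins side S independently with
    probability select_prob(bias); edge (u,v) is cut iff u in S and v not."""
    verts = {v for e in edges for v in e[:2]}
    p = {v: select_prob(bias(v, edges)) for v in verts}
    return sum(w * p[u] * (1 - p[v]) for (u, v, w) in edges)

# Directed 4-cycle with unit weights; every bias is 1/2, optimum cuts 2 edges.
cycle = [(0, 1, 1), (1, 2, 1), (2, 3, 1), (3, 0, 1)]
print(expected_dicut(cycle, lambda b: 0.5))  # 1.0, i.e. half the optimum
```

Any other selection function gives the same expected value here, since all biases coincide, which is why the even directed cycle caps oblivious algorithms at ratio 1/2.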
  • 30.10.2013,
    Igor Adamski
    Linked Dynamic Tries with Applications to LZ-Compression in Sublinear Time and Space
    The dynamic trie is a fundamental data structure with applications in many areas of computer science. This paper proposes a new technique for maintaining a dynamic trie T of size at most 2^w nodes under the unit-cost RAM model with a fixed word size w. It is based on the idea of partitioning T into a set of linked small tries, each of which can be maintained efficiently. Our method is not only space-efficient, but also allows the longest common prefix between any query pattern P and the strings currently stored in T to be computed in o(|P|) time for small alphabets, and allows any leaf to be inserted into or deleted from T in o(log|T|) time. To demonstrate the usefulness of our new data structure, we apply it to LZ-compression. Significantly, we obtain the first algorithm for generating the LZ78 encoding of a given string of length n over an alphabet of size σ in sublinear (o(n)) time and sublinear (o(n log σ) bits) working space for small alphabets
    (σ = 2^{o(log n · log log log n / (log log n)^2)}).
    Moreover, the working space for our new algorithm is asymptotically less than or equal to the space for storing the output compressed text, regardless of the alphabet size.  
  • Jesper Jansson, Kunihiko Sadakane, Wing-Kin Sung, Linked Dynamic Tries with Applications to LZ-Compression in Sublinear Time and Space, Algorithmica DOI 10.1007/s00453-013-9836-6
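For reference, the LZ78 encoding itself is simple to state. The following is the classical dictionary-based encoder, linear in time and space; the paper's contribution is achieving this in sublinear time and working space via linked small tries, which this sketch does not attempt:

```python
def lz78_encode(s):
    """Classical LZ78: emit (dictionary index, next symbol) pairs.
    Index 0 denotes the empty phrase."""
    dictionary = {'': 0}
    out, phrase = [], ''
    for ch in s:
        if phrase + ch in dictionary:
            phrase += ch                     # extend current phrase
        else:
            out.append((dictionary[phrase], ch))
            dictionary[phrase + ch] = len(dictionary)
            phrase = ''
    if phrase:                               # flush a trailing partial phrase
        out.append((dictionary[phrase[:-1]], phrase[-1]))
    return out

print(lz78_encode('abababab'))  # [(0,'a'), (0,'b'), (1,'b'), (3,'a'), (0,'b')]
```

Each pair references an earlier phrase plus one fresh symbol, so the decoder can rebuild the dictionary on the fly; the dictionary here is exactly the dynamic trie the paper maintains.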
  • 23.10.2013,
    Wojciech Łopata
    An Algorithmic Characterization of Polynomial Functions over Z_{p^n}
    We consider polynomial representability of functions defined over Z_{p^n}, where p is a prime and n is a positive integer. Our aim is to provide an algorithmic characterization that
    (i) answers the decision problem: to determine whether a given function over Z_{p^n} is polynomially representable or not,
    (ii) finds the polynomial if it is polynomially representable.
    The previous characterizations given by Kempner (Trans. Am. Math. Soc. 22(2):240–266, 1921) and Carlitz (Acta Arith. 9(1), 67–78, 1964) are existential in nature and only lead to an exhaustive search method, i.e. algorithm with complexity exponential in size of the input. Our characterization leads to an algorithm whose running time is linear in size of input. We also extend our result to the multivariate case.  
  • Ashwin Guha, Ambedkar Dukkipati, An Algorithmic Characterization of Polynomial Functions over Z_{p^n},
    Algorithmica DOI 10.1007/s00453-013-9799-7
  • 16.10.2013,
    Jerzy Marcinkowski
    University of Wrocław
    Finite Controllability and Bounded Derivation Depth
    FC (Finite controllability) and BDD (the Bounded Derivation Depth property) are two properties of Datalog/TGD programs.

    BDD is equivalent to Positive First Order rewritability -- the very useful property that allows us to use (all the optimizations of) DBMS in order to compute the certain answers to queries in the presence of a theory.

    Finite Controllability of a theory T means that if the certain answer to a query Q, for a database instance D, in the presence of T is 'no', then this 'no' is never a result of the unnatural assumption that the counterexample can be infinite.  

    We conjecture that for any theory T the property BDD implies FC. We prove this conjecture for the case of binary signatures.  

  • Tomasz Gogacz, Jerzy Marcinkowski: On the BDD/FC conjecture. Proceedings of PODS 2013 (the 32nd ACM SIGMOD-SIGACT-SIGART Symposium on Principles of Database Systems)
  • 14.10.2013,
    W. Hugh Woodin
    UC Berkeley
    The Continuum Hypothesis and the search for Mathematical Infinity
    new place and date: Oct 14, 2013, 16:00,
    Polska Akademia Umiejętności, Kraków, Sławkowska 17
    By the middle of the 20th century the problem of the Continuum Hypothesis was widely regarded as one of the prominent problems in all of Mathematics. Remarkably, this question cannot be solved on the basis of the basic principles (these are the ZFC axioms) on which the entire subject is based. This discovery of Cohen, 50 years ago this summer, is arguably one of the major discoveries in the history of modern Mathematics.

    But does this mean that the question of the Continuum Hypothesis has no answer? Any "solution" must involve the adoption of new principles but which principles should one adopt? Alternatively, perhaps the correct assessment of Cohen's discovery is that the entire enterprise of the mathematical study of Infinity is ultimately doomed because the entire subject is simply a human fiction without any true foundation. In this case, any "solution" to the Continuum Hypothesis is just an arbitrary (human) choice.

    Over the last few years a scenario has emerged by which, with the addition of a single new principle, not only can the problem of the Continuum Hypothesis be resolved, but so can all of the other problems which Cohen's method has been used to show are also unsolvable (and there have been many such problems). Moreover, the extension of the basic (ZFC) principles by this new principle would be seen as an absolutely compelling option based on the fundamental intuitions on which the entire mathematical conception of Infinity is founded.

    However, this scenario critically depends upon the outcome of a single conjecture. If this conjecture is false then the entire approach, which is the culmination of nearly 50 years of research, fails or at the very least  has to be significantly revised.

    Thus the mathematical study of Infinity has arguably reached a tipping point. But which way will it tip?  

    cancelled/moved to 14 X 2013
    Jean Cardinal
    Université Libre de Bruxelles
    On Universal Point Sets for Planar Graphs and Related Problems
    A set S of points in the plane is said to be n-universal if every planar graph on n vertices has a straight-line plane embedding on a subset of S. Finding the minimum size f(n) of an n-universal point set is a long-standing open problem, and the current upper and lower bounds differ by a linear factor. We will consider a lower bound technique that allowed us to prove that there is no n-universal point set of size n for any n>14. We will also describe recent results on families of planar graphs on n vertices that cannot be embedded on a common n-point set. This is a joint work with Michael Hoffmann and Vincent Kusters.  
    Marcin Ziemiński
    DAGGER: A Scalable Index for Reachability Queries in Large Dynamic Graphs
    With the ubiquity of large-scale graph data in a variety of application domains, querying them effectively is a challenge. In particular, reachability queries are becoming increasingly important, especially for containment, subsumption, and connectivity checks. Whereas many methods have been proposed for static graph reachability, many real-world graphs are constantly evolving, which calls for dynamic indexing. In this paper, we present a fully dynamic reachability index over dynamic graphs. Our method, called DAGGER, is a light-weight index based on interval labeling, that scales to million node graphs and beyond. Our extensive experimental evaluation on real-world and synthetic graphs confirms its effectiveness over baseline methods.  
  • Hilmi Yildirim, Vineet Chaoji, Mohammed J.Zaki, DAGGER: A Scalable Index for Reachability Queries in Large Dynamic Graphs, arXiv:1301.0977
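DAGGER builds on interval labeling. In its simplest, static form on a tree, a DFS assigns each node a (pre, post) interval, and reachability becomes interval containment; a minimal sketch of that base idea follows (DAGGER's dynamic, general-graph machinery goes far beyond this):

```python
def interval_labels(tree, root):
    """DFS interval labeling: label[v] = (pre, post).  In a tree, u reaches
    (is an ancestor of) v iff label[u] contains label[v]."""
    label, clock = {}, [0]
    def dfs(v):
        pre = clock[0]; clock[0] += 1
        for c in tree.get(v, []):
            dfs(c)
        label[v] = (pre, clock[0]); clock[0] += 1
    dfs(root)
    return label

def reaches(label, u, v):
    """Interval containment test: O(1) per query."""
    return label[u][0] <= label[v][0] and label[v][1] <= label[u][1]

t = {'a': ['b', 'c'], 'b': ['d']}
lab = interval_labels(t, 'a')
print(reaches(lab, 'a', 'd'), reaches(lab, 'b', 'c'))  # True False
```

The challenge the paper addresses is keeping such labels valid, cheaply, while edges and vertices are inserted and deleted in an arbitrary graph rather than a fixed tree.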
  • 05.06.2013,
    Patryk Zaryjewski
    In-situ associative permuting
    The technique of in-situ associative permuting is introduced, which is a combination of in-situ permuting and in-situ inverting. It is suitable for associatively permutable permutations of {1,2,...,n}, where the elements that will be inverted are negative and stored in order relative to each other according to their absolute values. Let K[1...n] be an array of n integer keys, each in the range [1,n], where it is allowed to modify the keys in the range [-n,n]. If the integer keys are rearranged such that one of each distinct key having the value i is moved to the i-th position of K, then the resulting arrangement (denoted by K^P) can be transformed in-situ into an associatively permutable permutation pi^P using only log n additional bits. The associatively permutable permutation pi^P not only stores the ranks of the keys of K^P but also uniquely represents K^P. Restoring the keys from pi^P is not considered. However, in-situ associative permuting pi^P in O(n) time using log n additional bits rearranges the elements of pi^P in order, and also allows the keys of K^P to be restored in O(n) further time using the inverses of the negative ranks. This means that an array of n integer keys, each in the range [1,n], can be sorted using only log n bits of additional space.  
  • A. Emre Cetin, In-situ associative permuting, arXiv:1301.2046
  • 29.05.2013,
    Jakub Brzeski
    Parameterized Clique on Scale-Free Networks
    Finding cliques in graphs is a classical problem which is in general NP-hard and parameterized intractable. However, in typical applications like social networks or protein-protein interaction networks, the considered graphs are scale-free, i.e., their degree sequence follows a power law. Their specific structure can be algorithmically exploited and makes it possible to solve clique much more efficiently. We prove that on inhomogeneous random graphs with n nodes and power law exponent γ, cliques of size k can be found in time O(n^2) for γ≥3 and in time O(n exp(k^4)) for 2<γ<3.  
  • Tobias Friedrich and Anton Krohmer, Parameterized Clique on Scale-Free Networks, Algorithms and Computation, LNCS 7676, 2012, pp 659-668
  • 22.05.2013,
    Przemysław Derengowski
    Proper Interval Vertex Deletion
    The NP-complete problem PROPER INTERVAL VERTEX DELETION is to decide whether an input graph on n vertices and m edges can be turned into a proper interval graph by deleting at most k vertices. Van Bevern et al. (In: Proceedings WG 2010. Lecture notes in computer science, vol. 410, pp.232–243, 2010) showed that this problem can be solved in O((14k+14)^{k+1}kn^6) time. We improve this result by presenting an O(6^kkn^6) time algorithm for PROPER INTERVAL VERTEX DELETION. Our fixed-parameter algorithm is based on a new structural result stating that every connected component of a {claw, net, tent, C_4, C_5, C_6}-free graph is a proper circular arc graph, combined with a simple greedy algorithm that solves PROPER INTERVAL VERTEX DELETION on {claw, net, tent, C_4, C_5, C_6}-free graphs in O(n+m) time. Our approach also yields a polynomial-time 6-approximation algorithm for the optimization variant of PROPER INTERVAL VERTEX DELETION.  
  • Pim van’t Hof, Yngve Villanger, Proper Interval Vertex Deletion, Algorithmica, DOI 10.1007/s00453-012-9661-3
  • 15.05.2013,
    Paweł Komosa
    Multicut viewed through the eyes of vertex cover
  • Jianer Chen, Jiahao Fan, Iyad A. Kanj, Yang Liu, Fenghui Zhang, Multicut viewed through the eyes of vertex cover
  • 08.05.2013,
    Sebastian Syta
    A Sublinear Time Algorithm for PageRank Computations
    In a network, identifying all vertices whose PageRank is more than a given threshold value Δ is a basic problem that has arisen in Web and social network analyses. In this paper, we develop a nearly optimal, sublinear time, randomized algorithm for a close variant of this problem. When given a directed network G = (V, E), a threshold value Δ, and a positive constant c > 3, with probability 1−o(1), our algorithm will return a subset S ⊆ V with the property that S contains all vertices of PageRank at least Δ and no vertex with PageRank less than Δ/c. The running time of our algorithm is always Õ(n/Δ). In addition, our algorithm can be efficiently implemented in various network access models including the Jump and Crawl query model recently studied by Brautbar and Kearns, making it suitable for dealing with large social and information networks. As part of our analysis, we show that any algorithm for solving this problem must have expected time complexity of Ω(n/Δ). Thus, our algorithm is optimal up to logarithmic factors. Our algorithm (for identifying vertices with significant PageRank) applies a multi-scale sampling scheme that uses a fast personalized PageRank estimator as its main subroutine. For that, we develop a new local randomized algorithm for approximating personalized PageRank which is more robust than the earlier ones developed by Jeh and Widom and by Andersen, Chung, and Lang.  
  • Christian Borgs, Michael Brautbar, Jennifer Chayes, and Shang-Hua Teng, A Sublinear Time Algorithm for PageRank Computations, arXiv:1202.2771
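For contrast with the sublinear algorithm, the straightforward baseline for the threshold problem is full power iteration followed by filtering, which reads the whole graph. A minimal sketch (hypothetical example graph; standard damping factor 0.85):

```python
def pagerank(adj, damping=0.85, iters=100):
    """Standard power iteration over an adjacency list (the linear-time
    baseline, not the paper's sublinear sampling algorithm)."""
    n = len(adj)
    pr = [1.0 / n] * n
    for _ in range(iters):
        nxt = [(1 - damping) / n] * n
        for u, outs in enumerate(adj):
            if outs:
                share = damping * pr[u] / len(outs)
                for v in outs:
                    nxt[v] += share
            else:                        # dangling node: spread uniformly
                for v in range(n):
                    nxt[v] += damping * pr[u] / n
        pr = nxt
    return pr

def above_threshold(adj, delta):
    """Vertices with PageRank at least delta (exact, Theta(m) per iteration)."""
    return {v for v, r in enumerate(pagerank(adj)) if r >= delta}

# Star pointing into vertex 0: vertex 0 collects most of the mass.
star = [[], [0], [0], [0]]
print(above_threshold(star, 0.5))  # {0}
```

The paper's point is that for large Δ one can avoid touching most of the graph entirely, answering such threshold queries in Õ(n/Δ) time.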
  • 24.04.2013,
    Agnieszka Dymel
    A Simple 3-Edge-Connected Component Algorithm
    A simple linear-time algorithm for finding all the 3-edge-connected components of an undirected graph is presented. The algorithm performs only one depth-first search over the given graph. Previously best known algorithms perform multiple depth-first searches in multiple phases.  
  • Yung H.Tsin, A Simple 3-Edge-Connected Component Algorithm, Theory Comput. Systems 40(2007), 125-142
  • 17.04.2013,
    Tomasz Kołodziejski
    8/5 Approximation for TSP Paths
    We prove the approximation ratio 8/5 for the metric s-t-Path-TSP problem, and more generally for shortest connected T-joins. The algorithm that achieves this ratio is the simple "Best of Many" version of Christofides' algorithm (1976), suggested by An, Kleinberg and Shmoys (2012), which consists in determining the best Christofides s-t-tour out of those constructed from a family F_{>0} of trees having a convex combination dominated by an optimal solution x^* of the fractional relaxation. They give the approximation guarantee (√5+1)/2 for such an s-t-tour, which is the first improvement after the 5/3 guarantee of Hoogeveen's Christofides-type algorithm (1991). Cheriyan, Friggstad and Gao (2012) extended this result to a 13/8-approximation of shortest connected T-joins, for |T|≥4.

    The ratio 8/5 is proved by simplifying and improving the approach of An, Kleinberg and Shmoys that consists in completing x^*/2 in order to dominate the cost of "parity correction" for spanning trees. We partition the edge-set of each spanning tree in F_{>0} into an s-t-path (or more generally, into a T-join) and its complement, which induces a decomposition of x^*. This decomposition can be refined and then efficiently used to complete x^*/2 without using linear programming or particular properties of T, but by adding to each cut deficient for x^*/2 an individually tailored explicitly given vector, inherent in the problem.

    A simple example shows that the Best of Many Christofides algorithm may not find a shorter s-t-tour than 3/2 times the incidentally common optima of the problem and of its fractional relaxation.  

  • András Sebö, Eight-Fifth Approximation for TSP Paths, Integer Programming and Combinatorial Optimization, LNCS7801, 2013, pp 362-374
  • 10.04.2013,
    Szymon Borak
    On dominating sets of maximal outerplanar graphs
    A dominating set of a graph is a set S of vertices such that every vertex in the graph is either in S or is adjacent to a vertex in S. The domination number of a graph G, denoted gamma(G), is the minimum cardinality of a dominating set of G. We show that if G is an n-vertex maximal outerplanar graph, then gamma(G)≤(n+t)/4, where t is the number of vertices of degree 2 in G. We show that this bound is tight for all t≥2. Upper bounds for gamma(G) are known for a few classes of graphs.  
  • C.N.Campos and Y.Wakabayashi, On dominating sets of maximal outerplanar graphs, Discrete Appl.Math. 161(2013), 330-335
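The bound is easy to check on small instances by brute force. The sketch below computes gamma(G) for a 6-vertex fan (a maximal outerplanar graph, here with t = 2 vertices of degree 2) and verifies gamma(G) ≤ (n+t)/4:

```python
from itertools import combinations

def domination_number(n, edges):
    """Smallest |S| such that every vertex is in S or adjacent to S
    (exponential brute force; fine for tiny graphs)."""
    nbr = {v: {v} for v in range(n)}     # closed neighborhoods
    for u, w in edges:
        nbr[u].add(w)
        nbr[w].add(u)
    for k in range(1, n + 1):
        for S in combinations(range(n), k):
            if set().union(*(nbr[v] for v in S)) == set(range(n)):
                return k

# Fan: cycle 0..5 triangulated from vertex 0; vertices 1 and 5 have degree 2.
fan = [(i, i + 1) for i in range(5)] + [(5, 0), (0, 2), (0, 3), (0, 4)]
g = domination_number(6, fan)
print(g, g <= (6 + 2) / 4)  # 1 True
```

Here the apex of the fan dominates everything, so gamma(G) = 1, comfortably below the bound (6+2)/4 = 2.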
  • 03.04.2013,
    Aneta Pawłowska
    A Randomized O(log^2 k)-Competitive Algorithm for Metric Bipartite Matching
    We consider the online metric matching problem in which we are given a metric space, k of whose points are designated as servers. Over time, up to k requests arrive at an arbitrary subset of points in the metric space, and each request must be matched to a server immediately upon arrival, subject to the constraint that at most one request is matched to any particular server. Matching decisions are irrevocable and the goal is to minimize the sum of distances between the requests and their matched servers.

    We give an O(log^2 k)-competitive randomized algorithm for the online metric matching problem. This improves upon the best known guarantee of O(log^3 k) on the competitive factor due to Meyerson, Nanavati and Poplawski (SODA’06). It is known that for this problem no deterministic algorithm can have a competitive ratio better than 2k−1, and that no randomized algorithm can have a competitive ratio better than ln k.  

  • Nikhil Bansal, Niv Buchbinder, Anupam Gupta, Joseph (Seffi) Naor, A Randomized O(log^2 k)-Competitive Algorithm for Metric Bipartite Matching, Algorithmica, DOI 10.1007/s00453-012-9676-9
  • 27.03.2013,
    Michał Sapalski
    A Lower Bound of 1+ϕ for Truthful Scheduling Mechanisms
    We study the mechanism design version of the unrelated machines scheduling problem, which is at the core of Algorithmic Game Theory and was first proposed and studied in a seminal paper of Nisan and Ronen. We give an improved lower bound of 1+ϕ≈2.618 on the approximation ratio of deterministic truthful mechanisms for the makespan. The proof is based on a recursive preprocessing argument which yields a strictly increasing series of new lower bounds for each fixed number of machines n≥4.  
  • Elias Koutsoupias, Angelina Vidali, A Lower Bound of 1+ϕ for Truthful Scheduling Mechanisms, Algorithmica, DOI 10.1007/s00453-012-9634-6
  • 13.03.2013,
    Grzegorz Gutowski,
    Jakub Kozik
    Tic-tac-toe and beyond....
    Michał Feret
    Relative Convex Hulls in Semi-Dynamic Arrangements
    We present a data structure for maintaining the geodesic hull of a set of points (sites) in the presence of pairwise noncrossing line segments (barriers) that subdivide a bounding box into simply connected faces. For m barriers and n sites, our data structure has O((m+n)log(n)) size. It supports a mixed sequence of O(m) barrier insertions and O(n) site deletions in O((m+n)polylog(mn)) total time, and answers analogues of standard convex hull queries in O(polylog(mn)) time.
    Our data structure supports a generalization of the sweep line technique, in which the sweep wavefront is a simple closed polygonal curve, and it sweeps a set of n points in the plane by simple moves. We reduce the total time of supporting m online moves of a polygonal wavefront sweep algorithm from the naïve O(m√n polylog(n)) to O((m+n)polylog(mn)).  
  • Mashhood Ishaque, Csaba D. Tóth, Relative Convex Hulls in Semi-Dynamic Arrangements, Algorithmica DOI 10.1007/s00453-012-9679-6
  • 16.01.2013,
    Jacek Szmigiel
    An Optimal Lower Bound for Buffer Management in Multi-Queue Switches
    In the online packet buffering problem (also known as the unweighted FIFO variant of buffer management), we focus on a single network packet switching device with several input ports and one output port. This device forwards unit-size, unit-value packets from input ports to the output port. Buffers attached to input ports may accumulate incoming packets for later transmission; if they cannot accommodate all incoming packets, their excess is lost. A packet buffering algorithm has to choose from which buffers to transmit packets in order to minimize the number of lost packets and thus maximize the throughput.
    We present a tight lower bound of e/(e−1) ≈ 1.582 on the competitive ratio of the throughput maximization, which holds even for fractional or randomized algorithms. This improves the previously best known lower bound of 1.4659 and matches the performance of the algorithm RANDOM SCHEDULE. Our result contradicts the claimed performance of the algorithm RANDOM PERMUTATION; we point out a flaw in its original analysis.  
  • Marcin Bieńkowski, An Optimal Lower Bound for Buffer Management in Multi-Queue Switches, Algorithmica DOI 10.1007/s00453-012-9677-8
  • 09.01.2013,
    Jakub Adamek
    A Distributed O(1)-Approximation Algorithm for the Uniform Facility Location Problem
    We investigate a metric facility location problem in a distributed setting. In this problem, we assume that each point is a client as well as a potential location for a facility and that the opening costs for the facilities and the demands of the clients are uniform. The goal is to open a subset of the input points as facilities such that the accumulated cost for the whole point set, consisting of the opening costs for the facilities and the connection costs for the clients, is minimized.
    We present a randomized distributed algorithm that computes in expectation an O(1)-approximate solution to the metric facility location problem described above. Our algorithm works in a synchronous message passing model, where each point is an autonomous computational entity that has its own local memory and that communicates with the other entities by message passing. We assume that each entity knows the distance to all the other entities, but does not know any of the other pairwise distances. Our algorithm uses three rounds of all-to-all communication with message sizes bounded by O(log(n)) bits, where n is the number of input points.
    We extend our distributed algorithm to constant powers of metric spaces. For a metric exponent l≥1, we obtain a randomized O(1)-approximation algorithm that uses three rounds of all-to-all communication with message sizes bounded by O(log(n)) bits.  
  • Joachim Gehweiler, Christiane Lammersen, Christian Sohler, A Distributed O(1)-Approximation Algorithm for the Uniform Facility Location Problem, Algorithmica DOI 10.1007/s00453-012-9690-y
  • 19.12.2012,
    Radoslaw Smyrek
    Recognizing d-Interval Graphs and d-Track Interval Graphs
    A d-interval is the union of d disjoint intervals on the real line. A d-track interval is the union of d disjoint intervals on d disjoint parallel lines called tracks, one interval on each track. As generalizations of the ubiquitous interval graphs, d-interval graphs and d-track interval graphs have wide applications, traditionally to scheduling and resource allocation, and more recently to bioinformatics. In this paper, we prove that recognizing d-track interval graphs is NP-complete for any constant d≥2. This confirms a conjecture of Gyárfás and West from 1995. Previously only the complexity of the case d=2 was known. Our proof in fact implies that several restricted variants of this graph recognition problem, i.e., recognizing balanced d-track interval graphs, unit d-track interval graphs, and (2,..., 2) d-track interval graphs, are all NP-complete. This partially answers another question recently raised by Gambette and Vialette. We also prove that recognizing depth-two 2-track interval graphs is NP-complete, even for the unit case. In sharp contrast, we present a simple linear-time algorithm for recognizing depth-two unit d-interval graphs. These and other results of ours give partial answers to a question of West and Shmoys from 1984 and a similar question of Gyárfás and West from 1995. Finally, we give the first bounds on the track number and the unit track number of a graph in terms of the number of vertices, the number of edges, and the maximum degree, and link the two numbers to the classical concept of arboricity.  
  • Minghui Jiang: Recognizing d-Interval Graphs and d-Track Interval Graphs, Algorithmica DOI 10.1007/s00453-012-9651-5
  • 12.12.2012,
    Jarosław Bielenin
    Graph Balancing: A Special Case of Scheduling Unrelated Parallel Machines
    We design a 1.75-approximation algorithm for a special case of scheduling parallel machines to minimize the makespan, namely the case where each job can be assigned to at most two machines, with the same processing time on either machine. (This is a special case of so-called restricted assignment, where the set of eligible machines can be arbitrary for each job.) This is the first improvement of the approximation ratio 2 of Lenstra, Shmoys, and Tardos (Math. Program. 46:259–271, 1990) to a smaller constant for any special case with an unbounded number of machines and unbounded processing times. We conclude by showing integrality gaps of several relaxations of related problems.  
  • Tomáš Ebenlendr, Marek Krčál, Jiří Sgall: Graph Balancing: A Special Case of Scheduling Unrelated Parallel Machines, Algorithmica DOI 10.1007/s00453-012-9668-9
  • 05.12.2012,
    Michał Masłowski
    On the Exact Complexity of Evaluating Quantified k-CNF
    We relate the exponential complexities 2^{s(k)n} of k-SAT and the exponential complexity 2^{s(EVAL(Π_2 3-CNF))n} of EVAL(Π_2 3-CNF) (the problem of evaluating quantified formulas of the form ∀x∃yF(x,y) where F is a 3-CNF in x-variables and y-variables) and show that s(∞) (the limit of s(k) as k→∞) is at most s(EVAL(Π_2 3-CNF)). Therefore, if we assume the Strong Exponential-Time Hypothesis, then there is no algorithm for EVAL(Π_2 3-CNF) running in time 2^{cn} with c<1.
    On the other hand, a nontrivial exponential-time algorithm for EVAL(Π_2 3-CNF) would provide a k-SAT solver with better exponent than all current algorithms for sufficiently large k. We also show several syntactic restrictions of the evaluation problem EVAL(Π_2 3-CNF) have nontrivial algorithms, and provide strong evidence that the hardest cases of EVAL(Π_2 3-CNF) must have a mixture of clauses of two types: one universally quantified literal and two existentially quantified literals, or only existentially quantified literals. Moreover, the hardest cases must have at least n−o(n) universally quantified variables, and hence only o(n) existentially quantified variables. Our proofs involve the construction of efficient minimally unsatisfiable k-CNFs and the application of the Sparsification lemma.  
  • Chris Calabro, Russell Impagliazzo, Ramamohan Paturi, On the Exact Complexity of Evaluating Quantified k-CNF, Algorithmica DOI 10.1007/s00453-012-9648-0
  • 28.11.2012,
    Agnieszka Łupińska
    Speed Scaling on Parallel Processors
    In this paper we investigate dynamic speed scaling, a technique to reduce energy consumption in variable-speed microprocessors. While prior research has focused mostly on single processor environments, in this paper we investigate multiprocessor settings. We study the basic problem of scheduling a set of jobs, each specified by a release date, a deadline and a processing volume, on variable-speed processors so as to minimize the total energy consumption.

    We first settle the problem complexity if unit size jobs have to be scheduled. More specifically, we devise a polynomial time algorithm for jobs with agreeable deadlines and prove NP-hardness results if jobs have arbitrary deadlines. For the latter setting we also develop a polynomial time algorithm achieving a constant factor approximation guarantee. Additionally, we study problem settings where jobs have arbitrary processing requirements and, again, develop constant factor approximation algorithms. We finally transform our offline algorithms into constant competitive online strategies.  

  • Susanne Albers, Fabian Müller, Swen Schmelzer, Speed Scaling on Parallel Processors, Algorithmica DOI 10.1007/s00453-012-9678-7
  • 21.11.2012,
    Gabriel Fortin
    3-Colouring AT-Free Graphs in Polynomial Time
    Determining the complexity of the colouring problem on AT-free graphs is one of the long-standing open problems in algorithmic graph theory. One of the reasons behind this is that AT-free graphs are not necessarily perfect, unlike many popular subclasses of AT-free graphs such as interval graphs or co-comparability graphs. In this paper, we resolve the smallest open case of this problem, and present the first polynomial time algorithm for the 3-colouring problem on AT-free graphs.  
  • Juraj Stacho, 3-Colouring AT-Free Graphs in Polynomial Time , Algorithmica (2012) 64:384–399
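    The polynomial-time algorithm for AT-free graphs is the talk's subject and is not reproduced here. As a baseline for comparison on small instances, 3-colourability of an arbitrary graph can be tested by brute force (the function name and encoding are ours, exponential in the number of vertices):

```python
from itertools import product

def is_3_colourable(n, edges):
    """Brute-force test: does some assignment of 3 colours to vertices
    0..n-1 make every edge bichromatic?  Exponential in n."""
    return any(all(c[u] != c[v] for u, v in edges)
               for c in product(range(3), repeat=n))
```

    For example, the 5-cycle is 3-colourable while K4 is not.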
  • 14.11.2012,
    Łukasz Janiszewski
    The Complexity of the Empire Colouring Problem
    We investigate the computational complexity of the empire colouring problem (as defined by Percy Heawood in Q. J. Pure Appl. Math. 24:332–338, 1890) for maps containing empires formed by exactly r>1 countries each. We prove that the problem can be solved in polynomial time using s colours on maps whose underlying adjacency graph has no induced subgraph of average degree larger than s/r. However, for s≥3 the problem is NP-hard even if the graph is a forest of paths of arbitrary lengths (for any r≥2, provided s < 2r − √(2r+1/4) + 3/2). Furthermore, we obtain a complete characterization of the problem's complexity for the case when the input graph is a tree, whereas our result for arbitrary planar graphs falls just short of a similar dichotomy. Specifically, we prove that the empire colouring problem is NP-hard for trees, for any r≥2, if 3≤s≤2r−1 (and polynomial-time solvable otherwise). For arbitrary planar graphs we prove NP-hardness if s<7 for r=2, and s<6r−3 for r≥3. The result for planar graphs also proves the NP-hardness of colouring graphs of thickness two with fewer than 7 colours and graphs of thickness r≥3 with fewer than 6r−3 colours.  
  • Andrew R.A. McGrae, Michele Zito, The Complexity of the Empire Colouring Problem, Algorithmica DOI 10.1007/s00453-012-9680-0
  • 07.11.2012,
    Maciej Bendkowski
    Sex-Equal Stable Matchings: Complexity and Exact Algorithms
    We explore the complexity and exact computation of a variant of the classical stable marriage problem in which we seek matchings that are not only stable, but are also “fair” in a formal sense. In particular, we study the sex-equal stable marriage problem (SESM), in which, roughly speaking, we wish to find a stable matching with the property that the men’s happiness is as close as possible to the women’s happiness. This problem is known to be strongly NP-hard.  
  • Eric McDermid, Robert W. Irving, Sex-Equal Stable Matchings: Complexity and Exact Algorithms, Algorithmica, DOI 10.1007/s00453-012-9672-0
  • 31.10.2012,
    Tomasz Jurkiewicz
    Max Planck Institute for Informatics, Saarbrücken
    The Cost of Address Translation
    Modern computers are not random access machines (RAMs). They have a memory hierarchy, multiple cores, and virtual memory. We address the problem of the computational cost of address translation in virtual memory. Starting point for our work is the observation that the analysis of some simple algorithms (random scan of an array, binary search, heapsort) in either the RAM model or the EM model (external memory model) does not correctly predict growth rates of actual running times. We propose the VAT model (virtual address translation) to account for the cost of address translations and analyze the algorithms mentioned above and others in the model. The predictions agree with the measurements. We also analyze the VAT-cost of cache-oblivious algorithms.  
  • Tomasz Jurkiewicz and Kurt Mehlhorn, The Cost of Address Translation, ALENEX, January 2013.
  • 24.10.2012,
    Adam Polak
    Algorithms for Placing Monitors in a Flow Network
  • Francis Chin, Marek Chrobak, Li Yan, Algorithms for Placing Monitors in a Flow Network
  • 17.10.2012,
    Lech Duraj
    A linear algorithm for 3-letter LCWIS problem
    The problem of finding a longest weakly increasing common subsequence (LCWIS) of two sequences is a variant of the popular longest common subsequence (LCS) problem. While there are no known methods to find an LCS in truly sub-quadratic time, there are faster algorithms computing LCWIS if the alphabet size is small enough. We present a linear-time algorithm finding LCWIS over a 3-letter alphabet. Up to now, the fastest known algorithm ran in O(min{m + n log n, m log log m}) time.  
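    The linear-time algorithm of the talk is specific to 3-letter alphabets and is not sketched here. For intuition about the problem itself, here is a standard O(nm) dynamic program for LCWIS over an arbitrary ordered alphabet (a baseline only; the code and naming are ours, not the talk's method):

```python
def lcwis(a, b):
    """Length of a longest weakly increasing common subsequence of
    strings a and b, via an O(len(a)*len(b)) dynamic program."""
    m = len(b)
    # f[j]: best LCWIS of the prefix of a processed so far and b,
    # among subsequences whose last match is at position j of b
    f = [0] * m
    for ch in a:
        best = 0  # max f[k] over k < j with b[k] <= ch, values from before this row
        for j in range(m):
            old = f[j]
            if ch == b[j]:
                f[j] = max(f[j], best + 1)
            if b[j] <= ch:
                best = max(best, old)
    return max(f, default=0)
```

    For example, lcwis("aabca", "abcab") is 3, witnessed by the weakly increasing subsequence "aab".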
    Ariel Gabizon
    Invertible Zero-Error Dispersers and Defective Memory with Stuck-At Errors
    Kuznetsov and Tsybakov considered the problem of storing information in a memory where a certain p-fraction of the n cells are `stuck' at certain values. The person writing in the memory - the `encoder' - knows which cells are stuck, and to what values. The person who will read the memory later - the `decoder' - is required to retrieve the encoded message *without* knowing which cells are stuck.

    Kuznetsov and Tsybakov showed there are schemes where a message of length (1-p-o(1))*n can be encoded. We give the first such explicit schemes.

    Our schemes follow from a construction of an object called an `invertible zero-error disperser'.

    Joint work with Ronen Shaltiel.


    Szymon Borak
    Monadic Second Order Logic on Graphs with Local Cardinality Constraints
    We show that all problems of the following form can be solved in polynomial time for graphs of bounded treewidth: Given a graph G and for each vertex v of G a set α(v) of non-negative integers. Is there a set S of vertices or edges of G such that S satisfies a fixed property expressible in monadic second order logic, and for each vertex v of G the number of vertices/edges in S adjacent/incident with v belongs to the set α(v)? A wide range of problems can be formulated in this way, for example Lovász’s General Factor Problem.  
  • Stefan Szeider, Monadic Second Order Logic on Graphs with Local Cardinality Constraints, LNCS 5162, pp. 601–612, 2008.
  • 13.06.2012,
    Marek Markiewicz
    Sharp Separation and Applications to Exact and Parameterized Algorithms
    Many divide-and-conquer algorithms employ the fact that the vertex set of a graph of bounded treewidth can be separated into two roughly balanced subsets by removing a small subset of vertices, referred to as a separator. In this paper we prove a trade-off between the size of the separator and the sharpness with which we can fix the size of the two sides of the partition. Our result appears to be a handy and powerful tool for the design of exact and parameterized algorithms for NP-hard problems. We illustrate that by presenting two applications.  
  • Fedor V. Fomin, Fabrizio Grandoni, Daniel Lokshtanov and Saket Saurabh, Sharp Separation and Applications to Exact and Parameterized Algorithms, Algorithmica, DOI 10.1007/s00453-011-9555-9
  • 06.06.2012,
    Andrzej Pezarski
    On-line clique covering of unit interval graphs
    We consider an on-line version of the minimal clique covering problem. We focus on the class of unit interval graphs and their different representations.

    It is known that every greedy algorithm solving this problem uses, in the worst case, at least twice as many cliques as the optimal off-line solution. We introduce a non-greedy approach, which leads us to the construction of new, better algorithms.

    We start with connected graphs presented in a connected way together with their proper interval representations. For this case we show an algorithm using at worst 8/5 times more cliques than needed. Later, we generalize this solution to the case of non-connected graphs, obtaining an algorithm using at worst 13/8 times more cliques than necessary. We also generalize both algorithms to work without an interval representation. Finally, we move to the unit interval representation and present an algorithm using at most 8/5 times more cliques than needed.

    The performance of these algorithms is the best possible.


    Maciej Wawro
    Parameterized Complexity Results for General Factors in Bipartite Graphs with an Application to Constraint Programming
    The NP-hard general factor problem asks, given a graph and for each vertex a list of integers, whether the graph has a spanning subgraph where each vertex has a degree that belongs to its assigned list. The problem remains NP-hard even if the given graph is bipartite with partition U+V, and each vertex in U is assigned the list {1}; this subproblem appears in the context of constraint programming as the consistency problem for the extended global cardinality constraint.

    We show that this subproblem is fixed-parameter tractable when parameterized by the size of the second partite set V. More generally, we show that the general factor problem for bipartite graphs, parameterized by |V|, is fixed-parameter tractable as long as all vertices in U are assigned lists of length 1, but becomes W[1]-hard if vertices in U are assigned lists of length at most 2. We establish fixed-parameter tractability by reducing the problem instance to a bounded number of acyclic instances, each of which can be solved in polynomial time by dynamic programming.  

  • Gregory Gutin, Eun Jung Kim, Arezou Soleimanfallah, Stefan Szeider and Anders Yeo, Parameterized Complexity Results for General Factors in Bipartite Graphs with an Application to Constraint Programming, Algorithmica, DOI 10.1007/s00453-011-9548-8
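    The FPT algorithm of the paper is not reproduced here. To make the problem statement concrete, the general factor property itself can be checked by brute force on tiny instances (exponential enumeration over edge subsets; function name and encoding are ours):

```python
from itertools import combinations

def has_general_factor(n, edges, lists):
    """Does the graph on vertices 0..n-1 have a spanning subgraph in
    which each vertex v has degree belonging to lists[v]?
    Brute force over all edge subsets."""
    for k in range(len(edges) + 1):
        for sub in combinations(edges, k):
            deg = [0] * n
            for u, v in sub:
                deg[u] += 1
                deg[v] += 1
            if all(deg[v] in lists[v] for v in range(n)):
                return True
    return False
```

    With every list equal to {1} this asks for a perfect matching: the 4-cycle has one, the triangle does not.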
  • 16.05.2012,
    Kasper Kopeć
    Minimum Weight Cycles and Triangles: Equivalences and Algorithms
    We consider the fundamental algorithmic problem of finding a cycle of minimum weight in a weighted graph. In particular, we show that the minimum weight cycle problem in an undirected n-node graph with edge weights in {1,...,M} or in a directed n-node graph with edge weights in {−M,...,M} and no negative cycles can be efficiently reduced to finding a minimum weight triangle in a Θ(n)-node undirected graph with weights in {1,...,O(M)}. Roughly speaking, our reductions imply the following surprising phenomenon: a minimum cycle with an arbitrary number of weighted edges can be "encoded" using only three edges within roughly the same weight interval! This resolves a longstanding open problem posed by Itai and Rodeh [SIAM J. Computing 1978 and STOC'77].
    A direct consequence of our efficient reductions is an O(Mn^ω)-time algorithm using fast matrix multiplication (FMM) for finding a minimum weight cycle in both undirected graphs with integral weights from the interval [1,M] and directed graphs with integral weights from the interval [−M,M]. The latter seems to reveal a strong separation between the all pairs shortest paths (APSP) problem and the minimum weight cycle problem in directed graphs, as the fastest known APSP algorithm has a running time of O(M^{0.681}n^{2.575}) by Zwick [J. ACM 2002].
    In contrast, when only combinatorial algorithms are allowed (that is, without FMM), the only known solution to minimum weight cycle is by computing APSP. Interestingly, any separation between the two problems in this case would be an amazing breakthrough, as by a recent paper of Vassilevska W. and Williams [FOCS'10], any O(n^{3−ε})-time algorithm (ε>0) for minimum weight cycle immediately implies an O(n^{3−δ})-time algorithm (δ>0) for APSP.
  • Liam Roditty and Virginia Vassilevska Williams, Minimum Weight Cycles and Triangles: Equivalences and Algorithms,
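    The paper's reductions are not sketched here. For small undirected graphs with positive weights, the minimum weight cycle can be computed directly with a textbook observation: the cheapest cycle through an edge (u,v) is its weight plus the shortest u-v path avoiding that edge (code and naming are ours, O(m²log n) with the simple Dijkstra below):

```python
import heapq

def min_weight_cycle(n, edges):
    """Minimum total weight of a cycle in an undirected graph with
    positive weights; edges is a list of (u, v, w) on vertices 0..n-1.
    For each edge, run Dijkstra from u avoiding that edge, then close
    the cycle through it.  Returns inf if the graph is acyclic."""
    best = float('inf')
    for i, (u, v, w) in enumerate(edges):
        dist = [float('inf')] * n
        dist[u] = 0
        pq = [(0, u)]
        while pq:
            d, x = heapq.heappop(pq)
            if d > dist[x]:
                continue
            for j, (a, b, wj) in enumerate(edges):
                if j == i:
                    continue  # the closing edge may not be reused
                for s, t in ((a, b), (b, a)):
                    if s == x and d + wj < dist[t]:
                        dist[t] = d + wj
                        heapq.heappush(pq, (d + wj, t))
        best = min(best, w + dist[v])
    return best
```

    On a 4-cycle of unit weights with a unit chord, the answer is the triangle of weight 3.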
  • 09.05.2012,
    Maciej Gawron
    An exact algorithm for the Maximum Leaf Spanning Tree problem
    Given an undirected graph with n vertices, the Maximum Leaf Spanning Tree problem is to find a spanning tree with as many leaves as possible. When parameterized by the number of leaves k, this problem can be solved in time O(4^k poly(n)) using a simple branching algorithm introduced by a subset of the authors (Kneis et al. 2008). Daligault et al. (2010) improved the branching and obtained a running time of O(3.72^k poly(n)). In this paper, we study the problem from an exponential time viewpoint, where it is equivalent to the Connected Dominating Set problem. Here, Fomin, Grandoni, and Kratsch showed how to break the Ω(2^n) barrier and proposed an O(1.9407^n)-time algorithm (Fomin et al. 2008). Based on some useful properties of Kneis et al. (2008) and Daligault et al. (2010), we present a branching algorithm whose running time of O(1.8966^n) has been analyzed using the Measure-and-Conquer technique. Finally, we provide a lower bound of Ω(1.4422^n) for the worst case running time of our algorithm.  
  • Henning Fernau, Joachim Kneis, Dieter Kratsch, Alexander Langer, Mathieu Liedloff, Daniel Raible, Peter Rossmanith, An exact algorithm for the Maximum Leaf Spanning Tree problem, Theoretical Computer Science 412(2011) 6290–6302
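    The branching algorithm of the paper is not reproduced here. On tiny graphs the optimum can be found by exhaustively enumerating spanning trees, which makes the objective concrete (code and naming are ours; exponential in the number of edges):

```python
from itertools import combinations

def max_leaf_spanning_tree(n, edges):
    """Maximum number of leaves over all spanning trees of the graph on
    vertices 0..n-1.  Tries every (n-1)-edge subset; an acyclic subset
    of n-1 edges is automatically a spanning tree."""
    best = 0
    for t in combinations(edges, n - 1):
        parent = list(range(n))

        def find(x):
            while parent[x] != x:
                parent[x] = parent[parent[x]]  # path halving
                x = parent[x]
            return x

        acyclic = True
        for u, v in t:
            ru, rv = find(u), find(v)
            if ru == rv:
                acyclic = False
                break
            parent[ru] = rv
        if acyclic:
            deg = [0] * n
            for u, v in t:
                deg[u] += 1
                deg[v] += 1
            best = max(best, sum(1 for d in deg if d == 1))
    return best
```

    On K4 the optimum is the star with 3 leaves; on the 4-cycle every spanning tree is a path with 2 leaves.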
  • 25.04.2012,
    Gwenaël Joret
    Sorting under Partial Information (without the Ellipsoid Algorithm)
    We revisit the well-known problem of sorting under partial information: sort a finite set given the outcomes of comparisons between some pairs of elements. The input is a partially ordered set P, and solving the problem amounts to discovering an unknown linear extension of P, using pairwise comparisons. The information-theoretic lower bound on the number of comparisons needed in the worst case is log e(P), the binary logarithm of the number of linear extensions of P. In a breakthrough paper, Jeff Kahn and Jeong Han Kim (STOC 1992) showed that there exists a polynomial-time algorithm for the problem achieving this bound up to a constant factor. Their algorithm invokes the ellipsoid algorithm at each iteration for determining the next comparison, making it impractical. In this talk, we describe a simple and efficient algorithm for sorting under partial information. Like Kahn and Kim, our approach relies on graph entropy. However, our algorithm differs in essential ways from theirs: Rather than resorting to convex programming for computing the entropy, we approximate the entropy, or make sure it is computed in a restricted class of graphs, permitting the use of a simpler algorithm. Furthermore, we compute or approximate the entropy at most twice, instead of doing it at each iteration.

    Joint work with J. Cardinal, S. Fiorini, R. M. Jungers, and J. I. Munro.  
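    The talk's graph-entropy machinery is beyond a short sketch, but the information-theoretic lower bound log₂ e(P) is easy to illustrate: e(P), the number of linear extensions, can be counted by brute force on small posets (code and naming are ours):

```python
from itertools import permutations

def count_linear_extensions(n, relations):
    """Number of linear extensions of the poset on elements 0..n-1 given
    by a list of order relations (a, b) meaning a < b.
    Brute force over all n! orderings."""
    count = 0
    for perm in permutations(range(n)):
        pos = {v: i for i, v in enumerate(perm)}
        if all(pos[a] < pos[b] for a, b in relations):
            count += 1
    return count
```

    For three pairwise incomparable elements e(P) = 6, so any sorting algorithm needs at least ⌈log₂ 6⌉ = 3 comparisons in the worst case.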
    Bartosz Szabucki
    Fast Minor Testing in Planar Graphs
    Minor Containment is a fundamental problem in Algorithmic Graph Theory used as a subroutine in numerous graph algorithms. A model of a graph H in a graph G is a set of disjoint connected subgraphs of G indexed by the vertices of H, such that if {u,v} is an edge of H, then there is an edge of G between components C_u and C_v. A graph H is a minor of G if G contains a model of H as a subgraph. We give an algorithm that, given a planar n-vertex graph G and an h-vertex graph H, either finds in time O(2^{O(h)}·n + n^2·log n) a model of H in G, or correctly concludes that G does not contain H as a minor. Our algorithm is the first single-exponential algorithm for this problem and improves all previous minor testing algorithms in planar graphs. Our technique is based on a novel approach called partially embedded dynamic programming.  
  • Isolde Adler, Frederic Dorn, Fedor V. Fomin, Ignasi Sau and Dimitrios M. Thilikos, Fast Minor Testing in Planar Graphs, Algorithmica, DOI 10.1007/s00453-011-9563-9
  • 11.04.2012,
    Robert Obryk
    Wait-free parallel summing snapshot
    An atomic snapshot [1] is a well-known parallel data structure that implements two operations: update, which updates a thread's value, and scan, which returns an array of all threads' values. A wait-free implementation of this structure using O(1) time for update and O(n) time for scan in O(n) memory is known [2]. In this talk we will present an implementation of a similar structure, where scan returns not the whole array but its sum (or the result of applying any other associative operation to its elements). The structure uses O(n) memory, update runs in O(log n) time and scan runs in O(1) time. We will also present an implementation of a structure which has an atomic update-and-scan operation. Its memory complexity is O(n log^2 n), and the time complexity of the operation is O(log^2 n). We will show how to implement that structure on machines with small word size without sacrificing wait-freeness or complexity.  
  • 04.04.2012,
    Piotr Wójcik
    Algorithmic Meta-theorems for Restrictions of Treewidth
    Possibly the most famous algorithmic meta-theorem is Courcelle’s theorem, which states that all MSO-expressible graph properties are decidable in linear time for graphs of bounded treewidth. Unfortunately, the running time’s dependence on the formula describing the problem is in general a tower of exponentials of unbounded height, and there exist lower bounds proving that this cannot be improved even if we restrict ourselves to deciding FO logic on trees.

    We investigate whether this parameter dependence can be improved by focusing on two proper subclasses of the class of bounded treewidth graphs: graphs of bounded vertex cover and graphs of bounded max-leaf number. We prove stronger algorithmic meta-theorems for these more restricted classes of graphs. More specifically, we show it is possible to decide any FO property in both of these classes with a singly exponential parameter dependence and that it is possible to decide MSO logic on graphs of bounded vertex cover with a doubly exponential parameter dependence. We also prove lower bound results which show that our upper bounds cannot be improved significantly, under widely believed complexity assumptions. Our work addresses an open problem posed by Michael Fellows.  

  • Michael Lampis, Algorithmic Meta-theorems for Restrictions of Treewidth, Algorithmica, DOI 10.1007/s00453-011-9554-x
  • 21.03.2012,
    Mikołaj Bojańczyk, UW
    Automata Theory in XML
    I will describe how finite automata are used in XML documents. The main idea is that an XML document is a finite tree, and therefore one can use tree automata to process XML documents. XML documents bring new questions that have not been previously studied by automata theory. Maybe the most interesting issue is that when modeling an XML document as a tree, the node labels come from an infinite alphabet. For instance, a node that stores a book in a bibliographic database comes with the book's ID. A typical query might check if the database contains two books with the same ID.  
    Bartłomiej Bosek,
    Grzegorz Matecki
    News on Semiantichains and Unichain Coverings
    The famous theorem of Dilworth states that for any partial order the size of a largest antichain is equal to the size of a smallest chain covering. In the late '70s C. Greene and D. Kleitman successfully generalized Dilworth's theorem, which moved forward the study of partially ordered sets. Later, Saks proved the equivalent statement that in the product C×Q of a chain C and an arbitrary poset Q, the size of a maximum antichain equals the size of a minimum covering by chains that are constant in one of the two coordinates (called unichains). We study the Saks-West conjecture, which gives a nice generalization of the previous theorem:

    For any two posets P and Q the size of a maximum semiantichain and the size of a minimum unichain covering in the product P×Q are equal.

    Here, a semiantichain is a set in which no two points lie in a common unichain. B. Bosek, S. Felsner, K. Knauer and G. Matecki found conditions on P and Q that imply the conjecture for several new classes of posets. However, they also found an example showing that in general this min-max relation is false. This finally disproves the 30-year-old conjecture of M. Saks and D. West.

    Kamil Kraszewski
    A New Lower Bound on the Maximum Number of Satisfied Clauses in Max-SAT and Its Algorithmic Applications
    A pair of unit clauses is called conflicting if it is of the form (x), (¬x). A CNF formula is unit-conflict free (UCF) if it contains no pair of conflicting unit clauses. Lieberherr and Specker (J. ACM 28:411–421, 1981) showed that for each UCF CNF formula with m clauses we can simultaneously satisfy at least ϕ'm clauses, where ϕ'=(√5−1)/2. We improve the Lieberherr-Specker bound by showing that for each UCF CNF formula F with m clauses we can find, in polynomial time, a subformula F' with m' clauses such that we can simultaneously satisfy at least ϕ'm+(1−ϕ')m'+(2−3ϕ')n''/2 clauses (in F), where n'' is the number of variables in F which are not in F'. We consider two parameterized versions of MAX-SAT, where the parameter is the number of satisfied clauses above the bounds m/2 and m(√5−1)/2. The former bound is tight for general formulas, and the latter is tight for UCF formulas. Mahajan and Raman (J. Algorithms 31:335–354, 1999) showed that every instance of the first parameterized problem can be transformed, in polynomial time, into an equivalent one with at most 6k+3 variables and 10k clauses. We improve this to 4k variables and (2√5+4)k clauses. Mahajan and Raman conjectured that the second parameterized problem is fixed-parameter tractable (FPT). We show that the problem is indeed FPT by describing a polynomial-time algorithm that transforms any problem instance into an equivalent one with at most (7+3√5)k variables. Our results are obtained using our improvement of the Lieberherr-Specker bound above.  
  • Robert Crowston, Gregory Gutin, Mark Jones and Anders Yeo, A New Lower Bound on the Maximum Number of Satisfied Clauses in Max-SAT and Its Algorithmic Applications, Algorithmica DOI 10.1007/s00453-011-9550-1
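    The bound ϕ'm can be checked experimentally on small formulas with a brute-force Max-SAT solver (the DIMACS-style literal encoding and function name are ours, exponential in the number of variables):

```python
from itertools import product

def max_sat(clauses, n_vars):
    """Maximum number of simultaneously satisfiable clauses, by brute
    force over all 2^n_vars assignments.  Clauses are tuples of nonzero
    integer literals: i means x_i is true, -i means x_i is false."""
    best = 0
    for assign in product([False, True], repeat=n_vars):
        sat = sum(1 for c in clauses
                  if any((lit > 0) == assign[abs(lit) - 1] for lit in c))
        best = max(best, sat)
    return best
```

    For the UCF formula (x1), (¬x1 ∨ x2), (¬x2) with m = 3, the bound ϕ'm ≈ 1.85 promises at least 2 satisfiable clauses, which is indeed the optimum.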
  • 29.02.2012,
    Piotr Kołacz
    On-line approximate string matching with bounded errors
    We introduce a new dimension to the widely studied on-line approximate string matching problem by introducing an error threshold parameter ϵ, so that the algorithm is allowed to miss occurrences with probability ϵ. This is particularly appropriate for this problem, as approximate searching is used to model many cases where exact answers are not mandatory. We show that the relaxed version of the problem allows us to break the average-case optimal lower bound of the classical problem, achieving average case O(n log_σ m/m) time with any ϵ=poly(k/m), where n is the text size, m the pattern length, k the number of differences for edit distance, and σ the alphabet size. Our experimental results show the practicality of this novel and promising research direction. Finally, we extend the proposed approach to the multiple approximate string matching setting, where approximate occurrences of r patterns are simultaneously sought. Again, we can break the average-case optimal lower bound of the classical problem, achieving average case O(n log_σ(rm)/m) time with any ϵ=poly(k/m).  
  • Marcos Kiwi, Gonzalo Navarro and Claudio Telha, On-line approximate string matching with bounded errors, Theoretical Computer Science 412 (2011), 6359–6370
  • 18.01.2012,
    Jonasz Pamuła
    1-Local 7/5-Competitive Algorithm for Multicoloring Hexagonal Graphs
    In the frequency allocation problem, we are given a cellular telephone network whose geographical coverage area is divided into cells, where phone calls are serviced by frequencies assigned to them, so that none of the pairs of calls emanating from the same or neighboring cells is assigned the same frequency. The problem is to use the frequencies efficiently, i.e. minimize the span of frequencies used. The frequency allocation problem can be regarded as a multicoloring problem on a weighted hexagonal graph, where every vertex knows its position in the graph. We present a 1-local 7/5-competitive distributed algorithm for multicoloring a hexagonal graph, thereby improving the previous 1-local 17/12-competitive algorithm.  
  • Petra Šparl, Rafał Witkowski, Janez Žerovnik, 1-Local 7/5-Competitive Algorithm for Multicoloring Hexagonal Graphs, Algorithmica, DOI 10.1007/s00453-011-9562-x
  • 11.01.2012,
    Paweł Wanat
    Exact Algorithms for Edge Domination
    An edge dominating set in a graph G=(V,E) is a subset of the edges D⊆E such that every edge in E is adjacent or equal to some edge in D. The problem of finding an edge dominating set of minimum cardinality is NP-hard. We present a faster exact exponential time algorithm for this problem. Our algorithm uses O(1.3226^n) time and polynomial space. The algorithm combines an enumeration approach of minimal vertex covers in the input graph with the branch and reduce paradigm. Its time bound is obtained using the measure and conquer technique. The algorithm is obtained by starting with a slower algorithm which is refined stepwise. In each of these refinement steps, the worst cases in the measure and conquer analysis of the current algorithm are reconsidered and a new branching strategy is proposed on one of these worst cases. In this way a series of algorithms appears, each one slightly faster than the previous one, ending in the O(1.3226^n) time algorithm. For each algorithm in the series, we also give a lower bound on its running time. We also show that the related problems: minimum weight edge dominating set, minimum maximal matching and minimum weight maximal matching can be solved in O(1.3226^n) time and polynomial space using modifications of the algorithm for edge dominating set. In addition, we consider the matrix dominating set problem which we solve in O(1.3226^{n+m}) time and polynomial space for n×m matrices, and the parametrised minimum weight maximal matching problem for which we obtain an O∗(2.4179^k) time and space algorithm.  
  • Johan M.M. van Rooij, Hans L. Bodlaender, Exact Algorithms for Edge Domination, Algorithmica, DOI 10.1007/s00453-011-9546-x
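    To make the problem itself concrete (this is only an illustrative brute-force search, not the branch-and-reduce algorithm of the paper, and the function name is mine), one can enumerate edge subsets in order of size until a dominating one is found:

```python
from itertools import combinations

def min_edge_dominating_set(edges):
    """Brute-force minimum edge dominating set: smallest D within E such
    that every edge of E shares an endpoint with (or is) some edge in D."""
    for k in range(len(edges) + 1):
        for cand in combinations(edges, k):
            covered = {x for e in cand for x in e}
            if all(u in covered or v in covered for (u, v) in edges):
                return list(cand)

# On the path 0-1-2-3 the single middle edge already dominates every edge.
```

    This takes time exponential in the number of edges, which is exactly the kind of naive bound the paper's O(1.3226^n) algorithm improves upon.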
  • 04.01.2012,
    Paweł Komosa
    An Improved FPT Algorithm and a Quadratic Kernel for Pathwidth One Vertex Deletion
    The PATHWIDTH ONE VERTEX DELETION (POVD) problem asks whether, given an undirected graph G and an integer k, one can delete at most k vertices from G so that the remaining graph has pathwidth at most 1. The question can be considered as a natural variation of the extensively studied FEEDBACK VERTEX SET (FVS) problem, where the deletion of at most k vertices has to result in the remaining graph having treewidth at most 1 (i.e., being a forest). Recently Philip et al. (WG, Lecture Notes in Computer Science, vol. 6410, pp. 196–207, 2010) initiated the study of the parameterized complexity of POVD, showing a quartic kernel and an algorithm which runs in time 7^k·n^{O(1)}. In this article we improve these results by showing a quadratic kernel and an algorithm with time complexity 4.65^k·n^{O(1)}, thus obtaining almost tight kernelization bounds when compared to the general result of Dell and van Melkebeek (STOC, pp. 251–260, ACM, New York, 2010). Techniques used in the kernelization are based on the quadratic kernel for FVS, due to Thomassé (ACM Trans. Algorithms 6(2), 2010).  
  • Marek Cygan, Marcin Pilipczuk, Michał Pilipczuk and Jakub Onufry Wojtaszczyk, An Improved FPT Algorithm and a Quadratic Kernel for Pathwidth One Vertex Deletion, Algorithmica DOI 10.1007/s00453-011-9578-2
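    Graphs of pathwidth at most 1 are exactly forests of caterpillars (trees in which removing all leaves yields a path), so the target property of POVD can be tested directly. The sketch below is an illustrative checker under that characterization, not part of the paper's kernelization; the function name is mine:

```python
def has_pathwidth_at_most_one(n, edges):
    """Check that every connected component is a caterpillar tree."""
    adj = {v: set() for v in range(n)}
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    seen = set()
    for s in range(n):
        if s in seen:
            continue
        comp = [s]
        seen.add(s)
        i = 0
        while i < len(comp):  # BFS over the component
            for w in adj[comp[i]]:
                if w not in seen:
                    seen.add(w)
                    comp.append(w)
            i += 1
        m = sum(len(adj[v]) for v in comp) // 2
        if m != len(comp) - 1:
            return False  # the component contains a cycle
        # Spine = non-leaf vertices; in a caterpillar it induces a path,
        # so every spine vertex has at most two spine neighbours.
        spine = {v for v in comp if len(adj[v]) >= 2}
        if any(sum(1 for w in adj[v] if w in spine) > 2 for v in spine):
            return False
    return True
```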
  • 21.12.2011,
    Dominik Dudzik
    Exact Algorithms for Finding Longest Cycles in Claw-Free Graphs
    The HAMILTONIAN CYCLE problem is the problem of deciding whether an n-vertex graph G has a cycle passing through all vertices of G. This problem is a classic NP-complete problem. Finding an exact algorithm that solves it in O*(α^n) time for some constant α<2 was a notorious open problem until very recently, when Björklund presented a randomized algorithm that uses O*(1.657^n) time and polynomial space. The LONGEST CYCLE problem, in which the task is to find a cycle of maximum length, is a natural generalization of the HAMILTONIAN CYCLE problem. For a claw-free graph G, finding a longest cycle is equivalent to finding a closed trail (i.e., a connected even subgraph, possibly consisting of a single vertex) that dominates the largest number of edges of some associated graph H. Using this translation we obtain two deterministic algorithms that solve the LONGEST CYCLE problem, and consequently the HAMILTONIAN CYCLE problem, for claw-free graphs: one algorithm that uses O*(1.6818^n) time and exponential space, and one algorithm that uses O*(1.8878^n) time and polynomial space.  
  • H.J. Broersma, F.V. Fomin, P. van 't Hof and D. Paulusma, Exact algorithms for finding longest cycles in claw-free graphs, Algorithmica, DOI 10.1007/s00453-011-9576-4
  • 07.12.2011,
    Wojciech Bukowicki
    Bipartite Matching in the Semi-streaming Model
    We present the first deterministic 1+ε approximation algorithm for finding a large matching in a bipartite graph in the semi-streaming model which requires only O((1/ε)^5) passes over the input stream. In this model, the input graph G=(V,E) is given as a stream of its edges in some arbitrary order, and the storage of the algorithm is bounded by O(n polylog n) bits, where n=|V|. The only previously known arbitrarily good approximation for general graphs is achieved by the randomized algorithm of McGregor (Proceedings of the International Workshop on Approximation Algorithms for Combinatorial Optimization Problems and Randomization and Computation, Berkeley, CA, USA, pp. 170–181, 2005), which uses Ω((1/ε)^{1/ε}) passes. We show that even for bipartite graphs, McGregor’s algorithm needs (1/ε)^{Ω(1/ε)} passes, thus it is necessarily exponential in the approximation parameter. The design as well as the analysis of our algorithm require the introduction of some new techniques. A novelty of our algorithm is a new deterministic assignment of matching edges to augmenting paths which is responsible for the complexity reduction and gets rid of randomization.

    We repeatedly grow an initial matching using augmenting paths up to a length of 2k+1 for k=2/ε. We terminate when the number of augmenting paths found in one iteration falls below a certain threshold, also depending on k, which guarantees a 1+ε approximation. The main challenge is to find those augmenting paths without requiring an excessive number of passes. In each iteration, using multiple passes, we grow a set of alternating paths in parallel, considering each edge as a possible extension as it comes along in the stream. Backtracking is used on paths that fail to grow any further. Crucial are the so-called position limits: when a matching edge is the i-th matching edge in a path and is then removed by backtracking, it will only be inserted into a path again at a position strictly smaller than i. This rule strikes a balance between terminating quickly on the one hand and giving the procedure enough freedom on the other.  

  • Sebastian Eggert, Lasse Kliemann, Peter Munstermann, Anand Srivastav, Bipartite Matching in the Semi-streaming Model, Algorithmica, DOI 10.1007/s00453-011-9556-8
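    As a point of reference for the abstract above (this is not the paper's algorithm), the classical one-pass greedy algorithm already fits the semi-streaming model: it produces a maximal matching, hence a 1/2-approximation, in a single pass. A minimal sketch, with a function name of my choosing:

```python
def one_pass_greedy_matching(edge_stream):
    """Single pass over the edge stream; keep an edge iff both of its
    endpoints are still free. The result is a maximal matching, hence
    at least half the size of a maximum matching."""
    matched = set()
    matching = []
    for u, v in edge_stream:
        if u not in matched and v not in matched:
            matching.append((u, v))
            matched.update((u, v))
    return matching
```

    Improving the approximation factor from 1/2 to 1+ε is what requires the multi-pass augmenting-path machinery described above.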
  • 30.11.2011,
    Michał Feret
    Guard games on graphs
    A team of mobile agents, called guards, tries to keep an intruder out of an assigned area by blocking all possible attacks. In a graph model of this setting, the guards and the intruder are located on the vertices of a graph, and they move from node to node via connecting edges. The area protected by the guards is an induced subgraph of the given graph. We investigate the algorithmic aspects of the guarding problem, which is to find the minimum number of guards sufficient to patrol the area. We show that the guarding problem is PSPACE-hard and provide a set of approximation algorithms. All approximation algorithms are based on the study of a variant of the game where the intruder must reach the guarded area in a single step in order to win. This variant of the game turns out to give a 2-approximation for the guarding problem, and for graphs without cycles of length 5 the minimum numbers of guards required in the two games coincide. We give a polynomial time algorithm for solving the one-step guarding problem in graphs of bounded treewidth, and complement this result by showing that the problem is W[1]-hard when parameterized by the treewidth of the input graph. We also show that the problem is fixed parameter tractable (FPT) when parameterized by the treewidth and maximum degree of the input graph. Finally, we turn our attention to a large class of sparse graphs, including planar graphs and graphs of bounded genus, namely apex-minor-free graphs. We prove that the one-step guarding problem is FPT and admits a PTAS on apex-minor-free graphs.  
  • Fedor V. Fomin, Petr A. Golovach, Daniel Lokshtanov, Guard games on graphs: Keep the intruder out! , Theoretical Computer Science 412 (2011), 6484–6497
  • 23.11.2011,
    Albert Łącki
    Almost Exact Matchings
    In the exact matching problem we are given a graph G, some of whose edges are colored red, and a positive integer k. The goal is to determine if G has a perfect matching exactly k edges of which are red. More generally, if the matching number of G is m = m(G), the goal is to find a matching with m edges exactly k of which are red, or to determine that no such matching exists. This problem is one of the few remaining problems that have efficient randomized algorithms (in fact, this problem is in RNC) but for which no polynomial time deterministic algorithm is known. The first result shows that, in a sense, this problem is as close to being in P as one can get. We give a polynomial time deterministic algorithm that either correctly decides that no maximum matching has exactly k red edges, or exhibits a matching with m(G)−1 edges having exactly k red edges. Hence, the additive error is one. We also present an efficient algorithm for the exact matching problem in families of graphs for which this problem is known to be tractable. We show how to count the number of exact perfect matchings in K_{3,3}-minor free graphs (these include all planar graphs as well as many others) in O(n^{3.19}) worst case time. Our algorithm can also count the number of perfect matchings in K_{3,3}-minor free graphs in O(n^{2.19}) time.  
  • Raphael Yuster, Almost Exact Matchings, Algorithmica, DOI 10.1007/s00453-011-9519-0
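    For intuition, the decision version is easy to solve by exhaustive search on tiny instances (in exponential time, of course; the point of the results above is doing better). An illustrative sketch, with names of my choosing:

```python
from itertools import combinations

def exact_perfect_matching(n, edges, red, k):
    """Find a perfect matching of an n-vertex graph using exactly k red
    edges, by brute force. `red` is a set of indices into `edges`."""
    for cand in combinations(range(len(edges)), n // 2):
        verts = [x for i in cand for x in edges[i]]
        if len(set(verts)) == n and sum(i in red for i in cand) == k:
            return [edges[i] for i in cand]
    return None
```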
  • 16.11.2011,
    Lech Duraj & Grzegorz Herman
    Garbage collection by reference counting the strongly connected components
    Traditional methods of automatic collection of unreachable objects (garbage) employ reference counting and/or graph traversal. The disadvantage of the former is its inherent inability to collect cyclic garbage structures (to deal with those, a supplemental, traversal-based method is often used). The latter suffers from having to traverse (at least periodically) all reachable objects. Heuristics based on the generational hypothesis lower this overhead, but only at the cost of failing to provide strong bounds on when garbage is going to be collected or how much total memory will be used. We propose a different approach, based on counting references between (approximations of) the strongly connected components of the object graph. Our method is real-time (it has a constant time bound on any single operation) and concurrent (it allows collection to be interleaved with the program's activity). It provides an arbitrarily strong bound on memory usage. It makes use of the generational hypothesis, avoiding the traversal of objects permanently reachable by "old enough" edges. It uses constant memory per managed object plus constant global memory, and employs no data structures more complex than a stack, which gives hope that it can be performed in a lock-free manner.  
    Michał Staromiejski
    When are finite rings indecomposable?
    The main goal of the talk is to present a simple characterization of local (a.k.a. indecomposable) finite algebras over (finite) fields. The characterization leads to a polynomial-time algorithm for testing locality of such algebras and, in turn, to a polynomial-time locality test for arbitrary finite rings. Generalizations to algebras over arbitrary fields and related open questions will be discussed.  
    Leszek Horwath
    Asymptotically Optimal Randomized Rumor Spreading
    We present a new protocol solving the fundamental problem of disseminating a piece of information to all members of a group of n players.  
    Dominik Dudzik
    Kamil Kraszewski
    A New Upper Bound for Max-2-SAT: A Graph-Theoretic Approach
    In MaxSat, we ask for an assignment satisfying the maximum number of clauses of a Boolean formula in CNF. We present an algorithm with a running time bound of O*(2^(K/6.2158)) for Max-2-SAT (each clause contains at most two literals), where K is the number of clauses. The bound is achieved by using heuristic priorities for the choice of the variable on which to branch. The implementation of these heuristic priorities is rather simple, though they have a significant effect on the running time. The analysis also uses a non-standard measure.  
  • D. Raible, H. Fernau. A new upper bound for max-2-sat: A graph-theoretic approach. MFCS, LNCS 5162, 551–562, 2008
  • 25.05.2011,
    Szymon Borak
    Marek Wróbel
    Design and analysis of online batching systems
  • Regant Y.s. Hung, Hing-Fung Ting, Design and analysis of online batching systems, Algorithmica (2010), 57: 217--231
  • 11.05.2011,
    Jonasz Pamuła
    Michał Kukieła
    Some combinatorial approaches to homotopy
    Different notions of "homotopy" equivalences of partially ordered sets may be defined in terms of various one-point reductions and expansions. These have recently been studied by J.A. Barmak and G.E. Minian. Their research was inspired by results of R.E. Stong from the 1960s, who classified, using elementary "deformations", the homotopy types of finite topological spaces.

    Finite spaces satisfying the T0 separability axiom may be easily identified with partially ordered sets, and the deformations of Stong turn out to be dismantlings by irreducible points. Some, natural from a topologist's point of view, generalizations of irreducible points give interesting definitions of "homotopy".

    I will present relations between these notions and their connections to topics such as poset fixed point theory, evasiveness and homotopy theory of polyhedra.  

    Piotr Wójcik
    An Approximation Algorithm for Binary Searching in Trees
    We consider the problem of computing efficient strategies for searching in trees. As a generalization of the classical binary search for ordered lists, suppose one wishes to find an (unknown) specific node of a tree by asking queries about its arcs, where each query indicates the endpoint closer to the desired node. Given the likelihood of each node being the one searched for, the objective is to compute a search strategy that minimizes the expected number of queries. Practical applications of this problem include file system synchronization and software testing. Here we present a linear time algorithm which is the first constant factor approximation for this problem. This represents a significant improvement over the previous O(log n) approximation.  
  • Eduardo Laber and Marco Molinaro, An Approximation Algorithm for Binary Searching in Trees, Algorithmica, 59(2010), 601-620
  • 20.04.2011,
    Piotr Szafruga
    Greedy Remote-Clique Algorithm
  • B.Birnbaum and K.J.Goldman, An Improved Analysis for a Greedy Remote-Clique Algorithm Using Factor-Revealing LPs
  • 13.04.2011,
    Grzegorz Guśpiel
    A Constant Space, Subquadratic Algorithm for Inverse Permutation
    We assume the permutation is given by an array in which the i-th element denotes the value at i. Finding its inverse can be achieved in linear time with a simple cycle-based algorithm. Limiting the numbers that can be stored in our array to the range of the permutation still allows a simple O(n^2) solution. A better O(n^{3/2}) algorithm will be presented.  
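    A minimal sketch of the cycle-based linear-time inversion mentioned above (the function name is mine): it walks each cycle reversing the pointers, using the sign bit as a "visited" marker, which is exactly the kind of extra value range the restricted variant of the problem disallows.

```python
def invert_in_place(p):
    """Invert the permutation p in place in O(n) time, marking visited
    entries by storing -(value + 1); needs values outside 0..n-1."""
    n = len(p)
    for start in range(n):
        if p[start] < 0:
            continue  # already on a processed cycle
        prev, cur = start, p[start]
        while cur != start:  # walk the cycle, reversing each pointer
            nxt = p[cur]
            p[cur] = -(prev + 1)
            prev, cur = cur, nxt
        p[start] = -(prev + 1)
    for i in range(n):  # clear the markers
        p[i] = -p[i] - 1
    return p
```

    Forbidding values outside the range 0..n-1 removes this trick, which is what makes the restricted problem interesting.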
    Marek Grabowski
    Computing steady state of Markov Chain by combinatorial aggregation
    Probabilistic model checking has been receiving quite a lot of attention recently (e.g., DARPA is funding the PRISMATIC project). Unlike 'normal' model checking, which has been researched for nearly 30 years, probabilistic model checking is still a young discipline.

    The most common frameworks for modeling probabilistic processes are Markov chains, both discrete- and continuous-time, and Markov decision processes. One interesting question one can ask about DTMCs and CTMCs is to what distribution a given chain converges (i.e., what its steady state is).

    The theory of Markov chains is over 100 years old, and analytic solutions to all the interesting questions are well known. The problem with such solutions is that they are usually intractable for real-life models because of their size. This is why iterative methods are most commonly used. Unfortunately, they too fail on some examples.

    I will show an algorithm proposed by Pokarowski in his PhD thesis, and recently implemented by me, which for some classes of models gives a huge speedup in return for some loss of precision. I will describe the theory behind the algorithm, show how it works in general, and present some test results. I will also tell what modifications are on the way and what we hope to achieve in the end.  
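    For context, the iterative baseline the talk contrasts the aggregation method with can be sketched in a few lines (this is plain power iteration, not Pokarowski's algorithm; the function name is mine):

```python
def steady_state(P, steps=1000):
    """Approximate the stationary distribution of a DTMC with transition
    matrix P (rows sum to 1) by power iteration from the uniform vector."""
    n = len(P)
    pi = [1.0 / n] * n
    for _ in range(steps):
        pi = [sum(pi[i] * P[i][j] for i in range(n)) for j in range(n)]
    return pi
```

    For an irreducible aperiodic chain this converges geometrically at the rate of the second-largest eigenvalue modulus, which is precisely what becomes problematic on large, nearly decomposable real-life models.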

    Maciej Wawro
    Weighted Sum Coloring in Batch Scheduling of Conflicting Jobs
    Motivated by applications in batch scheduling of jobs in manufacturing systems and distributed computing, we study two related problems. Given is a set of jobs {J1, . . . , Jn}, where Jj has the processing time pj , and an undirected intersection graph G = ({1, 2, . . . , n}, E), with an edge (i, j) whenever the pair of jobs Ji and Jj cannot be processed in the same batch. We are to schedule the jobs in batches, where each batch completes its processing when the last job in the batch completes execution. The goal is to minimize the sum of job completion times. Our two problems differ in the definition of completion time of a job within a given batch. In the first variant, a job completes its execution when its batch is completed, whereas in the second variant, a job completes execution when its own processing is completed.

    For the first variant, we show that an adaptation of the greedy set cover algorithm gives a 4-approximation for perfect graphs. For the second variant, we give new or improved approximations for a number of different classes of graphs. The algorithms are of widely different genres (LP, greedy, subgraph covering), yet they curiously share a common feature in their use of randomized geometric partitioning.  

  • Leah Epstein, Magnus M. Halldórsson, Asaf Levin, Hadas Shachnai, Weighted Sum Coloring in Batch Scheduling of Conflicting Jobs , Algorithmica 55(2009) 643-665
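    To make the two completion-time variants concrete, the sketch below evaluates the objective for a given (already conflict-free) sequence of batches, assuming jobs within a batch run in parallel so a batch takes as long as its longest job; this is only an illustration of the definitions, not one of the paper's algorithms, and the function name is mine:

```python
def sum_completion_times(batches, p, own_completion):
    """batches: list of lists of job ids processed in this order;
    p: dict job -> processing time.
    own_completion=False: a job completes when its batch completes;
    own_completion=True:  a job completes when its own processing ends."""
    t = 0      # start time of the current batch
    total = 0
    for batch in batches:
        length = max(p[j] for j in batch)
        if own_completion:
            total += sum(t + p[j] for j in batch)
        else:
            total += len(batch) * (t + length)
        t += length
    return total
```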
  • 09.03.2011,
    Andrzej Grzesik
    Maximum number of pentagons in triangle free graphs
    Kasper Kopeć
    Fast index for approximate string matching
    Michał Feret
    Fast 3-coloring triangle free planar graphs
    Grzegorz Gutowski
    Mr Paint and Mrs Correct go fractional
    Adam Zydroń
    Improved Parameterized Set Splitting Algorithms: A Probabilistic Approach
    Przemysław Derengowski
    A best online algorithm for scheduling on two parallel batch machines.
    Michał Handzlik
    A fully dynamic graph algorithm for recognizing proper interval graphs
    Marcin Witkowski
    Load balancing games
    A load balancing game is the following kind of problem. Assume we are given M machines and N jobs. Each job i is associated with a vector p=(p_1,..,p_m), where p_j is the processing time of this job on machine j. Players correspond to jobs. The strategy set of each player is the set of machines. Given a strategy for each player, the total load on each machine is the sum of the processing times of the jobs that chose that machine. The aim of each player is to minimize the total load on its chosen machine. We, as an external observer, are interested in minimizing the total load on the most-loaded machine.
    We call a strategy profile (a vector consisting of the choices of each player) a Nash equilibrium (NE) if it is resilient to unilateral deviations, which means that no player has anything to gain by changing only his or her own strategy. A downside of NE is that it is not necessarily stable against deviations by coalitions of players. A pure Nash equilibrium which is resilient to deviations by coalitions is called a strong equilibrium (SE). Using a framework introduced by Feldman and Tamir [1], I estimate how close a NE is to an SE in certain load balancing games.  
  • [1] M. Feldman and T. Tamir. ,,Approximate strong equilibrium in job scheduling games". In SAGT, pages 58-69, 2008.
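    The pure-equilibrium condition above can be checked mechanically for a given assignment; a small illustrative sketch (the function name and the instance in the test are my own):

```python
def is_nash_equilibrium(p, assignment):
    """p[i][j] = processing time of job i on machine j;
    assignment[i] = machine chosen by player i. The profile is a pure
    NE iff no single job lowers the load it experiences by moving."""
    machines = len(p[0])
    load = [0] * machines
    for i, j in enumerate(assignment):
        load[j] += p[i][j]
    for i, j in enumerate(assignment):
        for j2 in range(machines):
            if j2 != j and load[j2] + p[i][j2] < load[j]:
                return False  # player i profits from deviating to j2
    return True
```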
  • 24.11.2010,
    Jakub Kozik
    Secretary problem on partial orders.
    n secretaries apply for a single secretary position. All the candidates are partially ordered according to their competences. They are interviewed one by one in a random order. After each interview we know only the induced partial order on the secretaries interviewed so far. The decision whether to accept or reject a candidate must be made just after the interview. The objective is to choose one of the maximal candidates.
    We are going to present some recent results in the search for an optimal strategy, with special emphasis on the result of R. Freij and J. Wästlund, whose strategy achieves success probability 1/e on every partial order.  
  • Freij, Ragnar; Wästlund, Johan; Partially ordered secretaries; Electronic Communications in Probability; Vol. 15 (2010) paper 45, pages 504-507.
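    For the classical totally ordered case, the optimal strategy (reject the first r−1 candidates, then take the first one beating everything seen so far) succeeds with probability Σ_{i=r}^{n} (1/n)·(r−1)/(i−1), which tends to 1/e for the best r. The brute-force sketch below (names mine) verifies this count on a tiny instance:

```python
from itertools import permutations

def threshold_success_count(n, r):
    """Count permutations of 1..n on which the threshold strategy
    (skip the first r-1, then take the first left-to-right maximum)
    picks the overall maximum n."""
    wins = 0
    for perm in permutations(range(1, n + 1)):
        pick, cur_max = None, 0
        for i, x in enumerate(perm):
            if i >= r - 1 and x > cur_max:
                pick = x
                break
            cur_max = max(cur_max, x)
        wins += (pick == n)
    return wins

# For n=4, r=2 the formula gives (1/4)(1/1 + 1/2 + 1/3) = 11/24.
```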
  • 10.11.2010,
    Bartosz Walczak
    An extremal problem for crossing vectors
    Two vectors u,v ∈ Z^w are called crossing if there are two coordinates i,j such that u_i−v_i ≥ 1 and v_j−u_j ≥ 1. They are called k-crossing if there are two coordinates i,j such that u_i−v_i ≥ k and v_j−u_j ≥ k. We consider the following question: what is the maximum size of a family of pairwise crossing but not k-crossing vectors in Z^w? Several (totally different) constructions of such families of size k^{w−1} are known. The conjecture is that k^{w−1} is optimal.

    The problem was posed by Felsner and Krawczyk in February 2010. Since then several groups have been trying hard to solve it, with little success. The conjecture is trivial for w=1,2. I will present the proof for w=3. It is not clear whether it brings us closer to a proof of the general conjecture, but it seems promising. For w≥4 the question is open. Solving the problem for general w might give us some new insights into posets and Sperner theory.  
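    Assuming the intended inequalities are u_i−v_i ≥ 1 and v_j−u_j ≥ 1 (and ≥ k for k-crossing), checking a pair is a one-liner, and a size-k family for w=2 can be verified mechanically; an illustrative sketch with names of my choosing:

```python
def crossing_level(u, v):
    """Largest k such that u and v are k-crossing (0 = not crossing):
    k = min(max_i (u_i - v_i), max_j (v_j - u_j))."""
    return max(0, min(max(a - b for a, b in zip(u, v)),
                      max(b - a for a, b in zip(u, v))))

# For w = 2 the family {(t, -t) : 0 <= t < k} has size k = k^{w-1};
# every pair is crossing but no pair is k-crossing.
```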

    Grzegorz Matecki
    First-Fit coloring of co-comparability graphs
    One of the simplest heuristics to obtain a proper coloring of a graph is the First-Fit algorithm. First-Fit visits the vertices of a graph in a given order and assigns to each the smallest possible number (color) such that no two vertices connected by an edge receive the same color. It is known that First-Fit can be forced to use arbitrarily many colors on 2-colorable co-comparability graphs. We proved that First-Fit uses a bounded number of colors on H-free co-comparability graphs with a fixed chromatic number if and only if H is a 2-colorable co-comparability graph. This provides a full characterization of First-Fit on co-comparability graphs in terms of forbidden structures.

    This is a joint work with Bartłomiej Bosek and Tomasz Krawczyk.  
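    For reference, First-Fit itself is only a few lines; the sketch below (generic, not specific to co-comparability graphs, and the function name is mine) colors vertices in the presented order:

```python
def first_fit_coloring(order, adj):
    """Greedy First-Fit: give each vertex, in the given order, the
    smallest color not used by any of its already-colored neighbors."""
    color = {}
    for v in order:
        used = {color[w] for w in adj[v] if w in color}
        c = 0
        while c in used:
            c += 1
        color[v] = c
    return color
```

    Already on the path 0-1-2-3 a bad presentation order forces a third color on a 2-colorable graph, the phenomenon the abstract studies at scale.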

    Marek Adrian
    Contributions of Endre Szemerédi in theoretical computer science
    This talk is based on one given by Avi Wigderson at a conference honoring the 70th birthday of Endre Szemerédi. Out of several results, we will look closely into three proofs Szemerédi participated in. First we will examine the dictionary problem: we want to store a set U = {u_1, …, u_n} ⊆ [2^k] (with n ≪ 2^k) using O(n) time and space, and the question is how to minimize the number of queries needed to determine whether x is in U. Next we shall look at sorting networks, where one using O(n log n) comparators has been explicitly given. Finally we shall compare deterministic and nondeterministic algorithms in linear time and their impact on k-paged graphs.  
    Torsten Ueckerdt
    Technical University Berlin
    Intersection graphs of gridpaths
    We consider so-called EPG representations of simple graphs: every vertex is modeled by a path in the plane square grid, such that the paths of two vertices have a grid-edge in common iff the two vertices are adjacent. The bend-number b(G) of a graph G is the minimal number k such that G has an EPG representation in which each path has no more than k bends. The bend-number is related to a graph's interval-number and track-number. For certain graph classes (planar graphs, complete bipartite graphs, graphs with maximum degree D, ...) we are interested in the maximum interval-, track-, and bend-number among all graphs in the class. We settle some of these questions but are left with many open ones.  
    Tomasz Krawczyk
    On-line dimension for posets excluding two long chains
    Maciej Wawro
    Weighted Sum Coloring in Batch Scheduling of Conflicting Jobs
    Mateusz Drewienkowski
    Priority algorithms for graph optimization problems
    Przemysław Gordinowicz
    TU Łódź
    On graphs isomorphic to their neighbour and non-neighbour sets
    The talk describes a construction of a universal countable graph, different from the Rado graph, such that for each of its vertices both the neighbourhood and the non-neighbourhood induce subgraphs isomorphic to the whole graph. This solves an open problem posed by A. Bonato at the 18th BCC.
    Szymon Borak
    Optimal algorithms for the path/tree-shaped facility location problems in trees
    Kasper Kopeć
    Finding paths between graph colorings: PSPACE-completeness
    Kamil Kraszewski
    Preemptive online scheduling: optimal algorithms for all speeds
    Piotr Micek
    Algorithmic version of the Lovász Local Lemma
    We will study the new approach of Robin Moser to give an algorithm for the Lovász Local Lemma. Most likely, we will stick to symmetric case.  
  • R. A. Moser, G. Tardos - A constructive proof of the general Lovász Local Lemma
  • 20.01.2010,
    Grzegorz Herman
    Unambiguous Functions in Logarithmic Space
    Grzegorz Matecki
    On-line matching on bipartite graphs
    We consider bipartite matching in the following on-line version. There is a bipartite graph G=(U,V,E), in which the vertices in V are given a priori and each vertex u in U arrives in an on-line fashion (with all its incident edges). An on-line algorithm matches each such vertex u to a previously unmatched adjacent vertex in V, if there is one. Decisions made by the algorithm are irrevocable. The objective is to maximize the size of the resulting matching. It is easy to observe that any greedy algorithm (never leave a vertex u unmatched if a match is possible) matches at least n/2 edges, where n is the size of an optimal matching for G given all at once. This bound is tight: no on-line algorithm can do better.

    We propose the following modification of on-line matching. The algorithm matches each incoming vertex u in U to a set S(u) of adjacent vertices in V (instead of a single vertex). When S(u) is not disjoint from S(x) for some already processed x in U, the algorithm must remove all common vertices from S(x). Additionally, the algorithm has to obey the rule that no S(x) may become empty if it was initialized with a nonempty set of vertices. An algorithm satisfying this condition is called adaptive. In this approach, a vertex u in U can always be matched to a vertex from S(u) (if S(u) is not empty). Therefore, the number of matched edges is equal to the number of nonempty sets S(u).

    We are going to present an optimal adaptive on-line algorithm which breaks the n/2 barrier and matches at least 0.589n+o(n) edges.  
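    The baseline greedy algorithm from the first paragraph is easy to state in code; the sketch below implements it (the adaptive 0.589n algorithm itself is substantially more involved and is not reproduced here; the function name is mine):

```python
def online_greedy_matching(arrivals):
    """arrivals: list of (u, list of u's neighbors in V), in arrival
    order. Match each u to the first still-unmatched neighbor, if any.
    The resulting matching has at least half the optimal size."""
    match = {}  # v -> u
    for u, nbrs in arrivals:
        for v in nbrs:
            if v not in match:
                match[v] = u
                break
    return match
```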

    Radosław Kożuch
    An O(m^2 n) Algorithm for Minimum Cycle Bases of Graphs
    Cycles in graphs play an important role in many applications, e.g., analysis of electrical networks, analysis of chemical and biological pathways, periodic scheduling, and graph drawing. Cycle bases are a compact description of the set of all cycles of a graph, and cycle bases consisting of short cycles or, in weighted graphs, of small weight cycles are preferable. I will present an algorithm for computing a minimum cycle basis of an undirected graph in time O(E^2 V + E V^2 log V).  
  • Telikepalli Kavitha, Kurt Mehlhorn, Dimitrios Michail, Katarzyna E. Paluch, An O(m^2 n) Algorithm for Minimum Cycle Basis of Graphs, Algorithmica (2008) 52: 333–349
  • 09.12.2009,
    Bartłomiej Bosek, Tomasz Krawczyk
    Subexponential algorithm for on-line chain partitioning problem (cont.)
    Grzegorz Matecki
    Golden ratio in on-line chain partitioning problem
    Consider a game with two players. The first one, called Spoiler, reveals points of an order, one at a time. The second one, called Algorithm, assigns each new point to some chain, thus maintaining a chain partition of the incoming order. The aim of Algorithm is to use as few chains as possible, whereas Spoiler tries to force Algorithm to use as many different chains as possible.

    The talk is on a variant of this game in which Spoiler reveals a semi-order and each new point is maximal at the time it arrives.

    We present a strategy for Algorithm using at most g·w chains, where g is the golden ratio (g ≈ 1.618) and w is the optimal (off-line) number of chains. Moreover, we prove that this is best possible. Our strategy induces a system of linear inequalities incorporating w (the off-line optimum) as well as the number of chains used by Algorithm. The solution of this system shows how the golden ratio is involved in the decisions made by Algorithm.  
  • Stefan Felsner, Kamil Kloch, Grzegorz Matecki and Piotr Micek, On-line chain partitioning of up-growing semi-orders, submitted
  • 25.11.2009,
    Andrei Krokhin
    Durham University
    On the hardness of losing weight
    Local search algorithms iteratively improve solutions by checking whether it is possible to find a better solution in the local neighborhood of the current solution. The local neighborhood is usually defined as the set of solutions that can be obtained by one (or more generally, at most k for some fixed k) elementary changes. Large values of k can give better results; however, the brute force search of the local neighborhood is not feasible for larger k. Parameterized complexity gives a convenient framework for studying the question whether there is an efficient way of searching the local neighborhood. In the talk, I will briefly overview parameterized complexity, summarize recent results in this direction, and explain in more detail the analysis of the problem of finding minimum weight solutions for Boolean CSP.
    (Joint work with Dániel Marx)  
    Bartłomiej Bosek, Tomasz Krawczyk
    Subexponential algorithm for on-line chain partitioning problem
    An on-line chain partitioning algorithm receives the elements of a poset, point by point, from some externally determined list. When a new point is presented, the algorithm learns its comparability status with previously presented points and makes an irrevocable choice of a color, keeping the invariant that all points with the same color form a chain. The choice of a color is made without knowledge of future points. The number of colors used by an on-line algorithm is usually compared to the width w of the poset. Kierstead showed that there exists an on-line algorithm that covers any poset with (5^w−1)/4 chains. On the other hand, Szemerédi proved that any on-line algorithm for the on-line chain partitioning problem has to use at least (w^2+w)/2 colors. We reduce the huge gap between the exponential upper bound and the polynomial lower bound by improving the upper bound to a subexponential function: in fact, we show an on-line chain partitioning algorithm that uses at most w^O(log w) colors.  
    Piotr Micek,
    Bartosz Walczak
    Graph eating games
    Alice and Bob share a connected graph. Its vertices are weighted with non-negative values summing up to one. The players eat the vertices alternately one by one (starting with Alice) until no vertex is left. The rule they have to obey is that after each move the vertices eaten so far form a connected subgraph of the original graph. Both players want to maximize their final gain, i.e., the total weight of the vertices they have eaten. This game for a cycle is known as the pizza eating problem. Recently, Knauer, Micek and Ueckerdt proved that Alice can eat 4/9 of any cycle (pizza), which is best possible and settles the conjecture of Peter Winkler.

    In the general game, Alice cannot guarantee herself any positive constant gain on all connected graphs. Curiously, the parity of the number of vertices makes a difference. Examples of graphs with an odd number of vertices on which Alice's gain is small require a very rich structure, in contrast to strikingly simple examples with an even number of vertices. In particular, there are trees with an even number of vertices which are very bad for Alice, while she can guarantee herself a positive constant gain on all odd trees.

    We wish to introduce the audience to this and similar games on graphs. Our techniques are quite general and seem to be applicable to other combinatorial games as well.  
    Tomasz Jurkiewicz
    Breaking through the O(m^2 n) Barrier for Minimum Cycle Bases
    Cycles in graphs play an important role in many applications, e.g., analysis of electrical networks, analysis of chemical and biological pathways, periodic scheduling, and graph drawing. Cycle bases are a compact description of the set of all cycles of a graph, and cycle bases consisting of short cycles or, in weighted graphs, of small weight cycles are preferable. We will present an algorithm for computing minimum weight cycle bases of general graphs in time O(E^omega); here V and E denote the number of nodes and edges, respectively, and omega is the exponent of the fastest matrix multiplication algorithm. Our algorithm is the first to run faster than Otilde(E^2 V). A key to our improved running time is the insight that the search for the minimum basis can be restricted to a set of candidate cycles of total length O(V E).  
  • Edoardo Amaldi, Claudio Iuliano, Tomasz Jurkiewicz, Kurt Mehlhorn and Romeo Rizzi. Breaking the O(m^2 n) Barrier for Minimum Cycle Basis, ESA 2009
  • 16.09.2009,
    Lê Đại Trí Mẫn
    University of Toronto
    An introduction to generalized combined traces
    Mazurkiewicz traces were introduced by A. Mazurkiewicz in 1977 as a language representation of partial orders to model "true concurrency". The theory of Mazurkiewicz traces has been utilized to tackle not only various aspects of concurrency theory but also problems from other areas, including combinatorics, graph theory, algebra, and logic.

    However, neither Mazurkiewicz traces nor partial orders can effectively model more complex relationships, e.g., "not later than". In this talk, I will introduce the theory of generalized combined traces (generalized comtraces). Generalized comtraces are generalizations of Mazurkiewicz traces, where generalized stratified order structures are used instead of partial orders. This allows us to model even the most general case of concurrent behaviors under the assumption that observations are stratified orders. The theory of generalized comtraces also provides us a wide variety of new and interesting problems to work on.  

  • R. Janicki and D.T.M. Le, Modelling Concurrency with Comtraces and Generalized Comtraces, preprint can be found at
  • 10.06.2009,
    Michal Handzlik
    Online unit clustering
    Online unit clustering is a clustering problem where the classification of points is done in an online fashion, but the exact location of clusters can be modified dynamically. We study several variants and generalizations of the online unit clustering problem, which are inspired by variants of packing and scheduling problems in the literature.
    Michał Wrona
    Wrocław University
    Quantified Positive Temporal Constraints
    A quantified constraint satisfaction problem (QCSP) is a version of a constraint satisfaction problem (CSP) in which variables occurring in an input formula can be not only existentially but also universally quantified. We say that a relation is temporal positive if it has a positive first-order definition over the order of rational numbers. Our main contribution is a complexity characterization of QCSP(L) for all finite sets of positive temporal relations L. The complexity of these problems varies: some of them are in LOGSPACE, while others are NLOGSPACE-complete, P-complete, NP-complete, or PSPACE-complete.
    Andrzej Ruciński
    UAM Poznań
    2nd Discrete Integration Meeting & SSAK
    Campus AGH, building B7, room 1.9, Thursday, May 28, at 16:00 (sharp)  
    Sylwia Antoniuk
    Efficient on-line repetition detection
    A repetition is a nonempty string of the form X^q, where q >= 2. Given a string S character by character and the value of q, the on-line repetition detection problem is to detect and report the first repetition in S, if it exists, in an on-line manner. Leung, Peng and Ting first studied the problem for q=2 and gave an O(m log^2 m) time algorithm, where m is the ending position of the first repetition in S. We improve the above work by reducing the time complexity to O(m log B), where B is the number of distinct characters in the first m characters of S. Moreover, we also solve the problem for q >= 3 with the same time complexity.  
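    To make the target concrete, here is a naive Python baseline (not the talk's algorithm): after each arriving character, test every candidate period for a q-th power ending at the current position. This does quadratic work per check in the worst case, against the talk's O(m log B) total.

```python
def first_repetition_online(stream, q=2):
    """Feed characters one by one; return the ending position (1-based)
    of the first q-th power X^q in the stream, or None if there is none.
    Naive baseline, not the O(m log B) algorithm of the talk."""
    s = []
    for ch in stream:
        s.append(ch)
        m = len(s)
        # a q-th power ending at position m has some period p <= m // q
        for p in range(1, m // q + 1):
            block = s[m - p:m]
            if all(s[m - i * p:m - (i - 1) * p] == block
                   for i in range(1, q + 1)):
                return m
    return None
```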
    Andrzej Kukier
    Similarity Search in High Dimensions via Hashing
    The nearest- or near-neighbor query problems arise in a large variety of database applications, usually in the context of similarity searching. Of late, there has been increasing interest in building search/index structures for performing similarity search over high-dimensional data, e.g. image databases, document collections, time-series databases and genome databases. Unfortunately, all known techniques for solving this problem fall prey to the "curse of dimensionality". That is, the data structures scale poorly with data dimensionality: in fact, if the number of dimensions exceeds 10 to 20, searching in k-d trees and related data structures involves the inspection of a large fraction of the database, thereby doing no better than brute-force linear search. It has been suggested that since the selection of features and the choice of a distance metric in typical applications is rather heuristic, determining an approximate nearest neighbor should suffice for most practical purposes. In this paper, we examine a novel scheme for approximate similarity search based on hashing. The basic idea is to hash the points of the database so as to ensure that the probability of a collision is much higher for objects that are close to each other than for those that are far apart. We provide experimental evidence that our method gives significant improvement in running time over other methods for searching in high-dimensional spaces based on hierarchical tree decomposition. Experimental results also indicate that our scheme scales well even for a relatively large number of dimensions (more than 50).
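    A minimal Python sketch of the locality-sensitive hashing idea, using random-hyperplane sign bits for cosine similarity (an illustrative variant, not necessarily the scheme of this paper; all names are made up):

```python
import random

def make_lsh(dim, bits, rng):
    """One LSH function: `bits` random hyperplanes; each output bit
    records on which side of a hyperplane the input vector lies."""
    planes = [[rng.gauss(0.0, 1.0) for _ in range(dim)] for _ in range(bits)]
    def h(v):
        return tuple(int(sum(p[i] * v[i] for i in range(dim)) >= 0.0)
                     for p in planes)
    return h

rng = random.Random(42)
h = make_lsh(dim=3, bits=16, rng=rng)
a = [1.0, 0.0, 0.0]
b = [0.99, 0.05, 0.0]    # nearly parallel to a
c = [-1.0, 0.2, 0.1]     # nearly opposite to a
# nearby vectors agree on almost all bits, distant ones on few
agree_close = sum(x == y for x, y in zip(h(a), h(b)))
agree_far = sum(x == y for x, y in zip(h(a), h(c)))
```

    Grouping points into buckets by hash value then makes collisions far likelier for near neighbors than for distant points, which is exactly the property such an index exploits.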
    Piotr Cieślik
    Edge-Coloring Bipartite Multigraphs in O(E log D) Time
    For V, E and D denoting the cardinality of the vertex set, the cardinality of the edge set, and the maximum degree of a bipartite multigraph G, it is shown that a minimal edge-coloring of G can be computed in O(E log D) time. This result follows from an algorithm for finding a matching in a regular bipartite graph in O(E) time.  
    Bartłomiej Bosek
    Posets omitting two incomparable chains of the same height
    We consider the problem of partitioning a poset into chains by the First-Fit algorithm. In general this algorithm uses arbitrarily many chains on a class of bounded width posets. In this paper we prove that First-Fit uses at most $4tw^2$ chains to partition any poset of width $w$ which does not induce two incomparable chains of height $t$. In this way we get a wide class of posets with a polynomial bound for the on-line chain partitioning problem. We also discuss some consequences of our result for coloring graphs with First-Fit.
  • B. Bosek, T. Krawczyk and E. Szczypka, First-Fit algorithm for on-line chain partitioning problem, manuscript, 2009.
  • 22.04.2009,
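    The First-Fit rule analyzed above is simple to state in code: each new element joins the first chain it can extend. A Python sketch, assuming elements arrive in a linear extension so that comparing with each chain's current top suffices:

```python
def first_fit_chains(elements, leq):
    """First-Fit chain partitioning of a poset. `elements` must arrive
    in a linear extension, so a new element x can join a chain iff it
    dominates the chain's current top; x joins the first such chain."""
    chains = []
    for x in elements:
        for chain in chains:
            if leq(chain[-1], x):       # chain's top <= x: x extends it
                chain.append(x)
                break
        else:
            chains.append([x])          # no chain fits: open a new one
    return chains
```

    On the divisibility order of 1..10, presented in natural order, First-Fit opens five chains.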
    Tomasz Schoen
    Spectral gaps and sum-sets
    Kolja Knauer
    Chip-Firing, Antimatroids, and Polyhedra
    Starting from the chip-firing game of Björner and Lovász we consider a generalization to vector addition systems that still admit algebraic structures as sandpile group or sandpile monoid. Every such vector addition language yields an antimatroid. We show that conversely every antimatroid can be represented this way. The inclusion order on the feasible sets of an antimatroid is an upper locally distributive lattice. We characterize polyhedra, which carry an upper locally distributive structure and show that they can be modeled by chip-firing games with gains and losses. At the end we point out a connection to a membership problem discussed by Korte and Lovász.  
    Apoloniusz Tyszka
    A hypothetical upper bound for solutions of a Diophantine equation with a finite number of solutions
    Let E_n be the set of all equations of the form
    x_i = 1, or
    x_i + x_j = x_k, or
    x_i * x_j = x_k,
    where i,j,k range over {1,...,n}. Moreover let K be one of the rings Z,Q,R,C. We construct a system S of equations from E_{21} such that S has infinitely many integer solutions and S has no integer solution in the cube [-2^{2^{21-1}},2^{2^{21-1}}]^{21}. We conjecture that if a system S, contained in E_n, has a finite number of solutions in K, then each such solution (x_1,...,x_n) satisfies (|x_1|,...,|x_n|) \in [0,2^{2^{n-1}}]^n. Applying this conjecture for K=Z, we prove that if a Diophantine equation has only finitely many integer (rational) solutions, then these solutions can be algorithmically found. On the other hand, an affirmative answer to the famous open problem whether each listable subset M of Z^n has a finite-fold Diophantine representation would falsify our conjecture for K=Z.

    Full text:  
  • A. Kozlowski and A. Tyszka, A Conjecture of Apoloniusz Tyszka on the Addition of Rational Numbers, 2008,
  • Yu. Matiyasevich, Hilbert's tenth problem: what was done and what is to be done. Hilbert's tenth problem: relations with arithmetic and algebraic geometry (Ghent, 1999), 1-47, Contemp. Math. 270, Amer. Math. Soc., Providence, RI, 2000
  • A. Tyszka, A system of equations, SIAM Problems and Solutions (electronic only), Problem 07-006, 2007,
  • A. Tyszka, Some conjectures on addition and multiplication of complex (real) numbers, Int. Math. Forum 4 (2009), no. 9-12, 521-530,
  • A. Tyszka, Bounds of some real (complex) solution of a finite system of polynomial equations with rational coefficients,
  • 18.03.2009,
    Grzegorz Gutowski
    Multicommodity Max-Flow Min-Cut Theorems
    The results of two papers will be presented:

    T. Leighton, S. Rao "Multicommodity Max-Flow Min-Cut Theorems and Their Use in Designing Approximation Algorithms"

    Abstract: In this paper, we establish max-flow min-cut theorems for several important classes of multicommodity flow problems. In particular, we show that for any n-node multicommodity flow problem with uniform demands, the max-flow for the problem is within an O(log n) factor of the upper bound implied by the min-cut. The result (which is existentially optimal) establishes an important analogue of the famous 1-commodity max-flow min-cut theorem for problems with multiple commodities.

    O. Günlük "A New Min-Cut Max-Flow Ratio for Multicommodity Flows"

    Abstract: We present an improved bound on the min-cut max-flow ratio for multicommodity flow problems with specified demands. To obtain the numerator of this ratio, the capacity of a cut is scaled by the demand that has to cross the cut. In the denominator, the maximum concurrent flow value is used. Our new bound is proportional to log(k*), where k* is the cardinality of the minimal vertex cover of the demand graph.

    Rafał Józefowicz
    On-line interval coloring with packing constraints
    Lech Duraj
    Optimal orientation problem for different graph classes
    We consider the problem of assigning a direction to every edge of an undirected graph so that the number of connected pairs of vertices is maximal. In the on-line variant, the vertices of the graph are given one by one, and the algorithm's decisions are permanent. While an off-line algorithm can always produce an orientation with O(n^2) connected pairs, the optimal outcome of an on-line algorithm can vary from O(n) to O(n^2), depending on the additional rules imposed on the graph. I'll present several possible sets of rules with different strategies and outcomes.
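    For small instances the off-line optimum is easy to compute by brute force over all 2^m orientations (an illustrative Python sketch, not one of the talk's strategies):

```python
from itertools import product

def best_orientation(n, edges):
    """Maximum, over all 2^m orientations of `edges` on vertices
    0..n-1, of the number of ordered pairs (u, v) joined by a
    directed path. Brute force, so only for tiny graphs."""
    def connected_pairs(arcs):
        adj = {u: [] for u in range(n)}
        for u, v in arcs:
            adj[u].append(v)
        count = 0
        for s in range(n):
            seen, stack = {s}, [s]
            while stack:                      # DFS from s
                u = stack.pop()
                for v in adj[u]:
                    if v not in seen:
                        seen.add(v)
                        stack.append(v)
            count += len(seen) - 1            # ordered pairs (s, x), x != s
        return count

    return max(connected_pairs([(v, u) if flip else (u, v)
                                for (u, v), flip in zip(edges, flips)])
               for flips in product([False, True], repeat=len(edges)))
```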
    Michał Morayne
    TU Wrocław
    How to choose the best twins
    We consider a variant of the secretary problem where each candidate has an identical twin that applies for the same job. We find an optimal strategy for choosing one of the best twins, the probability of success, and the asymptotics of this probability as the number of candidates tends to infinity.
    Bartosz Walczak
    Fast route planning in road networks
    I will discuss the problem of finding an optimal route between two specified nodes in a transportation network (represented as a weighted graph). When the graph is very big (as real road networks are) and there are lots of queries, one cannot afford to run a simple Dijkstra search for each individual query. Some additional structure is necessary in order to be able to answer the queries efficiently. A natural idea is to exploit the hierarchical structure of road networks: only "important" roads are worth considering far away from the source and destination. Several commercial route planning systems use the formal hierarchy (freeways, national highways etc.) for this purpose. However, formal hierarchies are not perfect, and the route found this way may not be optimal. The modern approach is to compute an appropriate hierarchy from scratch in a preprocessing step, based on the bare graph. After that, each query can be quickly answered with an exact optimal solution. I will present several ways of achieving this goal.
  • P. Sanders, D. Schultes, Engineering Fast Route Planning Algorithms, 2007.
  • P. Sanders, D. Schultes, Engineering Highway Hierarchies, 2006.
  • 17.12.2008,
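    One basic ingredient of such speed-ups, independent of any hierarchy, is bidirectional search: run Dijkstra from the source and from the target simultaneously and stop once the frontiers guarantee optimality. A Python sketch for undirected graphs:

```python
import heapq

def bidirectional_dijkstra(adj, s, t):
    """Shortest s-t distance in an undirected graph.
    adj: dict vertex -> list of (neighbor, weight). Returns inf if t
    is unreachable from s."""
    if s == t:
        return 0
    INF = float("inf")
    dist = [{s: 0}, {t: 0}]            # tentative distances: forward, backward
    pq = [[(0, s)], [(0, t)]]
    done = [set(), set()]
    best = INF                         # best s-t path length found so far
    while pq[0] and pq[1]:
        # standard stopping rule: no undiscovered path can beat `best`
        if pq[0][0][0] + pq[1][0][0] >= best:
            break
        side = 0 if pq[0][0][0] <= pq[1][0][0] else 1
        d, u = heapq.heappop(pq[side])
        if u in done[side]:
            continue
        done[side].add(u)
        for v, w in adj.get(u, []):
            nd = d + w
            if nd < dist[side].get(v, INF):
                dist[side][v] = nd
                heapq.heappush(pq[side], (nd, v))
            if v in dist[1 - side]:    # candidate path meeting at edge (u, v)
                best = min(best, dist[side][v] + dist[1 - side][v])
    return best
```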
    Sebastian Czerwiński
    University of Zielona Góra
    Short proof of Combinatorial Nullstellensatz
    Libor Barto
    Charles University, Prague
    Constraint Satisfaction Problems of Bounded Width
    We prove an algebraic characterization of the applicability of the bounded width algorithm, solving a problem posed by Larose and Zádori.
    Jan Jeżabek
    Resource Augmentation for Packet Switching with Agreeable Deadlines
    We study a scheduling problem known as on-line packet switching. We utilize a technique called resource augmentation, where an optimal off-line algorithm is compared against an on-line algorithm with more processing power, i.e. one that can transmit more than one packet per unit of time. A previous result showed that regardless of the processing power of the on-line algorithm there are instances on which it is outperformed by the off-line algorithm. We will show that if the jobs in the instance have agreeable deadlines (i.e. for any jobs i, j the time interval where i is available is not contained in the interior of the time interval where j is available), a resource augmented on-line algorithm which executes two jobs per time slot will always perform at least as well as the optimal off-line algorithm.
    Michał Staromiejski
    Computing isomorphisms between certain finite rings
    For a given finite field $F$ and two polynomials $f, g \in F[X]$, there is a polynomial-time algorithm deciding whether the (finite) rings $F[X]/(f)$ and $F[X]/(g)$ are isomorphic. However, the question of constructing an isomorphism, provided the rings are isomorphic, seems to be more challenging. In 1991, Lenstra showed that such an isomorphism can be computed in deterministic polynomial time if the polynomials $f$ and $g$ are irreducible over $F$. There is an obvious algorithm that solves the general problem if we allow randomization. In the talk I will present partial results which may help to find a deterministic polynomial-time algorithm.
    Piotr Micek
    How to eat 4/9 of a pizza
    Given two players alternately picking pieces of a pizza sliced by radial cuts, in such a way that after the first piece is taken every subsequently chosen piece is adjacent to some previously taken piece, we provide a strategy for the starting player to get $\frac{4}{9}$ of the pizza. This is best possible and settles a conjecture of Peter Winkler.
  • Kolja Knauer, Piotr Micek and Torsten Ueckerdt, How to eat 4/9 of a pizza, manuscript
  • 15.10.2008,
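    For any concrete cycle of slice weights, the value of optimal play is computable exactly: once the first slice is gone the remaining slices form an arc, and each turn takes one of its two ends, which is a standard interval DP. A brute-force Python sketch (the paper's contribution is the uniform 4/9 strategy, not this computation):

```python
from functools import lru_cache

def alice_share(weights):
    """Value of optimal play for the first player in the pizza game on
    a cycle of slice weights: after the first slice the rest is an arc,
    and each move takes one of the arc's two ends (interval DP)."""
    n = len(weights)
    total = sum(weights)

    def arc_sum(i, length):
        return sum(weights[(i + k) % n] for k in range(length))

    @lru_cache(maxsize=None)
    def gain(i, length):
        """Best total for the player to move on the arc starting at
        slice i (clockwise) with `length` slices remaining."""
        if length == 0:
            return 0
        rest = length - 1
        left = weights[i] + arc_sum(i + 1, rest) - gain((i + 1) % n, rest)
        right = weights[(i + rest) % n] + arc_sum(i, rest) - gain(i, rest)
        return max(left, right)

    # Alice takes some slice j and then collects whatever Bob, moving
    # first on the remaining arc, does not get: total - gain(arc).
    return max(total - gain((j + 1) % n, n - 1) for j in range(n))
```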
    Piotr Micek, Bartosz Walczak
    Summer conferences 2008
    Selected results and open problems from summer conferences are presented  
    Zbigniew Lonc
    Politechnika Warszawska
    Small transversals in hypergraphs
    A set of vertices of a hypergraph that meets all of its edges is called a transversal. The classical greedy algorithm for finding a "small" transversal in a hypergraph adds to the transversal, in each step, a vertex belonging to the largest number of edges containing no vertex chosen so far. An analysis of this algorithm (due to Chvátal and McDiarmid) leads to certain upper bounds on the minimum size of a transversal in a uniform hypergraph with a given number of vertices and edges. The talk will be devoted to a modification of this greedy algorithm whose analysis improves the bounds of Chvátal and McDiarmid. These results are related to the well-known combinatorial problem of determining the so-called Turán numbers; in particular, they imply new lower bounds on Turán numbers in certain special cases.
    Oleg Pikhurko
    Carnegie Mellon University
    The Stability Method for the Hypergraph Turán Problem
    For a k-graph F, the Turán function ex(n,F) is the maximum size of a k-graph on n vertices not containing a copy of F. Although this function was introduced by Turán as early as 1941, very few non-trivial cases have been solved and there is an abundance of open problems. We survey some recent results, concentrating on the so-called stability approach that was used to obtain some of them.
    Leszek Horwath
    Simple wildcard matching
    A brief review of wildcard matching algorithms, introducing a new deterministic method of O(n log m) complexity using the Fast Fourier Transform.
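    The arithmetic behind such FFT methods can be shown without the transform itself. Encode the wildcard as 0; then the pattern matches the text at shift j iff sum_i p_i * t_{j+i} * (p_i - t_{j+i})^2 = 0. The FFT's role is only to evaluate these correlations for all shifts in O(n log m) rather than the naive O(nm) used in this Python sketch:

```python
def wildcard_matches(text, pattern, wildcard="?"):
    """Shifts at which `pattern` matches `text`, `wildcard` matching any
    character. Each shift is tested with the sum p*t*(p-t)^2, which is
    zero exactly when every non-wildcard position agrees; an FFT would
    compute all these sums at once via three cross-correlations."""
    code = lambda c: 0 if c == wildcard else ord(c) + 1   # wildcard -> 0
    t = [code(c) for c in text]
    p = [code(c) for c in pattern]
    n, m = len(t), len(p)
    return [j for j in range(n - m + 1)
            if sum(p[i] * t[j + i] * (p[i] - t[j + i]) ** 2
                   for i in range(m)) == 0]
```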
    Jarosław Duda
    Combinatorial invariants for graph isomorphism problem
    Some topological invariants for finite graphs that can be computed in polynomial time are presented. They may be useful in recovering the graph up to isomorphism. At the very least, we will see how much information they encode.
    Przemysław Broniek
    Computational complexity of solving equation systems
    We consider the computational complexity of determining whether a system of equations over a fixed algebra A has a solution. This leads to two problems, SysTermSat(A) and SysPolSat(A), in which equations are built out of terms or polynomials, respectively. We are interested in characterizing those algebras for which SysPolSat can be solved in polynomial time. The problem has been widely studied and is open in general. We prove that the Constraint Satisfaction Problem for relational structures is polynomially equivalent to SysTermSat over unary algebras. It follows that the Dichotomy Conjecture for CSP is equivalent to the Dichotomy Conjecture for SysTermSat over unary algebras. We also give other partial characterizations of the computational complexity of SysTermSat(A), e.g. for algebras with generic operations taking only few values. This covers a wide class of four-element unary algebras.
    Jan Hązła
    Simplified O(n) planarity algorithm
    I will present O(n)-time methods for planar embedding and Kuratowski subgraph isolation that were inspired by the Booth-Lueker PQ-tree implementation of the Lempel-Even-Cederbaum vertex addition method. Instead of performing comprehensive tests of planarity conditions when embedding the edges from a vertex to its descendants, we take the edge to be the fundamental unit of addition to the partial embedding while preserving planarity. This eliminates batch planarity condition testing in favor of a few localized decisions of a path traversal process, and it exploits the fact that subgraphs can become biconnected by adding a single edge. The method is presented using only graph constructs, but the definitions of external activity and the path traversal process, as well as the theoretical analysis of correctness, can be applied to optimize the Shih-Hsu PC-tree as well.
  • John M. Boyer, Wendy J. Myrvold, On the Cutting Edge: Simplified O(n) Planarity by Edge Addition , Journal of Graph Algorithms and Applications 8 (3), 241-273 (2004)
  • 23.04.2008,
    Sebastian Czerwiński
    University of Zielona Gora
    On a conjecture of Brown, Graham, and Landman
    Maria Chmaj
    A linear-time algorithm for finding dominators in a flowgraph
    The problem of finding dominators in a flowgraph arises in many kinds of global code optimization and other settings. In 1979 Lengauer and Tarjan gave an almost-linear-time algorithm to find dominators. Several attempts at a linear-time algorithm were unsuccessful. I will talk about a linear-time algorithm which Georgiadis and Tarjan gave in 2004.  
  • L. Georgiadis, R. Tarjan, Finding dominators revisited, in Proceedings of the 15th ACM-SIAM Symposium on Discrete Algorithms (SODA 2004), 862-87
  • 09.04.2008,
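    For intuition, the classical iterative formulation (far from linear time, and not the Georgiadis-Tarjan algorithm) computes dominator sets directly from their defining equations: Dom(root) = {root}, and Dom(v) = {v} together with the intersection of Dom(p) over all predecessors p of v. A Python sketch:

```python
def dominators(succ, root):
    """Dominator sets of a flowgraph by naive fixed-point iteration.
    succ: dict vertex -> list of successors. Unreachable vertices are
    left with the full vertex set (a common convention in this scheme)."""
    nodes = set(succ) | {v for vs in succ.values() for v in vs}
    pred = {v: set() for v in nodes}
    for u, vs in succ.items():
        for v in vs:
            pred[v].add(u)
    dom = {v: set(nodes) for v in nodes}
    dom[root] = {root}
    changed = True
    while changed:                      # iterate to a fixed point
        changed = False
        for v in nodes - {root}:
            ps = [dom[p] for p in pred[v]]
            new = {v} | (set.intersection(*ps) if ps else set())
            if new != dom[v]:
                dom[v] = new
                changed = True
    return dom
```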
    Tomasz Krawczyk
    Dimension on-line of interval orders
    Jarek Grytczuk
    Open problems
    Karol Kosinski
    Faster algorithms for finding lowest common ancestors in directed acyclic graphs
    We present two new methods for finding a lowest common ancestor (LCA) for each pair of vertices of a directed acyclic graph (dag) on n vertices and m edges. The first method is surprisingly natural and solves the all-pairs LCA problem for the input dag in time O(n*m). The second method relies on a novel reduction of the all-pairs LCA problem to the problem of finding maximum witnesses for Boolean matrix product. The running time of the algorithm is O(n^2.575), so it improves the previously known O(n^2.688) time bound for the general all-pairs LCA problem in dags by Bender, Pemmasani, Skiena and Sumazin.
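    The notion itself is easy to pin down in code: in a dag a pair may have several LCAs, namely the common ancestors of which no other common ancestor is a descendant. A naive Python sketch via reachability sets (roughly the O(n*m) flavor of the first method, without its refinements):

```python
def all_pairs_lca(succ):
    """For every ordered pair (x, y) of a dag's vertices, the set of its
    LCAs: common ancestors u such that no other common ancestor is
    reachable from u. Ancestor sets include the vertex itself."""
    nodes = sorted(set(succ) | {v for vs in succ.values() for v in vs})
    reach = {}                       # reach[u]: u and everything below it

    def dfs(u):
        if u not in reach:
            r = {u}
            for v in succ.get(u, []):
                r |= dfs(v)
            reach[u] = r
        return reach[u]

    for u in nodes:
        dfs(u)
    anc = {v: {u for u in nodes if v in reach[u]} for v in nodes}
    lca = {}
    for x in nodes:
        for y in nodes:
            common = anc[x] & anc[y]
            lca[x, y] = {u for u in common
                         if not ((reach[u] & common) - {u})}
    return lca
```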
    Jaroslav Nesetril
    Charles University, Prague
    Dualities for structures
    Homomorphism dualities are related to logic, model theory, partially ordered sets, and of course to coloring of graphs. In this lecture we survey the recent development related to this notion.
    Tomasz Bartnicki
    University of Zielona Gora
    Graph coloring with an uncooperative partner
    Jacek and Placek color the vertices of a graph G alternately, using a given set of colors C, with Jacek going first. Placek agrees to cooperate with Jacek by respecting the rule of proper coloring. However, for some reason he does not want the job to be completed - his secret aim is to achieve a bad partial coloring. Is it possible for Jacek to complete the coloring somehow, in spite of Placek's insidious plan? If not, then how many additional colors are needed to guarantee that the graph can be successfully colored, no matter how clever Placek is?


    Maciej Chociej
    Visibility graphs, binary space partitioning and hidden surface removal (in theory and applications)
    Jan Jeżabek
    Online Buffer Management - Increasing Machine Speed
    We consider the following problem: a network switch receives packets characterized by a deadline and a weight. In each step the switch can transmit a fixed number of packets (this number is called the speed of the machine). The goal of the machine is to maximize the sum of weights of the transmitted jobs. It is easily seen that any on-line algorithm is outperformed by an optimal off-line algorithm with the same speed on some instance. We will show that this is true even if the speed of the on-line algorithm is increased by an arbitrary factor with respect to the speed of the off-line algorithm.  
    Piotr Zieliński
    Computing a longest common increasing subsequence
    Computing a longest common increasing subsequence of two given sequences is a classical problem in computer science. It can be solved in O(n*n*m) time and O(n*m) space using a simple dynamic programming technique. In 2004 I-Hsuan Yang proposed an O(n*m)-time algorithm based on the relationship between computing a longest common increasing subsequence and computing a longest common subsequence. Two years later, Yoshifumi Sakai improved the space complexity of Yang's algorithm and presented an O(n+m)-space algorithm. I will talk about both algorithms, especially how to implement them efficiently.
  • I-Hsuan Yang, A fast algorithm for computing a longest common increasing subsequence, 2004
  • Yoshifumi Sakai, A linear space algorithm for computing a longest common increasing subsequence, 2006
  • 16.01.2008,
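    The quadratic-time recurrence these algorithms build on fits in a few lines (a Python sketch of the standard O(n*m)-time, O(m)-space DP, not Sakai's algorithm itself):

```python
def lcis_length(a, b):
    """Length of a longest common increasing subsequence of a and b.
    After processing a prefix of `a`, dp[j] is the length of the best
    common increasing subsequence ending exactly with b[j]."""
    dp = [0] * len(b)
    for x in a:
        best = 0                 # best dp[k] over positions k with b[k] < x
        for j, y in enumerate(b):
            if y < x:
                best = max(best, dp[j])
            elif y == x:
                dp[j] = max(dp[j], best + 1)
    return max(dp, default=0)
```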
    Jiří Tůma
    Charles University, Prague
    Recovering ENIGMA, the cipher system
    Marek Chrobak
    University of California at Riverside
    Doubling technique in approximation algorithms
    This talk is based on an expository article by Claire Kenyon-Mathieu and myself, in which we show that a number of approximation algorithms in the literature use essentially the same (but not yet explicitly identified) technique that we refer to as "doubling". The essence of this approach is to use exponentially increasing estimates of the optimal solution to design approximate solutions. It can be used both in offline and online approximation algorithms. Applications include several clustering, searching, facility location, and scheduling problems.
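    The simplest setting in which doubling appears is searching for an unknown point on a line (the "lost cow" problem): walk 1 step right, 2 left, 4 right, and so on, doubling the excursion each time; the total distance walked is at most 9 times the distance to the target. A Python simulation (a classic illustration of the technique, not necessarily one of the article's examples):

```python
def lost_cow_cost(target):
    """Total distance walked before first reaching integer position
    `target` != 0 when searching by doubling: excursions from 0 to
    +1, -2, +4, -8, ... (each time passing back through the origin)."""
    walked, pos = 0, 0
    radius, sign = 1, 1
    while True:
        goal = sign * radius
        step = 1 if goal > pos else -1
        while pos != goal:              # walk toward the excursion's end
            pos += step
            walked += 1
            if pos == target:
                return walked
        radius *= 2                     # double the excursion, flip side
        sign = -sign
```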
    Alan Meller
    Arek Pawlik
    Optimal edge ranking of trees
    Libor Barto
    Eduard Čech Center, Prague
    The algebraic approach to Constraint Satisfaction Problem, recent progress
    Many decision problems in combinatorics, computer science, artificial intelligence, logic, etc. can be expressed as so called Constraint Satisfaction Problems (CSPs). For a fixed relational structure R, CSP(R) is the following decision problem:

    INPUT: A relational structure $S$ of the same signature as R

    OUTPUT: Is there a homomorphism from S into R?

    The central problem in this area is the Dichotomy Conjecture of Feder and Vardi stating that, for any relational structure R, CSP(R) is either solvable in polynomial time or NP-complete. I will talk about the universal algebraic approach to this problem and mention some recent developments.


    Andrzej Pezarski
    On-line clique covering of proper interval graphs
    Lech Duraj, Grzegorz Gutowski
    Optimal orientation on-line
    Bartosz Walczak
    Algorithmic meta-theorems and treewidth
    Algorithmic meta-theorems are algorithmic results that apply to whole families of combinatorial problems, instead of just specific problems. These families are usually defined in terms of logic and graph theory. We focus on the model checking problem, which is to decide whether a given graph satisfies a given formula. Some restrictions of this problem to specific classes of graphs (with bounded treewidth, excluded minors etc.) turn out to be fixed-parameter tractable (fpt). We define the combinatorial notions of tree decomposition, treewidth and local treewidth, and prove that MSO model checking on graphs with bounded treewidth and FO model checking on graphs with bounded local treewidth are fpt. The latter result uses Gaifman's Locality Theorem, which is one of the basic tools in finite model theory. The talks are based on a paper by Martin Grohe [4].
  • [1] B. Courcelle. Graph rewriting: An algebraic and logic approach. Handbook of Theoretical Computer Science, volume B, pp. 194-242, 1990.
  • [2] J. Flum, M. Grohe. Fixed-parameter tractability, definability, and model checking. SIAM Journal on Computing, 31(1):113-145, 2001.
  • [3] H. Gaifman. On local and non-local properties. Proceedings of the Herbrand Symposium, Logic Colloquium '81, pp. 105-135, 1982.
  • [4] M. Grohe. Logic, graphs, and algorithms. 2007.
  • 03.10.2007,
    Jan Jeżabek
    ICALP, LICS, LCC 2007
    Selected results and open problems from ICALP, LICS, LCC 2007 are presented  
    Mateusz Kostanek
    On k-server problem
    The on-line k-server problem is set on a metric space inhabited by k servers. Initially, each server is positioned at some point of the space. Over time, requests arrive for service at points of the space. Immediately after a request at some point q comes in, a server must be moved to q in order to serve the request. When a server moves it incurs a cost equal to the distance it covers. Our goal is to design an on-line algorithm which decides which server to move when a request arrives, so that any sequence of requests can be served with cost as small as possible. In this talk we will present the work function algorithm for the k-server problem and prove that it has competitive ratio at most 2k-1.
  • M. S. Manasse, L. A. McGeoch, and D. D. Sleator: Competitive algorithms for on-line problems
  • E. Koutsoupias, C. Papadimitriou: On the k-server conjecture
  • 06.06.2007,
    Josh Buresh-Oppenheim
    Simon Fraser University
    Formalizing Algorithmic Paradigms
    Since most algorithms that people use can intuitively be classified into large paradigms of algorithms such as greedy, dynamic programming, linear and semidefinite programming, local search, etc., it is natural to ask what problems can be computed by these paradigms. This question can be seen as an alternative to asking what problems can be computed by all, say, poly-time algorithms in that the restriction on the algorithms is more conceptual rather than complexity-theoretic. As we will illustrate, it is also a question of vital importance to algorithm designers. Of course, to ask a question about an algorithmic paradigm, you first have to formally define the paradigm. We offer one very natural model, pBT, which captures many algorithms generally considered to be dynamic programming or backtracking. We demonstrate upper and lower bounds in this model for problems such as interval scheduling and SAT. We also present a very powerful model for linear and semidefinite programming due to Lovász and Schrijver and show some strong lower bounds for SAT.
    Mateusz Kostanek
    On k-server problem
    Michał Staromiejski
    Elliptic curves
    Edyta Szymańska
    The complexity of perfect matchings in hypergraphs
    Given a k-uniform hypergraph H=(V,E) on n vertices, we define a perfect matching as a set of $\lfloor n/k\rfloor$ disjoint edges in E(H). From the algorithmic perspective, a few natural problems regarding this notion can be considered. One is a decision problem asking whether a given k-uniform hypergraph contains a perfect matching, which is NP-complete for k>2. In view of this fact, a question arises under which additional conditions on a k-uniform hypergraph there exists a polynomial time algorithm finding a perfect matching in it. We define the minimum collective degree $\delta_{k-1}(H)$ to be the largest integer d such that every (k-1)-element set of vertices of H belongs to at least d edges of H. In this talk we will present an algorithm which finds, in polynomial time, a perfect matching in a k-uniform hypergraph with minimum collective degree roughly n/2. On the negative side, we will prove that the problem of deciding whether a given k-uniform hypergraph H with $\delta_{k-1}(H)> c|V(H)|$ for c<1/k contains a perfect matching is NP-complete.
    Michał Staromiejski
    Elliptic curves
    Mateusz Juda
    Hypergraphs isomorphism problem
    Graph isomorphism (GI) is one of the few remaining problems in NP whose complexity status has not been resolved by classifying it as either NP-complete or solvable in P. Nevertheless, efficient (polynomial-time or even NC) algorithms for restricted versions of GI have been found over the last four decades. Depending on the graph class, the design and analysis of algorithms for GI use tools from various fields, such as combinatorics, algebra and logic. In this talk, we collect several complexity results on graph isomorphism testing and related algorithmic problems for restricted graph classes from the literature. Further, we provide some new complexity bounds (as well as easier proofs of some known results) and highlight some open questions.
  • J. Köbler, On Graph Isomorphism for Restricted Graph Classes
  • 04.04.2007,
    Jan Jeżabek
    Kamil Kloch, Piotr Micek, Grzegorz Matecki
    On-line chain partitioning of semi-orders
    Maciej Gazda
    Polar SAT and related graphs
  • Igor Zverovich, Olga Zverovich: "Polar SAT and related graphs"
  • 24.01.2007,
    Piotr Danilewski
    Hardness results for cake cutting
    Tomasz Jurkiewicz
    Towards a trichotomy for quantified H-coloring
    A very natural generalisation of graph coloring problems is defined in terms of graph homomorphism; the problem takes as input a graph G and accepts it if, and only if, there exists a homomorphism into a fixed graph H. This problem is known as the H-coloring problem and is tractable if H is bipartite and NP-complete otherwise. The quantified H-coloring problem is definable via two-player games and is tractable if H is bipartite; NP-complete if H is not bipartite and not connected; and PSPACE-complete if H is connected and contains a unique cycle which is of odd length. (Conjecture: the problem is PSPACE-complete if H is not bipartite and connected.)
  • B. Martin and F. Madelaine, Towards a trichotomy for quantified H-coloring
  • 10.01.2007,
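    The underlying decision problem is short to state in code: does G admit a homomorphism into H? A brute-force Python check over all vertex maps (exponential, as one expects given NP-hardness for non-bipartite H):

```python
from itertools import product

def h_colorable(g_vertices, g_edges, h_vertices, h_edges):
    """Brute-force test whether a homomorphism G -> H exists: try every
    map f of V(G) into V(H) and check that edges are sent to edges."""
    h_adj = {frozenset(e) for e in h_edges}
    for image in product(h_vertices, repeat=len(g_vertices)):
        f = dict(zip(g_vertices, image))
        if all(frozenset((f[u], f[v])) in h_adj for u, v in g_edges):
            return True
    return False
```

    For H = K2 this is exactly 2-colorability, matching the bipartite/tractable boundary in the abstract.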
    Arek Pawlik
    PageRank
    Wiktor Żelazny
    Recognizing Interval Bigraphs
    We introduce the class of interval bigraphs, bipartite graphs similar to interval graphs. We present an algorithm, given by Müller in 1993, that recognizes these graphs in polynomial time. A characterization of interval bigraphs in terms of their complement graphs, due to Hell and Huang (2003), will also be presented.
    Solutions to MWPZ 2006 problems
    Jacek Krzaczkowski
    Complexity of term equation problem, cntd.
    Tomasz Gorazd
    Complexity of term equation problem, cntd.
    Stefan Felsner TU Berlin
    Embeddings of Planar Graphs
    A graph is planar if it admits a crossing-free drawing in the plane. In the first instance, such a drawing can be anything but nice. I sketch approaches to obtain nice drawings. A particularly elegant method goes back to the work of W. Schnyder. He succeeded in producing straight-line embeddings of planar graphs on small grids. I show how Schnyder's ideas continue to produce new insights and results. In particular, it turns out that good drawings in the plane can be obtained via a detour through dimension three.  
    Tomasz Gorazd
    Complexity of term equation problem
    We study the computational complexity of the problem of satisfiability of equation between terms over a finite algebra (TERM-SAT). We describe many classes of algebras where the complexity of TERM-SAT is determined by the clone of term operations. We classify the complexity for algebras generating the maximal clones. Using this classification we describe a lot of algebras where TERM-SAT is NP-complete. We classify the situation for clones generated by an order or a permutation relation. We introduce the concept of semiaffine algebras and show polynomial time algorithms solving the satisfiability problem for them.  
    Hal Kierstead
    Arizona State University
    18:15 On-line Ramsey Theory
    Two players, Builder and Painter, play the following game on a fixed set of vertices V. In one round Builder presents an edge e linking two previously independent vertices of V and Painter paints e using one of c colors. Builder’s goal is to force Painter to create a monochromatic copy of a fixed graph F. If there are no other restrictions then Painter has no chance of avoiding F (by Ramsey’s theorem). But what if Builder is not allowed to construct a graph whose chromatic number exceeds that of F? We prove that even with this obstacle Builder wins this game for any number of colors. Our main tool is an auxiliary “Ramsey survival game”, which is interesting in its own right. (Joint work with Goran Konjevod)  
    Arkadiusz Pawlik
    Integer programming and counting lattice points in rational convex polyhedra
    It is possible to solve the linear programming problem in polynomial time, but if we require that the solution is integral, then the problem becomes NP-hard. However, as shown by Lenstra in 1983, if we fix the number of variables, then the problem is in P. I present a more recent approach which involves counting the solutions with generating functions.  
  • Alexander Barvinok, James E. Pommersheim, An Algorithmic Theory of Lattice Points in Polyhedra
  • 04.10.2006,
    Pawel Idziak
    Open problems
    Grzegorz Matecki
    On-line graph coloring on a bounded board
    We consider a version of the on-line graph coloring problem as a two-person game with some additional conditions for the players. The players are called Spoiler and Painter. Spoiler reveals a graph by adding or removing a node, but at each moment the total number of nodes is bounded. Painter must assign a color to every new node so that any two nodes connected by an edge receive different colors. He cannot change the colors of already colored nodes.
    The chromatic number of this game for a class of graphs $C$, denoted by $\chi_C(p)$, is the smallest number of colors that suffices for Painter to color any graph presented by Spoiler, where $p$ is the bound on the size of the graphs. We will show some lower and upper bounds on this number for various classes of graphs.  
    Bartosz Walczak
    Flows in skew-symmetric networks
    The maximum integer skew-symmetric flow problem (MSFP) generalizes both the maximum flow and maximum matching problems. The idea behind the solution is to find a good initial flow and then to augment the flow along so-called regular paths. The initial flow can be a slightly modified antisymmetrization of an ordinary maximum flow. Finding regular augmenting paths is based on Edmonds' "blossom" algorithm for finding a maximum matching. The resulting algorithm for MSFP is competitive with the fastest known algorithms for the maximum flow and maximum matching problems.  
  • A. V. Goldberg, A. V. Karzanov. "Maximum skew-symmetric flows and matchings". Mathematical Programming 100, 537-568 (2004)
  • A. V. Goldberg, A. V. Karzanov. "Path problems in skew-symmetric graphs". Combinatorica 16, 127-174 (1996) (preliminary version)
  • 24.05.2006,
    Jarosław Grytczuk,
    Zielona Góra
    Graph coloring games for daltonists
    Ann and Ben are coloring alternately the vertices of a graph G using a fixed set of colors, with Ann playing first. They both have to respect the rule of a proper coloring, that is, none of them is allowed to create a monochromatic edge at any moment of the game. Ann's goal is to color the whole graph successfully, in which case she is a winner. Ben's goal is of course different: he perfidiously tends to create a partial coloring that cannot be extended to the whole graph, without introducing new colors. In this case he is a winner.

    The game chromatic number of a graph G, denoted g(G), is the least number of colors guaranteeing a win for Ann. The main open problem is to determine how large g(G) can be for planar graphs. Curiously, the currently best strategy allows Ann to win (with 17 colors) even as a completely color-blind person! I will present this and other techniques from a recent treatise.

    Joint work with Tomasz Bartnicki, Hal Kierstead and Xuding Zhu  

    Lech Duraj
    Tadeusz Prochwicz
    Algorithms for four variants of the exact satisfiability problem
  • V. Dahllof, P. Jonsson and R. Beigel, Algorithms for four variants of the exact satisfiability problem, Theoretical Computer Science 320 (2004), 373-394.
  • 26.04.2006,
    Gabor Kun,
    Loránd Eötvös University, Budapest
    Forbidden patterns and homomorphism problems
    We say that two subclasses of NP are (computationally) equivalent if for every language in one class there is a polynomially equivalent one in the other class. A typical equivalence class is the class of k-colouring problems (k in N), it is always in P or NP-complete. NP turned out not to be equivalent to this class (unless P=NP): there is a problem in NP which is neither in P nor NP-complete.

    We show that some types of combinatorial problems, like edge colourings or graph decompositions, express the full computational power of the class NP. Hence these problem classes also contain some problems of "exotic" complexity.

    The first natural candidate that seems not to be equivalent to NP is the class called MMSNP (Monotone Monadic Strict NP) or Forbidden patterns problem. (A typical example for a language in the class: graphs with vertex set partitionable into two subsets without triangle.) This class is conjectured to contain only NP-complete and polynomial time solvable problems.

    We prove that the class MMSNP can be expressed in the simpler terminology of relational structure homomorphism problems (called Constraint Satisfaction Problems): such a language contains, for a fixed structure A, the relational structures that can be mapped homomorphically to A. The first such result was only a randomized equivalence. The proof of the deterministic equivalence uses some type of expander structures, a typical tool in derandomization.  

    Jarosław Duda
    Optimal coding by random algorithms
    Each point of the plane grid Z^2 is labelled by one bit: "0" or "1". Forbidding two 1's to be adjacent reduces the average information capacity (i.e., entropy) to 0.588 bit/point.
    The talk gives an intuitive background to symmetry, information theory and the statistical approach to combinatorial problems. We address and discuss the following problems:
    - what does a typical labeling look like?
    - how can these labelings be generated?
    - how can these labelings be used to encode information?
    - how can all this be used in practice?  
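    The 0.588 bit/point figure (the so-called hard-square entropy constant) can be reproduced numerically with a standard transfer-matrix computation; this sketch is my own illustration, not a method from the talk. The entropy per point is estimated from the ratio of dominant eigenvalues for strips of consecutive widths:

```python
from itertools import product
from math import log2

def independent_rows(width):
    """All 0/1 rows of the given width with no two horizontally adjacent 1s."""
    rows = []
    for bits in product((0, 1), repeat=width):
        if not any(a and b for a, b in zip(bits, bits[1:])):
            rows.append(int("".join(map(str, bits)), 2))
    return rows

def dominant_eigenvalue(width, iterations=100):
    """Largest eigenvalue of the hard-square transfer matrix, by power iteration.

    Rows a and b may be vertically adjacent iff a & b == 0
    (no two vertically adjacent 1s)."""
    rows = independent_rows(width)
    v = [1.0] * len(rows)
    lam = 1.0
    for _ in range(iterations):
        w = [sum(v[j] for j, b in enumerate(rows) if a & b == 0) for a in rows]
        lam = max(w)
        v = [x / lam for x in w]
    return lam

# Entropy per grid point: log2 of the ratio of consecutive strip eigenvalues,
# which converges quickly to about 0.588 bit/point.
entropy = log2(dominant_eigenvalue(10)) - log2(dominant_eigenvalue(9))
```

With strip widths 9 and 10 the estimate already agrees with 0.588 to about two decimal places.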
    Andrzej Soroczyński
    Ulam games
    Our investigation deals with a searching game. The game was proposed by Ulam, and that is why we call it the Ulam searching game with a fixed number of lies. In this game one player (whom we call Carole) thinks of a number from 1 to n. The second player (whom we call Paul) asks about subsets of {1,2,...,n}, and Carole replies whether the number she is thinking of is a member of Paul's subset. Carole is allowed to lie l times (a fixed number). Let L(l,N) be the minimum number of questions Paul must ask to win. The main results of the presented papers are:
    1. For each l there exists an M such that for all N >= M: L(l,2N) <= L(l,N) + 2.
    2. For each l there exists an M such that for all N >= M: L(l,3/2*N) <= L(l,N) + 1.  
  • C. Deppe, Strategies for the Renyi-Ulam Game with fixed number of lies, Theoret. Comput. Sci. 314 (2004), 45-55
  • J. Spencer, Ulam's searching game with fixed number of lies, Theoret. Comput. Sci. 95 (1992), 307-321
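    A useful benchmark for L(l,N) is the classical volume (sphere) bound: Paul cannot win in q questions unless N multiplied by the number of lie patterns, sum_{j<=l} C(q,j), fits into 2^q. A minimal sketch of that lower bound (my own illustration, not a result from the papers above):

```python
from math import comb

def volume_bound(lies, n):
    """Least q with n * sum_{j<=lies} C(q, j) <= 2**q.

    Every winning strategy for Paul needs at least this many questions;
    this is a lower bound on L(lies, n) and is not always attained.
    """
    q = 0
    while n * sum(comb(q, j) for j in range(lies + 1)) > 2 ** q:
        q += 1
    return q
```

For example, with no lies the bound reduces to ceil(log2 n) (binary search), and for n = 10^6 with one lie it gives 25 questions.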
  • 22.02.2006,
    Andrzej Pezarski
    On-line clique covering of proper interval graphs presented in a connected way
    Proper interval graphs are graphs that admit a representation by intervals of the real line in which no interval is contained in another. After an easy observation that every greedy algorithm has competitive ratio greater than 2, we consider all possible algorithms. Now the situation is harder: a proof that 8/5 is a lower bound in this setting will be presented.  
    Marcin Kozik
    2EXPTIME-complete membership problems in algebra
    We first construct a finite algebra generating a variety with a PSPACE-complete membership problem. Then we show another algebra with an exponentially growing gamma function. In the final construction we use both of the previously mentioned algebras to produce a finite algebra that can model a computation of a Turing machine on an exponentially long tape. This gives an example of a finite algebra with an EXPSPACE-hard membership problem (on the other hand, this problem is known to be in 2-EXPTIME).  
    Wojciech Jawor,
    University of California, Riverside
    Job Scheduling in Next Generation Computer Networks
    Two online job scheduling problems arising in next generation computer networks are discussed.

    In the first problem [3] the goal is to schedule n jobs on m identical machines, without preemption, so that the jobs complete in the order of release times and the maximum flow time is minimized. This problem arises in network systems with aggregated links, when it is required that packets complete their arrivals at the destination in the order of their arrivals at the receiver. This requirement is imposed by the IEEE 802.3 standard describing link aggregation in Local Area Networks. We present a deterministic algorithm Block with competitive ratio O(\sqrt{n/m}) and show a matching lower bound even for randomized algorithms.

    The second problem [1,2] is an online unit-job scheduling problem arising in networks supporting Quality of Service. Jobs are specified by release times, deadlines and nonnegative weights. The goal is to maximize the total weight of jobs that are scheduled by their deadlines. We show that there does not exist a deterministic algorithm with competitive ratio better than 1.618 (the golden ratio). We also give a randomized algorithm with competitive ratio 1.582, showing that randomized algorithms are provably better than deterministic algorithms for this problem.  

  • F.Y.L.Chin, M.Chrobak, S.P.Y.Fung, W.Jawor, J.Sgall and T.Tichy, Online Competitive Algorithms for Maximizing Weighted Throughput of Unit-Jobs
  • W.Jawor, M.Chrobak and Ch.Dürr, Competitive Analysis of Scheduling Algorithms for Aggregated Links
  • 14.12.2005,
    Maciej Żenczykowski
    Online interval coloring and variants
  • L. Epstein, M. Levy Online interval coloring and variants, Proc. of the 32nd ICALP (2005), 602-613
  • 07.12.2005,
    Problems from Programming Competitions in Poznan 2005
  • Contest home page:
  • 30.11.2005,
    Piotrek Micek
    The lower bound for on-line clique covering for K_s-free graphs
    Iwona Cieslik
    On-line coloring for graphs with forbidden subgraphs
    It is known that the on-line coloring problem for arbitrary graphs is not competitive. However, when we restrict attention to special families of graphs with certain forbidden induced subgraphs, the spoiler has his hands tied and the number of colors used by some on-line algorithms can be substantially reduced. We call these subgraphs forcing structures. In my work I try to classify the competitive functions for various families of graphs and to identify the forcing structures for on-line graph coloring. I was mostly looking at competitiveness for H-free graphs: K_s-free, K_{s,t}-free, C_4-free and P_5-free graphs, and also for perfect and k-chordal graphs.  
    Kamil Kloch
    EuroComb 2005 - Open Problems
    A few open problems from the recent EuroComb conference are presented. These include the L(p,1) labelling of graphs and excluded subposets in the Boolean lattice.  
    Lech Duraj
    Interval orders and dimension
    Each interval order can be embedded into interval orders of sufficiently large dimension. More precisely, if X is an interval order, there exists a number t = t(X), such that for every interval order Y of dimension at least t, Y contains a subposet isomorphic to X. This is proven by embedding interval orders into some fixed structure, called "a thicket", and then finding thickets in all orders of sufficiently large dimension.  
    Ralph McKenzie,
    Vanderbilt University, Nashville TN, USA
    Operations on finite structures
    Nobu-Yuki Suzuki,
    Shizuoka University, Japan
    Epistemic logic and game theory
    A brief overview of epistemic logics and their game-theoretical applications is given. This interdisciplinary field has many aspects arising from various research lines. In this talk, I give an outline of the recent developments of epistemic-logical approach to decision making process in game-theoretical situations.  
    Grzegorz Łukasik
    Mobile Robots Contests
    The Mobile Robots Contest organized by the Computer Science and Technology University is presented: the rules of the contest, the problems the students were solving, and interesting solutions that were implemented. The first three places were taken by teams from the Jagiellonian University.  
  • Contest home page:
  • 18.05.2005,
    Michael Sołtys,
    McMaster University,
    Hamilton, ON, Canada
    P vs NP
    The "P vs NP" problem is a central problem of theoretical computer science, and indeed of mathematics (it was named one of the seven Millennium Problems by the Clay Mathematics Institute, and there is a one million dollar prize for a solution). Despite thirty years of intense effort, we are not near a solution, and we do not have promising techniques to tackle this problem. For example, since diagonal arguments "relativize", they will probably not work in this case. Exponential lower bounds on Boolean circuits are also hopelessly difficult to obtain. De-randomizing turns out to be just as difficult as separating P and NP. Cook proposed an attack based on a program of finding lower bounds for stronger and stronger propositional proof systems, building a repertoire of techniques and lower bounds, and ultimately showing that no polynomially bounded propositional proof system exists. The consequence would be that NP is not equal to coNP, and thus P is not equal to NP. In this talk, I will concentrate on this line of attack.  
    Lech Palmowski
    Seventeen lines and one-hundred-and-one points
    A curious problem from additive number theory is investigated: Given two positive integers S, Q, does there exist a sequence of positive integers that add up to S and whose squares add up to Q? We show that this problem can be solved in time polynomially bounded in the logarithms of S and Q. As a consequence, also the following question can be answered in polynomial time: For given numbers n and m, do there exist n lines in the Euclidean plane with exactly m points of intersection?  
  • G. J. Woeginger, Seventeen lines and one-hundred-and-one points, Theoretical Computer Science 321 (2004), 415-421
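    The (S, Q) question is easy to experiment with for small values. The sketch below is a pseudo-polynomial dynamic program of my own, much weaker than the paper's algorithm, which runs in time polynomial in the logarithms of S and Q:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def representable(s, q):
    """Is there a sequence of positive integers with sum s and sum of squares q?

    Pseudo-polynomial dynamic programming over the first part chosen --
    fine for small s and q, unlike the polylog-time algorithm from the paper.
    """
    if s == 0:
        return q == 0
    return any(representable(s - i, q - i * i)
               for i in range(1, s + 1) if i * i <= q)
```

For example, (S, Q) = (17, 87) is representable via 8 + 4 + 2 + 1 + 1 + 1 = 17 with 64 + 16 + 4 + 1 + 1 + 1 = 87, while (2, 5) is not.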
  • 04.05.2005,
    Grzegorz Gutowski
    Computational aspects of the 2-dimension of partially ordered sets
    A well-known method to represent a partially ordered set P consists of associating with each element of P a subset of a fixed set S={1,...,k} such that the order relation coincides with subset inclusion. Given an order P, minimizing the size of the encoding, i.e. the cardinality of S, is however a difficult problem. The smallest such size is called the 2-dimension of P. The paper details a proof that computing the 2-dimension of a given poset is NP-complete. The reduction also proves the non-approximability of the problem. Later, the complexity of the 2-dimension for the class of trees is investigated. The authors present a 4-approximation algorithm for this class.  
  • M. Habib, L. Nourine, O. Raynaud and E. Thierry, Computational aspects of the 2-dimension of partially ordered sets, Theoretical Computer Science 312 (2004), 401-431
  • 20.04.2005,
    Grzegorz Herman
    The Complexity of Random Ordered Structure
    One of the ways of expressing the complexity of a structure is the minimal "depth" of quantifiers in a formula that describes it. The presented paper discusses this complexity for two types of structures. The authors prove that the complexity of a random bit string is O(log log n) with high probability, and the complexity of a random ordered graph is Theta(log* n) with high probability.  
  • J. H. Spencer, K. St. John, The Complexity of Random Ordered Structure
  • 23.03.2005,
    Mikołaj Zalewski
    On-line Ramsey Theory
    The Ramsey game is played on an unbounded set of vertices by two players, called Builder and Painter. In one move Builder introduces a new edge and Painter paints it red or blue. The goal of Builder is to force Painter to create a monochromatic copy of a fixed target graph H, while keeping the constructed graph in a prescribed class G. The main problem is to recognize the winner for a given pair H, G. In particular, the authors of the paper prove that Builder has a winning strategy for any k-colorable graph H in the game played on k-colorable graphs. Another class of graphs with this strange self-unavoidability property is the class of forests. They show that the class of outerplanar graphs does not have this property. The question of whether planar graphs are self-unavoidable is left open.  
  • J. A. Grytczuk, M. Haluszczak, H. A. Kierstead, On-line Ramsey Theory, The Electronic Journal of Combinatorics 11(1), 2004
  • 09.03.2005,
    Wiktor Żelazny
    Online Algorithms for Market Clearing
    Randomized on-line algorithms that maximize the profit and liquidity of a marketplace are presented. They work by finding a maximal set with a perfect matching in a subgraph of the bid history graph: before incoming intervals are applied, some of them (or some of the graph edges from incoming vertices) are removed according to random criteria. A proof that the expected profit produced by an optimal algorithm cannot be greater than the expected profit of the presented algorithm was also shown.  
  • A. Blum, T. Sandholm, M. Zinkevich, Online Algorithms for Market Clearing, manuscript (2002)
  • 01.03.2005,
    Wiktor Żelazny
    Online Algorithms for Market Clearing
    This talk presents an online algorithm that finds a maximal set W of vertices of an n-interval graph such that a perfect matching exists on W. The algorithm works on the interval representation of the graph, and new intervals must be introduced in a legal way (described in the previous talk) for it to work, but its competitive ratio equals 1. A proof of the algorithm's optimality was presented.  
  • A. Blum, T. Sandholm, M. Zinkevich, Online Algorithms for Market Clearing, manuscript (2002)
  • 23.02.2005,
    Jacek Krzaczkowski
    Counting solutions of equations over two-element algebras.
    I presented my results on counting solutions of equations and systems of equations over a fixed two-element algebra. It turns out that the computational complexity of these problems depends only on the clone of term operations of the algebra. I presented a classification of the computational complexity of the problems mentioned above.  
    Wiktor Żelazny
    Online Algorithms for Market Clearing
    An n-interval graph is created from an interval graph by painting its vertices with n colours and removing all edges between vertices of the same colour. This talk introduced n-interval graphs, as well as the way their interval representation can model the history of buy and sell bids in an exchange auction. The objectives of maximizing the marketplace's profit and liquidity are formulated as online problems, since new intervals are introduced every time a new bid appears. A legal way of introducing new intervals, corresponding to the appearance of new bids, is described.  
  • A. Blum, T. Sandholm, M. Zinkevich, Online Algorithms for Market Clearing, manuscript (2002)
  • 12.01.2005,
    Grzegorz Gronkowski
    A boolean function requiring 3n network size.
    The first part of the talk contains a simple proof that there exist functions with a nonlinear lower bound on their network complexity. The proof is based on a counting method. Afterwards I present the result of Norbert Blum, who defined an n-ary boolean function with a 3n lower bound. The proof shows that 3n-3 logic gates are necessary for computing this function.  
  • Norbert Blum, A boolean function requiring 3n network size.
  • 05.01.2005,
    T. Krawczyk,
    E. Szczypka
    Comparability graphs.
    A graph G=(V,E) is a comparability graph if there exists a partial order (V,<) such that there is an edge between vertices v and w iff either v < w or w < v. In our talk we presented a polynomial-time algorithm for deciding whether a given graph G=(V,E) is a comparability graph (the best known algorithm solving a similar problem, due to Spinrad and McConnell, has complexity O(n+m)). In our method we investigated relations that exist between neighborhood sets in comparability graphs.  
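    Equivalently, G is a comparability graph iff its edges can be oriented transitively. For very small graphs this can be checked by exhaustive search over orientations; a toy sketch of the definition, not the polynomial-time algorithm from the talk:

```python
from itertools import product

def is_comparability(edges):
    """Brute-force test whether a graph has a transitive orientation,
    i.e. whether it is a comparability graph. Exponential in the number
    of edges -- for small examples only.
    """
    edges = [tuple(e) for e in edges]
    for flips in product((False, True), repeat=len(edges)):
        arcs = {(v, u) if flip else (u, v)
                for (u, v), flip in zip(edges, flips)}
        # an orientation is transitive iff a->b and b->c imply the arc a->c
        if all((a, d) in arcs
               for (a, b) in arcs for (c, d) in arcs if b == c):
            return True
    return False
```

For example, the 4-cycle is a comparability graph (orient both edges at two opposite vertices outwards), while the 5-cycle, like every longer odd cycle, is not.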
    Marek Kwiatkowski
    Dr. Frankenstein's Approach to On-line Algorithms
    Let A1, ..., An be on-line algorithms for a problem P, competitive over subsets I1, ..., In of inputs, respectively. In this talk we show that under certain assumptions on P and I1, ..., In it is possible to give an on-line algorithm for P that is competitive over all possible inputs. Two solutions are given: one for deterministic and one for randomized algorithms.  
  • Yossi Azar, Andrei Z. Broder, Mark S. Manasse "Dr. Frankenstein's Approach to On-line Algorithms" (extended abstract)
  • 08.12.2004,
    Krzysztof Maczyński
    P-time algorithm for tree decomposition
    In my talk I presented a polynomial-time algorithm for deciding whether a given tree is arbitrarily vertex decomposable (AVD) in a limited sense, concerning no more than a constant number of components, and, if so, for constructing one of the possibly exponentially many such decompositions. I also assumed a constant bound on the maximum degree of the tree, but this condition was shown to be unnecessary by a slight modification of my algorithm proposed by Mikołaj Zalewski. Additionally, I detailed a construction of a maximally long greedy sequence over a set of colours of given cardinality. I characterized all sequences of this length, proved that no longer ones exist, and gave an explicit cubic function computing the length from the number of colours allowed.  
    Jan Jeżabek
    Graphs with high girth and high chromatic number
    This talk introduces a basic tool of the probabilistic method: the first moment method. The method is illustrated with an application to satisfiability problems. A simple theorem is presented stating that any instance of k-SAT with fewer than 2^k clauses is satisfiable. The first moment method is then used to prove that for all g, k >= 1 there exist graphs with no cycles of length at most g and with chromatic number greater than k.  
  • M.Molloy, The Probabilistic Method, in: M. Habib, C. McDiarmid, J. Ramirez-Alfonsin, B. Reed, Probabilistic Methods for Algorithmic Discrete Mathematics, Springer 1998
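    The first-moment argument behind the k-SAT theorem (a uniformly random assignment falsifies each k-clause with probability 2^{-k}, so fewer than 2^k clauses give expected unsatisfied count below 1) can be made constructive by the standard method of conditional expectations. A sketch of that derandomization, my own illustration rather than the talk's material:

```python
from fractions import Fraction

def expected_unsat(clauses, assignment):
    """Expected number of unsatisfied clauses when every unassigned variable
    is set uniformly at random. Literals are nonzero ints; negative = negated."""
    total = Fraction(0)
    for clause in clauses:
        p = Fraction(1)  # probability this clause ends up unsatisfied
        for lit in clause:
            var, want = abs(lit), lit > 0
            if var in assignment:
                if assignment[var] == want:   # literal already true
                    p = Fraction(0)
                    break
            else:
                p /= 2                        # unset literal is false w.p. 1/2
        total += p
    return total

def satisfy_by_conditional_expectation(clauses, nvars):
    """Fix variables one by one, always choosing the value that minimizes the
    conditional expectation. If each clause has k distinct variables and there
    are fewer than 2**k clauses, the initial expectation is below 1, the greedy
    choices never increase it, and the final assignment satisfies everything."""
    assignment = {}
    for var in range(1, nvars + 1):
        assignment[var] = min(
            (True, False),
            key=lambda b: expected_unsat(clauses, {**assignment, var: b}))
    return assignment
```

For instance, any 7 of the 8 possible sign patterns of a 3-clause on variables 1, 2, 3 form a satisfiable formula, and the greedy procedure finds a satisfying assignment deterministically.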
  • 24.11.2004,
    Piotr Micek
    Natural algorithm for Online Chain Covering of Upgrowing Interval Posets
    We have already proven that the competitive ratio for Online Chain Covering of Upgrowing Posets is equal to 2. In this talk I present a simple online algorithm which achieves this optimal competitive ratio.  
    Bartłomiej Bosek
    Lowerbound for Online Chain Partitioning of Upgrowing Interval Posets
    Bartek shows that there is no online algorithm for the chain covering problem of upgrowing posets that uses fewer than 2w-1 chains to cover a poset of width w.  
    Paweł Walter
    Online Bin Packing with two item sizes
    In the well-studied on-line bin packing problem (BPP) we are given a set of items and a sequence of their sizes and are required to pack them into a minimum number of unit-capacity bins. The problem of finding the competitive ratio in the general case is open. The analysed paper considers BPP restricted to the case of two distinct item sizes. My talk, based on this paper, consists of an algorithm solving this particular case with an asymptotic competitive ratio of 4/3 and a proof that 4/3 is also a valid lower bound here.  
  • G.Gutin, T.Jensen, A.Yeo, Optimal on-line bin packing with two item sizes, manuscript (2004)
  • 13.10.2004,
    Tomek Krawczyk
    NP-completeness of posets embedding into boolean lattice
    Let p < q, with p, q from N. A bipartite graph G=(X \cup Y, E) embeds into the lattice of subsets on levels p and q if there exist n from N and injections f: X --> {n \choose p}, g: Y --> {n \choose q} such that there is an edge between x from X and y from Y iff f(x) < g(y). In my talk I proved that the problem of deciding whether a given bipartite graph G=(X \cup Y, E) embeds into the lattice of subsets on levels p and q is NP-complete if p > 1 and is in P if p = 1.  
    Piotr Micek
    Open Problems from Workshop on On-Line Algorithms 2004