
Constructive Learning Modulo Theories

Stefano Teso TESO@DISI.UNITN.IT

DISI, University of Trento, Italy

Roberto Sebastiani ROBERTO.SEBASTIANI@UNITN.IT

DISI, University of Trento, Italy

Andrea Passerini PASSERINI@DISI.UNITN.IT

DISI, University of Trento, Italy

1. Introduction

Modelling problems containing a mixture of Boolean and numerical variables is a long-standing interest of Artificial Intelligence. However, performing inference and learning in hybrid domains is a particularly daunting task. The ability to model these kinds of domains is crucial in "learning to design" tasks, that is, learning applications where the goal is to learn from examples how to perform automatic de novo design of novel objects. Apart from hybrid Bayesian Networks, for which efficient inference is limited to conditional Gaussian distributions, there is relatively little previous work on hybrid methods. The few existing attempts (Goodman et al., 2008; Wang & Domingos, 2008; Närman et al., 2010; Gutmann et al., 2011; Choi & Amir, 2012; Islam et al., 2012) impose strong limitations on the type of constraints they can handle. Inference is typically run by approximate methods, based on variational approximations or sampling strategies. Exact inference, support for hard numeric (in addition to Boolean) constraints and the combination of diverse theories are out of the scope of these approaches. In order to overcome these limitations, we focused on the most recent advances in automated reasoning over hybrid domains. Researchers in automated reasoning and formal verification have developed logical languages and reasoning tools that allow for natively reasoning over mixtures of Boolean and numerical variables (or even more complex structures). These languages are grouped under the umbrella term of Satisfiability Modulo Theories (SMT) (Barrett et al., 2009). Each such language corresponds to a decidable fragment of First-Order Logic augmented with an additional background theory T, like linear arithmetic over the rationals (LRA) or over the integers (LIA). SMT is a decision problem, which consists in finding an assignment to both Boolean and theory-specific variables that makes an SMT formula true. Recently, researchers have leveraged SMT from decision to optimization. The most general framework is that of Optimization Modulo Theories (OMT) (Sebastiani & Tomasi, 2015), which consists in finding a model for a formula which minimizes the value of some (arithmetical) cost function defined over the variables in the formula.

(Preliminary work. Under review by the Constructive Machine Learning workshop @ ICML 2015. Do not distribute. An extended version of this work was accepted for publication at the Artificial Intelligence Journal.)
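To make the SMT/OMT setting concrete, the following toy example is a minimal sketch of ours, not part of the paper: it builds a small SMT(LRA) formula mixing a Boolean variable with linear rational constraints and then asks an OMT query for the model minimizing a linear cost. It uses the Z3 Python API as a stand-in for an OMT solver such as OPTIMATHSAT, and the formula and cost are purely illustrative.

```python
# Toy SMT(LRA)/OMT example (illustrative only; Z3 used as a stand-in OMT solver).
from z3 import Optimize, Real, Bool, Or, Implies, sat

x, y = Real('x'), Real('y')   # theory (rational) variables
a = Bool('a')                 # a Boolean variable

opt = Optimize()
# An SMT(LRA) formula mixing Boolean and linear rational constraints.
opt.add(Or(a, x + y >= 3), Implies(a, x <= 1), x >= 0, y >= 0)
# OMT: among all models of the formula, minimize a linear cost.
opt.minimize(x + 2 * y)
if opt.check() == sat:
    print(opt.model())        # the optimum here sets a = True, x = 0, y = 0
```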

In this paper we present Learning Modulo Theories (LMT), a max-margin approach for learning in hybrid domains based on Satisfiability Modulo Theories, which makes it possible to combine Boolean reasoning and optimization over continuous linear arithmetical constraints. The main idea is to combine the discriminative power of structured-output SVMs (Tsochantaridis et al., 2005) with the reasoning capabilities of SMT technology. Training structured-output SVMs requires a separation oracle for generating counter-examples and updating the parameters, while an inference oracle is required at prediction time to generate the highest-scoring candidate structure for a given input. Both tasks can be accomplished by a generalized Satisfiability Modulo Theories solver. We show the potential of LMT on an artificial layout synthesis scenario.

2. LMT as structured-output learning

Structured-output SVMs (Tsochantaridis et al., 2005) generalize max-margin methods to the prediction of output structures by learning a scoring function over joint input-output pairs f(I, O) = w^T ψ(I, O). Given an input I, the predicted output will be the one maximizing the scoring function:

$$O^* = \mathrm{argmax}_{O} \; f(I, O) \qquad\qquad (1)$$

and the problem boils down to finding efficient procedures for computing the maximum. Efficient exact procedures exist for some special cases.


In this paper we are interested in the case where inputs and outputs are combinations of discrete and continuous variables, and we leverage SMT techniques to perform efficient inference in this setting. Max-margin learning is performed by enforcing a margin separation between the score of the correct structure and that of any possible incorrect structure, possibly accounting for margin violations to be penalized in the objective function. A cutting plane (CP) algorithm deals with the exponential number of candidate structures by iteratively adding the most violated constraint for each example pair given the current scoring function, and by refining the function through solving the augmented quadratic problem with a standard SVM solver. The CP algorithm is generic, meaning that it can be adapted to any structured prediction problem as long as it is provided with: (i) a joint feature space representation ψ of input-output pairs (and consequently a scoring function f); (ii) an oracle to perform inference, i.e. to solve Equation (1); and (iii) an oracle to retrieve the most violated constraint of the QP, i.e. to solve the separation problem:

$$\mathrm{argmax}_{O'} \;\; \mathbf{w}^T \psi(I_i, O') + \Delta(I_i, O_i, O') \qquad\qquad (2)$$

where Δ(I_i, O_i, O') is a loss function quantifying the penalty incurred when predicting O' instead of the correct output O_i. In the following we detail how these components are provided within the LMT framework.
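The following Python sketch is our own illustration of the generic cutting-plane loop, not the authors' implementation: the feature map psi, the loss and the separation oracle are passed in as plain callables, and the working-set QP is re-solved with SciPy's SLSQP as a simple stand-in for a dedicated SVM solver. All names and defaults are ours.

```python
# Generic n-slack cutting-plane training loop for a structured-output SVM
# (sketch; oracles are supplied by the caller, in LMT via an OMT solver).
import numpy as np
from scipy.optimize import minimize

def cutting_plane_train(examples, psi, separation_oracle, loss,
                        C=1.0, epsilon=1e-3, max_iters=50):
    """examples: list of (I, O) pairs; psi(I, O) -> np.ndarray;
       separation_oracle(w, I, O) -> most violated output O';
       loss(I, O, O') -> float."""
    d = len(psi(*examples[0]))
    w = np.zeros(d)
    working_sets = [[] for _ in examples]   # violated outputs per example

    for _ in range(max_iters):
        added = 0
        for i, (I, O) in enumerate(examples):
            # Most violated output under the current w (an OMT call in LMT).
            O_bad = separation_oracle(w, I, O)
            violation = loss(I, O, O_bad) - w @ (psi(I, O) - psi(I, O_bad))
            slack = max([0.0] + [loss(I, O, Ob) - w @ (psi(I, O) - psi(I, Ob))
                                 for Ob in working_sets[i]])
            if violation > slack + epsilon:
                working_sets[i].append(O_bad)
                added += 1
        if added == 0:
            break                            # no sufficiently violated constraint left
        w = _refit(examples, psi, loss, working_sets, C, d)
    return w

def _refit(examples, psi, loss, working_sets, C, d):
    # n-slack primal QP over the current working set, solved with SLSQP
    # as a stand-in for a dedicated SVM solver.
    n = len(examples)

    def objective(x):
        w, xi = x[:d], x[d:]
        return 0.5 * w @ w + C * xi.sum()

    constraints = [{'type': 'ineq', 'fun': lambda x: x[d:]}]   # slacks >= 0
    for i, (I, O) in enumerate(examples):
        for O_bad in working_sets[i]:
            dpsi = psi(I, O) - psi(I, O_bad)
            lss = loss(I, O, O_bad)
            # Margin constraint: w . dpsi >= loss - xi_i.
            constraints.append({'type': 'ineq',
                                'fun': lambda x, dpsi=dpsi, lss=lss, i=i:
                                       x[:d] @ dpsi - lss + x[d + i]})
    res = minimize(objective, np.zeros(d + n),
                   constraints=constraints, method='SLSQP')
    return res.x[:d]
```

In LMT the separation_oracle callable is implemented by the OMT solver, as described next.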

Input-output feature map. Recall that in our setting each object (I, O) is represented as a set of Boolean and rational variables:

$$(I, O) \in \underbrace{\{\top, \bot\} \times \cdots \times \{\top, \bot\}}_{\text{Boolean part}} \times \underbrace{\mathbb{Q} \times \cdots \times \mathbb{Q}}_{\text{rational part}}$$

We indicate Boolean variables using predicates such as touching(i, j), and write rational variables as lower-case letters, e.g. cost, distance, x, y. Features are represented in terms of (soft) constraints {ϕ_k}, k = 1, ..., m, each constraint ϕ_k being either a Boolean- or rational-valued function of the object (I, O). These constraints are constructed using the background knowledge available for the domain. For each Boolean-valued constraint ϕ_k, we denote its indicator function as 1_k(I, O), which evaluates to 1 if the constraint is satisfied and to -1 otherwise. Similarly, we refer to the cost of a rational-valued constraint ϕ_k as c_k(I, O) ∈ Q. The feature vector ψ(I, O) is obtained by concatenating the indicator and cost functions of the Boolean and rational constraints, respectively.
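A minimal sketch of how such a feature vector could be assembled from Boolean and rational constraints; the helper make_psi and its callables are illustrative names of ours, not part of the paper's code.

```python
# Build psi(I, O): Boolean constraints contribute +1/-1 indicators,
# rational-valued constraints contribute their cost (sketch only).
import numpy as np

def make_psi(bool_constraints, cost_functions):
    """bool_constraints: list of callables (I, O) -> bool;
       cost_functions:  list of callables (I, O) -> float."""
    def psi(I, O):
        indicators = [1.0 if phi(I, O) else -1.0 for phi in bool_constraints]
        costs = [c(I, O) for c in cost_functions]
        return np.array(indicators + costs)
    return psi
```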

Inference oracle. Given the feature vector ψ(I, O), the scoring function f(I, O) is a linear combination of indicator and cost functions. Since ψ can be expressed in terms of SMT(LRA) formulas, the resulting maximization problem can be readily cast as an OMT(LRA) problem, and inference is performed by an OMT solver (we use the OPTIMATHSAT solver). Given that OMT solvers are conceived to minimize cost functions rather than maximize scores, we actually run the solver on the negated scoring function.

Separation oracle. The separation problem amounts to maximizing the sum of the scoring function and a loss function over output pairs (see Equation (2)). We observe that by picking a loss function expressible as an OMT(LRA) problem, we can readily use the same OMT solver used for inference to also solve the separation problem. This can be achieved by selecting a loss function such as the Hamming loss in feature space:

$$\Delta(I, O, O') := \lVert \psi(I, O) - \psi(I, O') \rVert_1$$

This loss function is piecewise-linear, and as such satisfies the desideratum. While this is the loss which was used in all experiments, LMT can work with any loss function that can be encoded as an SMT formula.
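The two oracles can then be sketched as OMT(LRA) queries. The snippet below is an illustration of ours using Z3's Optimize API as a stand-in for OPTIMATHSAT: phi_terms is assumed to be a list of LRA terms over the output variables (indicator terms evaluating to +1/-1, cost terms to rationals), psi_correct a list of rational constants giving ψ of the correct output, and the absolute value needed for the L1 loss is encoded as an if-then-else, keeping everything piecewise-linear.

```python
# Inference and separation oracles cast as OMT(LRA) problems
# (sketch; Z3's Optimize stands in for the OPTIMATHSAT solver).
from z3 import Optimize, If, Sum, sat

def Abs(e):
    # |e| as a piecewise-linear LRA term, needed for the L1 loss.
    return If(e >= 0, e, -e)

def infer(w, hard_rules, phi_terms, out_vars):
    """hard_rules: list of Z3 Boolean constraints; out_vars: output variables."""
    opt = Optimize()
    opt.add(*hard_rules)
    score = Sum([wk * t for wk, t in zip(w, phi_terms)])
    # OMT solvers minimize costs, so we minimize the negated score.
    opt.minimize(-score)
    if opt.check() == sat:
        m = opt.model()
        return {v: m.eval(v, model_completion=True) for v in out_vars}
    return None

def separate(w, hard_rules, phi_terms, psi_correct, out_vars):
    """Most violated output: argmax over O' of score(O') plus the
       L1 loss in feature space w.r.t. the correct feature vector."""
    opt = Optimize()
    opt.add(*hard_rules)
    score = Sum([wk * t for wk, t in zip(w, phi_terms)])
    loss_term = Sum([Abs(t - pk) for t, pk in zip(phi_terms, psi_correct)])
    opt.minimize(-(score + loss_term))
    if opt.check() == sat:
        m = opt.model()
        return {v: m.eval(v, model_completion=True) for v in out_vars}
    return None
```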

3. A Stairway to Heaven

We show the potential of the LMT framework on a toy constructive problem which consists in learning how to assemble different kinds of stairways from examples. A stairway is made up of a collection of m blocks (rectangles) in a 2D unit-sized bounding box. Figure 1 shows examples of different types of stairways (a-c), varying in terms of orientation and proportions between block heights and widths, and a configuration which is not a stairway (d).


Figure 1. (a) A left ladder 2-stairway. (b) A right ladder 2-stairway. (c) A right pillars 2-stairway. (d) Not a 2-stairway.

Each block is identified by a tuple (x, y, dx, dy), consisting of its bottom-left coordinates, width and height. An output O is given by the set of tuples identifying the blocks. The input is assumed to be empty in the following, but non-empty inputs can be used to model, for instance, partially observed scenes.

Learning amounts to assigning appropriate weights to a set of soft constraints in order to maximize the score of stairways of the required type, with respect to non-stairways and stairways of other types.


Inference amounts to generating configurations (i.e. variable assignments for all blocks) with maximal score, and can be easily encoded as an OMT(LRA) problem. As a first step, we define a set of hard rules to constrain the space of admissible block assignments. We require that all blocks fall within the bounding box, and that blocks do not overlap:

$$\forall i \;\; 0 \le x_i, dx_i, y_i, dy_i \le 1$$
$$\forall i \;\; 0 \le (x_i + dx_i) \le 1 \,\wedge\, 0 \le (y_i + dy_i) \le 1$$
$$\forall i \ne j \;\; (x_i + dx_i \le x_j) \vee (x_j + dx_j \le x_i) \vee (y_i + dy_i \le y_j) \vee (y_j + dy_j \le y_i)$$
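These hard rules translate directly into an SMT(LRA) encoding. The sketch below is our own illustration using the Z3 Python API (the paper's experiments use OPTIMATHSAT); the helper names are ours.

```python
# Hard rules for the stairway layout: unit bounding box and no overlap
# between blocks (sketch; Z3 used for illustration).
from z3 import Solver, Real, Or

def block_vars(m):
    # One (x, y, dx, dy) tuple of rational variables per block.
    return [(Real(f'x_{i}'), Real(f'y_{i}'), Real(f'dx_{i}'), Real(f'dy_{i}'))
            for i in range(m)]

def hard_rules(blocks):
    rules = []
    for (x, y, dx, dy) in blocks:
        # Every block lies inside the unit bounding box.
        rules += [x >= 0, x <= 1, y >= 0, y <= 1,
                  dx >= 0, dx <= 1, dy >= 0, dy <= 1,
                  x + dx <= 1, y + dy <= 1]
    for i in range(len(blocks)):
        for j in range(i + 1, len(blocks)):
            xi, yi, dxi, dyi = blocks[i]
            xj, yj, dxj, dyj = blocks[j]
            # No two blocks overlap.
            rules.append(Or(xi + dxi <= xj, xj + dxj <= xi,
                            yi + dyi <= yj, yj + dyj <= yi))
    return rules

blocks = block_vars(3)
s = Solver()
s.add(*hard_rules(blocks))
print(s.check())   # sat: an admissible 3-block layout exists
```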

Furthermore, we require (without loss of generality) blocks to be ordered from left to right, ∀i. x_i ≤ x_{i+1}. Note that stairway conditions (apart from non-overlap between blocks) are not included in these hard rules and will be implicitly modelled as soft constraints. To this aim we introduce a set of useful predicates. We use four predicates to encode the fact that a block i may touch one of the four corners of the bounding box, e.g.:

$$\mathrm{bottom\_right}(i) := ((x_i + dx_i) = 1) \wedge (y_i = 0)$$

We also define predicates to describe the relative positions of two blocks i and j, such as left(i, j):

$$\mathrm{left}(i, j) := ((x_i + dx_i) = x_j) \wedge \big((y_j \le y_i \le y_j + dy_j) \vee (y_j \le y_i + dy_i \le y_j + dy_j)\big)$$

which encodes the fact that block i is touching block j from the left. Similarly, we also define below(i, j) and over(i, j). Finally, we combine the above predicates to define the concept of a step:

$$\mathrm{left\_step}(i, j) := \big(\mathrm{left}(i, j) \wedge (y_i + dy_i) > (y_j + dy_j)\big) \vee \big(\mathrm{over}(i, j) \wedge (x_i + dx_i) < (x_j + dx_j)\big)$$

We define right_step(i, j) in the same manner. This background knowledge makes it possible to encode the property of being a stairway of a certain type as a conjunction of predicates. However, our inference procedure does not have access to this knowledge. We rather encode an appropriate set of soft rules (costs) which, along with the associated weights, should bias the optimization towards block assignments that form a stairway of the correct type. Our cost model is based on the observation that it is possible to discriminate between the different stairway types using only four factors: minimum and maximum step size, and the amount of horizontal and vertical material. For instance, in the cost we account for both the maximum step height over all left steps (a good stairway should not have too high steps) and the minimum step width over all right steps (good stairways should have sufficiently large steps):

$$\mathrm{maxsh}_l = m \times \max_{i \in [1, m-1]} \begin{cases} (y_i + dy_i) - (y_{i+1} + dy_{i+1}) & \text{if } i, i+1 \text{ form a left step} \\ 1 & \text{otherwise} \end{cases}$$

$$\mathrm{minsw}_r = m \times \min_{i \in [1, m-1]} \begin{cases} (x_{i+1} + dx_{i+1}) - (x_i + dx_i) & \text{if } i, i+1 \text{ form a right step} \\ 0 & \text{otherwise} \end{cases}$$

The value of these costs depends on whether a pair of blocks actually forms a left step, a right step, or no step at all. Finally, we write the average amount of vertical material as $\mathrm{vmat} = \frac{1}{m} \sum_i dy_i$. All the other costs can be written similarly. Putting all the pieces together, the complete cost is:

$$\mathrm{cost} := (\mathrm{maxsh}_l, \mathrm{minsh}_l, \mathrm{maxsh}_r, \mathrm{minsh}_r, \mathrm{maxsw}_l, \mathrm{minsw}_l, \mathrm{maxsw}_r, \mathrm{minsw}_r, \mathrm{vmat}, \mathrm{hmat}) \cdot \mathbf{w}$$
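As an illustration of how one such cost can be expressed as a piecewise-linear LRA term, the sketch below (ours, not the paper's code) builds maxsh_l with if-then-else and a max chain; left_step is assumed to be a callable producing the Boolean LRA predicate defined above, and the function name is ours.

```python
# The soft cost maxsh_l as a piecewise-linear LRA term (sketch).
from functools import reduce
from z3 import If

def zmax(a, b):
    # Maximum of two LRA terms encoded as an if-then-else.
    return If(a >= b, a, b)

def maxsh_left(blocks, left_step):
    """blocks: list of (x, y, dx, dy) Z3 variables, m >= 2;
       left_step(i, j): Z3 Boolean term for the left-step predicate."""
    m = len(blocks)
    terms = []
    for i in range(m - 1):
        (_, yi, _, dyi) = blocks[i]
        (_, yj, _, dyj) = blocks[i + 1]
        # Step height if blocks i, i+1 form a left step, 1 otherwise.
        terms.append(If(left_step(i, i + 1), (yi + dyi) - (yj + dyj), 1))
    return m * reduce(zmax, terms)
```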

The actual weights are learned, allowing the learnt model to reproduce whichever stairway type is present in the training data.

To test the stairway scenario, we generated "perfect" stairways of 2 to 6 blocks for each stairway type to be used as training instances. We then learned a model using all training instances up to a fixed number of blocks (e.g. from 2 to 4) and asked the learnt models to generate configurations with a larger number of blocks (up to 10) than those in the training set. The results can be found in Figure 2.

The experiment shows that LMT is able to solve the stairway construction problem, and can learn appropriate models for all stairway types, as expected. Some imperfections can be seen when the training set is too small (e.g., only two training examples; first row of each table), but already with four training examples the model is able to generate perfect 10-block stairways of the given type, for all types.

4. Conclusions

Albeit simple, the stairway application showcases the ability of LMT to handle learning in hybrid Boolean-numerical domains characterized by complex combinations of hard and soft constraints, whereas other formalisms are not suited for the task.

(The stairway problem can be easily encoded in the Church probabilistic programming language. However, the sampling strategies used for inference are not conceived for performing optimization with hard continuous constraints. Even the simple task of generating two blocks, conditioned on the fact that they form a step, is prohibitively expensive. A more comprehensive discussion of alternative approaches and their limitations can be found in the journal version of this work.)


[Figure 2 shows, for each stairway type ("ladder", "vertical pillar", "horizontal pillar", each with left and right variants), a grid of generated layouts: rows correspond to the training range (2 to 3, 2 to 4, 2 to 5, 2 to 6 blocks) and columns to the number of output blocks (3, 6, 7, 8, 9, 10).]

Figure 2. Results for the stairway construction problem. From top to bottom: results for the ladder, horizontal pillar, and vertical pillar cases. The number of blocks in the training and generated images is reported in rows and columns, respectively.

The application is a simple instance of a layout problem, where the task is to find an optimal layout subject to a set of constraints. Automated or interactive layout synthesis has a broad range of potential applications, including urban pattern layout (Yang et al., 2013), decorative mosaics (Hausner, 2001) and furniture arrangement (Yu et al., 2011). Note that many spatial constraints can be encoded in terms of relationships between blocks (Yu et al., 2011). Existing approaches typically design an energy function to be minimized by stochastic search. Our approach suggests how to automatically identify the relevant constraints and their respective weights, and can accommodate hard constraints and exact search. This is especially relevant for water-tight layouts (Peng et al., 2014), where the whole space needs to be filled (i.e. no gaps or overlaps) by deforming basic elements from a predetermined set of templates (as in residential building layout (Merrell et al., 2010)). Generally speaking, the LMT framework makes it possible to introduce a learning stage in all application domains where SMT and OMT approaches have shown their potential, ranging, e.g., from the engineering of chemical reactions (Fagerberg et al., 2012) to synthetic biology (Yordanov et al., 2013).

References

Barrett, C., Sebastiani, R., Seshia, S. A., and Tinelli, C. Satisfiability modulo theories. In Handbook of Satisfiability, chapter 26, pp. 825-885. IOS Press, 2009.

Choi, Jaesik and Amir, Eyal. Lifted relational variational inference. In UAI'12, pp. 196-206, 2012.

Fagerberg, R., Flamm, C., Merkle, D., and Peters, P. Exploring chemistry using SMT. In CP'12, pp. 900-915, 2012.

Goodman, Noah D., Mansinghka, Vikash K., Roy, Daniel M., Bonawitz, Keith, and Tenenbaum, Joshua B. Church: a language for generative models. In UAI, pp. 220-229, 2008.

Gutmann, B., Jaeger, M., and De Raedt, L. Extending ProbLog with continuous distributions. In Inductive Logic Programming, volume 6489, pp. 76-91, 2011.

Hausner, Alejo. Simulating decorative mosaics. In SIGGRAPH '01, pp. 573-580, 2001.

Islam, M. A., Ramakrishnan, C. R., and Ramakrishnan, I. V. Inference in probabilistic logic programs with continuous random variables. Theory Pract. Log. Program., 12(4-5):505-523, September 2012. ISSN 1471-0684.

Merrell, Paul, Schkufza, Eric, and Koltun, Vladlen. Computer-generated residential building layouts. ACM Trans. Graph., 29(6):181:1-181:12, December 2010.

Närman, P., Buschle, M., König, J., and Johnson, P. Hybrid probabilistic relational models for system quality analysis. In EDOC, pp. 57-66. IEEE Computer Society, 2010.

Peng, Chi-Han, Yang, Yong-Liang, and Wonka, Peter. Computing layouts with deformable templates. ACM Trans. Graph., 33(4):99:1-99:11, July 2014.

Sebastiani, Roberto and Tomasi, Silvia. Optimization Modulo Theories with Linear Rational Costs. ACM Transactions on Computational Logics, 16, 2015.

Tsochantaridis, Ioannis, Joachims, Thorsten, Hofmann, Thomas, and Altun, Yasemin. Large margin methods for structured and interdependent output variables. J. Mach. Learn. Res., 6:1453-1484, December 2005.

Wang, Jue and Domingos, Pedro. Hybrid Markov logic networks. In AAAI'08, pp. 1106-1111, 2008.

Yang, Yong-Liang, Wang, Jun, Vouga, Etienne, and Wonka, Peter. Urban pattern: Layout design by hierarchical domain splitting. ACM Trans. Graph., 32(6):181:1-181:12, November 2013.

Yordanov, Boyan, Wintersteiger, Christoph M., Hamadi, Youssef, and Kugler, Hillel. SMT-based analysis of biological computation. In NASA Formal Methods Symposium 2013, pp. 78-92, May 2013.

Yu, Lap-Fai, Yeung, Sai-Kit, Tang, Chi-Keung, Terzopoulos, Demetri, Chan, Tony F., and Osher, Stanley J. Make it home: Automatic optimization of furniture arrangement. ACM Trans. Graph., 30(4):86:1-86:12, July 2011.
