

Return-Path: <ml-connectionists-request@Q.CS.CMU.EDU>
Received: from Q.CS.CMU.EDU by B.GP.CS.CMU.EDU id aa26317; 1 Jun 93 21:30:00 EDT
Received: from Q.CS.CMU.EDU by q.cs.CMU.EDU id aa02118; 1 Jun 93 20:47:43 EDT
Received: from DST.BOLTZ.CS.CMU.EDU by Q.CS.CMU.EDU id aa02115;
          1 Jun 93 20:21:56 EDT
Received: from DST.BOLTZ.CS.CMU.EDU by DST.BOLTZ.CS.CMU.EDU id aa23285;
          1 Jun 93 20:21:20 EDT
Received: from EDRC.CMU.EDU by B.GP.CS.CMU.EDU id aa25831; 1 Jun 93 20:12:43 EDT
Received: from watson.ibm.com by EDRC.CMU.EDU id aa19262; 1 Jun 93 20:12:28 EDT
Received: from YKTVMH by watson.ibm.com (IBM VM SMTP V2R3) with BSMTP id 6749;
   Tue, 01 Jun 93 20:07:39 EDT
Date: Tue, 1 Jun 93 20:07:38 EDT
From: Gerald Tesauro <tesauro@watson.ibm.com>
To: connectionists@cs.cmu.edu
Subject: TD-Gammon paper available in neuroprose

The following paper, which has been accepted for publication
in Neural Computation, has been placed in the neuroprose
archive at Ohio State. Instructions for retrieving the paper
by anonymous ftp are appended below.

---------------------------------------------------------------
   TD-Gammon, A Self-Teaching Backgammon Program,
          Achieves Master-Level Play

              Gerald Tesauro
     IBM Thomas J. Watson Research Center
               P. O. Box 704
         Yorktown Heights, NY 10598
          (tesauro@watson.ibm.com)

Abstract:
TD-Gammon is a neural network that is able to teach
itself to play backgammon solely by playing against
itself and learning from the results, based on the
TD(lambda) reinforcement learning algorithm (Sutton, 1988).
Despite starting from random initial weights (and hence
random initial strategy), TD-Gammon achieves a surprisingly
strong level of play.  With zero knowledge built in at the
start of learning (i.e. given only a ``raw'' description
of the board state), the network learns to play at a strong
intermediate level.  Furthermore, when a set of hand-crafted
features is added to the network's input representation, the
result is a truly staggering level of performance:
the latest version of TD-Gammon is now estimated to
play at a strong master level that is extremely close to the
world's best human players.
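
For readers unfamiliar with TD(lambda), here is a minimal tabular
sketch of the value update the abstract refers to (Sutton, 1988).
TD-Gammon itself trains a neural network by self-play; this sketch
shows only the core update with eligibility traces, and the function
name, parameter values, and terminal-reward-only setting are
illustrative assumptions, not taken from the paper.

     import numpy as np

     def td_lambda_episode(V, states, outcome, alpha=0.1, lam=0.7):
         # One episode of tabular TD(lambda), undiscounted, with
         # accumulating eligibility traces.  V is a 1-D array of value
         # estimates (updated in place); states is the sequence of
         # visited state indices; outcome is the terminal result
         # (e.g. 1.0 for a win, 0.0 for a loss).
         e = np.zeros_like(V)
         for t in range(len(states) - 1):
             s, s_next = states[t], states[t + 1]
             delta = V[s_next] - V[s]   # TD error between predictions
             e[s] += 1.0
             V += alpha * delta * e     # credit all recently visited states
             e *= lam                   # older states get less credit
         # Final step: pull the last prediction toward the true outcome.
         e[states[-1]] += 1.0
         V += alpha * (outcome - V[states[-1]]) * e
         return V
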
---------------------------------------------------------------
FTP INSTRUCTIONS

     unix% ftp archive.cis.ohio-state.edu (or 128.146.8.52)
     Name: anonymous
     Password: (use your e-mail address)
     ftp> cd pub/neuroprose
     ftp> binary
     ftp> get tesauro.tdgammon.ps.Z
     ftp> bye
     unix% uncompress tesauro.tdgammon.ps
     unix% lpr tesauro.tdgammon.ps

Return-Path: <ml-connectionists-request@Q.CS.CMU.EDU>
Received: from Q.CS.CMU.EDU by B.GP.CS.CMU.EDU id aa08334; 2 Jun 93 21:01:59 EDT
Received: from Q.CS.CMU.EDU by Q.CS.CMU.EDU id aa05705; 2 Jun 93 20:30:12 EDT
Received: from DST.BOLTZ.CS.CMU.EDU by Q.CS.CMU.EDU id aa05702;
          2 Jun 93 20:04:01 EDT
Received: from DST.BOLTZ.CS.CMU.EDU by DST.BOLTZ.CS.CMU.EDU id aa26951;
          2 Jun 93 20:03:11 EDT
Received: from CS.CMU.EDU by B.GP.CS.CMU.EDU id aa27059; 1 Jun 93 23:56:54 EDT
Received: from wattle.qut.edu.au by CS.CMU.EDU id aa12527; 1 Jun 93 23:55:46 EDT
Received: from fitmail.fit.qut.edu.au by qut.edu.au; Wed, 2 Jun 93 13:55 +1000
Received: by fitmail.fit.qut.edu.au (16.8/16.2) id AA09064; Wed, 2 Jun 93
 13:52:47 +1000
Date: Wed, 2 Jun 93 13:52:47 +1000
From: Joachim Diederich <joachim@fitmail.fit.qut.edu.au>
Subject: Brisbane Neural Network Workshop
To: connectionists@cs.cmu.edu
Cc: joachim@fitmail.fit.qut.edu.au
Message-id: <2929F4367ADF20D062@qut.edu.au>
X-Envelope-to: connectionists@cs.cmu.edu


           First Brisbane Neural Network Workshop
           --------------------------------------

            Queensland University of Technology
                Brisbane Q 4001, AUSTRALIA
               Gardens Point Campus, ITE 303
                        4 June 1993

The first Brisbane Neural Network Workshop  is  intended  to
bring together those interested in neurocomputing and neural
network applications.  The objective of the workshop  is  to
provide   a   discussion   platform   for   researchers  and
practitioners interested in theoretical and applied  aspects
of  neurocomputing.  The  workshop  should be of interest to
computer scientists and engineers, as well as to biologists,
cognitive   scientists   and   others   interested   in  the
application of neural networks.

This is the first of a series of workshops and seminars with
the  objective  of  enhancing  collaboration  between neural
network  researchers  and  practitioners  in  Queensland.  A
second workshop is planned for the end of July.

The First Brisbane Neural Network Workshop will be  held  at
Queensland  University  of  Technology, Gardens Point Campus
(ITE 303) on June 4, 1993 from 8:00am to 6:00pm.


 Programme

 8:00-8:15
 Welcome
 Joachim Diederich, QUT-FIT-CS Neurocomputing

 8:15-8:45
 Janet Wiles, University of Queensland,
 Departments of Computer Science and Psychology
 Representations in hidden unit space

 8:45-9:15
 Paul Bakker, University of Queensland,
 Departments of Computer Science and Psychology
 Examining Learning Dynamics with the Hyperplane Animator

 9:15-9:45
 Simon Dennis, University of Queensland
 Department of Computer Science
 Introducing Learning into Models of Human Memory

 9:45-10:15
 Steven Phillips, University of Queensland
 Department of Computer Science
 Systematicity and Feedforward Networks: Exponential Generalizations
 from Polynomial Examples

 10:15-10:45 Coffee Break

 10:45-11:15
 Joachim Diederich, QUT-FIT-CS Neurocomputing
 Cows,  Bulls  &  Tarzan:  Preliminary  results  on   animal
 breeding advice using neural networks

 11:15-11:45
 Joaquin Sitte, QUT-FIT-CS Neurocomputing
 Learning control in simple dynamical systems

 11:45-12:15
 Shlomo Geva, QUT-FIT-CS Neurocomputing
 Constrained gradient descent

 12:15-12:45
 Ray Lister, University of Queensland
 Department of Electrical Engineering
 On Seeing the World in a Grain of Sand: Hidden Unit Self-Organization,
 and Super Criticality

 12:45-2:00 Lunch Break

 2:00-2:30
 David  Abramson,  Griffith University,
 School  of  Computing   and   Information Technology
 High Performance Computation for  Simulated  Annealing  and
 Genetic Algorithms

 2:30-3:00
 John D. Pettigrew, University of Queensland, Vision,  Touch
 & Hearing Research Centre
 The owl & the pussycat: comparative study of the networks
 underlying binocular vision.

 3:00-3:30
 Tom Downs/Ah Chung Tsoi, University of Queensland,
 Department of Electrical Engineering
 Directions of research in the UQ EE department

 3:30-4:00
 David Lovell, University of Queensland,
 Department of Electrical Engineering
 An improved version of the neocognitron

 4:00-4:30 Coffee Break

 4:30-5:00
 Ron Ganier, University of Queensland,
 Department of Electrical Engineering
 Generalization in artificial neural networks

 5:00-5:30
 Paul Murtagh, University of Queensland,
 Department of Electrical Engineering
 Fault tolerance  and  VLSI  design  for  artificial  neural
 networks

 5:30-6:00
 Robert Young, Queensland Department of  Primary  Industries
 QDPI Neural Network Applications


Enquiries should be sent to

Professor Joachim Diederich
Neurocomputing Research Concentration Area
School of Computing Science
Queensland University of Technology
GPO Box 2434
Brisbane Q 4001
Phone: (07) 864-2143
Fax: (07) 864-1801
Email: joachim@fitmail.fit.qut.edu.au

Return-Path: <ml-connectionists-request@Q.CS.CMU.EDU>
Received: from Q.CS.CMU.EDU by B.GP.CS.CMU.EDU id aa17395; 3 Jun 93 18:33:04 EDT
Received: from Q.CS.CMU.EDU by Q.CS.CMU.EDU id aa09545; 3 Jun 93 17:53:21 EDT
Received: from DST.BOLTZ.CS.CMU.EDU by Q.CS.CMU.EDU id aa09541;
          3 Jun 93 17:32:33 EDT
Received: from DST.BOLTZ.CS.CMU.EDU by DST.BOLTZ.CS.CMU.EDU id aa00236;
          3 Jun 93 17:32:09 EDT
Received: from CS.CMU.EDU by B.GP.CS.CMU.EDU id aa10166; 3 Jun 93 3:44:32 EDT
Received: from st2.hq.eso.org by CS.CMU.EDU id aa18767; 3 Jun 93 3:43:43 EDT
Date: Thu, 3 Jun 93 09:43:22 +0200
From: fmurtagh@eso.org
Message-Id: <9306030743.AA08508@st2.hq.eso.org>
Received: by st2.hq.eso.org (4.1/ eso-1.1)
	id AA08508; Thu, 3 Jun 93 09:43:22 +0200
To: connectionists@cs.cmu.edu
Subject: Announcement: conferences calendar available in Neuroprose archive

FTP-host: archive.cis.ohio-state.edu
FTP-file: pub/neuroprose/murtagh.calendar.txt.Z

The file murtagh.calendar.txt.Z is available for copying from the Neuroprose 
repository.

It is a CALENDAR of forthcoming conferences and workshops in the neural net
and related fields.  It is about 1300 lines in length, consists of brief 
details (date, title, location, contact), and is valid from mid-May 1993 
onwards.  The intention is to update it in about 3 months.

F. Murtagh (fmurtagh@eso.org)





Return-Path: <ml-connectionists-request@Q.CS.CMU.EDU>
Received: from Q.CS.CMU.EDU by B.GP.CS.CMU.EDU id aa29651; 4 Jun 93 21:26:14 EDT
Received: from Q.CS.CMU.EDU by Q.CS.CMU.EDU id aa13689; 4 Jun 93 20:46:38 EDT
Received: from DST.BOLTZ.CS.CMU.EDU by Q.CS.CMU.EDU id aa13681;
          4 Jun 93 20:25:13 EDT
Received: from DST.BOLTZ.CS.CMU.EDU by DST.BOLTZ.CS.CMU.EDU id aa03965;
          4 Jun 93 20:24:19 EDT
Received: from CS.CMU.EDU by B.GP.CS.CMU.EDU id aa22190; 4 Jun 93 8:15:03 EDT
Received: from sally.Informatik.RWTH-Aachen.DE by CS.CMU.EDU id aa25086;
          4 Jun 93 8:14:38 EDT
Received: from urmel.informatik.rwth-aachen.de by sally.informatik.rwth-aachen.de 
        (4.1/sally-2) id AA08514; Fri, 4 Jun 93 14:13:53 +0200 
Received: from  by urmel.informatik.rwth-aachen.de 
        (4.1/urmel-9) id AB08738; Fri, 4 Jun 93 14:13:52 +0200 
Received: From IKARUS/WORKQUEUE by tom.informatik.rwth-aachen.de
          via Charon-4.0-VROOM with IPX id 100.930604141523.736;
          04 Jun 93 14:15:16 -100
Message-Id: <MAILQUEUE-101.930604141521.672@i4.informatik.rwth-aachen.de>
To: Connectionists@cs.cmu.edu
From: sabine@i4.informatik.rwth-aachen.de
Organization: Informatik IV * RWTH Aachen  
Date:     4 Jun 1993 14:16:26 MEZ-1
Subject:  Workshop on Neural Networks at Aachen, Germany
Priority: normal
X-Mailer: Pegasus Mail/Mac v2.02



CALL FOR PARTICIPATION


"LECTURES AND WORKSHOP 
 ON 
 NEURAL NETWORKS AACHEN '93"


Aachen University of Technology
D - 52056 Aachen, Germany

Introductory Lectures June 21-30 1993
Workshop July 12-13 1993

The first Workshop on Neural Networks at Aachen intends 
to convey ideas on neural methods to a wide audience 
interested in neurocomputing and neurocomputers. 
The 15 distinguished invited speakers will cover topics 
that range from biological issues and the modelling of 
consciousness to neurocomputers. The workshop will 
be complemented by a poster session presenting research 
projects at Aachen University in the field of neural networks.



COMMITTEE

Honorary Chairman:
Prof. I. Aleksander, Imperial College, London

Prof. Dr. rer. nat. O. Spaniol
Forum Informatik, Graduate College 
"Methods and tools of computer science 
and their application in technical systems", 
Aachen University of Technology, D-52056 Aachen, Germany

Neural Networks Special Interest Group INN
Harald Huening, Sabine Neuhauser, Michael Raus, 
Wolf Ritschel, Christiane Schmidt



FINAL PROGRAMME


INTRODUCTORY LECTURES JUNE 21-30, 1993:

June 21, 93, 5 pm, AH III
Prof. E.J.H. Kerckhoffs, Delft University of Technology (NL) 
"An Introduction to Neural Computing"

June 22, 93, 5 pm, AH II
Prof. C. von der Malsburg, Ruhr-Universitaet Bochum (D)
"Neural Networks and the Brain" (German language)

June 25, 93, 2 pm, AH IV
Dr. U. Ramacher, Siemens AG, Munich (D)
"A Computer-Architecture for the Simulation of Artificial Neural Networks 
and the further Development of Neurochips" (German language)

June 30, 93, 5 pm, GH 3, Klinikum
Prof. V. Braitenberg, Max-Planck-Institut Tuebingen (D)
"New Ideas about the Function of the Cerebellum" (German language)


WORKSHOP PROGRAMME, JULY 12, 1993, 9:00 AM - 5:30 PM (AULA II)

9:00 - 9:30 am
Welcome and Introduction

9:30 - 10:15 am
Prof. I. Aleksander, Imperial College, London (UK)
"Iconic State Machines and their Cognitive Properties"

10:15 - 10:45 am
Coffee Break, Poster Session

10:45 - 11:30 am
Prof. E.J.H. Kerckhoffs, Delft University of Technology (NL)
"Thoughts on Conjoint Numeric, Symbolic and Neural Computing"

11:30 am - 12:15 pm
Dr. P. DeWilde, Imperial College, London (UK)
"Reduction of Representations and the Modelling of Consciousness"

12:15 - 2:00 pm
Lunch Break, Poster Session

2:00 - 2:45 pm
Dr. M. Erb, Philipps-Universitaet Marburg (D)
"Synchronized Activity in Biological and Artificial Dynamic Neural Networks: 
Experimental Results and Simulations"

2:45 - 3:30 pm
drs. E. Postma, University of Limburg, Maastricht (NL)
"Towards Scalable Neurocomputers"

3:30 - 4:00 pm
Coffee Break, Poster Session

4:00 - 4:45 pm
J. Heemskerk, Leiden University, Leiden (NL)
"Neurocomputers: Design Principles for a Brain"

4:45 - 5:30 pm
Prof. U. Rueckert, Technical University of Hamburg-Harburg (D)
"Microelectronic Implementation of Neural Networks"


WORKSHOP PROGRAMME, JULY 13, 1993, 9:00 AM - 3:00 PM (AULA II):

9:00 - 9:45 am
Dr. J. H. Schmidhuber, Technical University of Munich (D)
"Continuous History Compression"

9:45 - 10:30 am
K. Weigl, INRIA, Sophia-Antipolis (F)
"Metric Tensors and Non-orthogonal Functional Bases"

10:30 - 11:00 am
Coffee Break, Poster Session

11:00 - 11:45 am
Dr. F. Castillo, Univ. Politecnica de Catalunya, Barcelona (E)
"Statistics and Neural Network Classifiers: 
A Review from Multilayered Perceptrons to Incremental Neural Networks"

11:45 am - 1:30 pm
Lunch Break, Poster Session

1:30 - 2:15 pm
Dr. J. Mrsic-Floegel, Imperial College, London (UK)
"A Review of RAM-based Weightless Nodes"

2:15 - 3:00 pm
J. Schaefer, Aachen University of Technology (D)
"Neural Networks and Fuzzy Technologies"


LOCATIONS

The workshop lectures will be held in the lecture hall
    Aula II, Aachen University of Technology, 
    Ahornstrasse 55, D-52074 Aachen, Germany.
The introductory lectures will be held in one of the following lecture halls,
as indicated in the programme: 
    AH II, AH III, AH IV, Ahornstrasse 55, D-52074 Aachen, Germany and
    GH3, Klinikum Aachen, Pauwelstrasse, D- 52074 Aachen, Germany.
Ahornstrasse can be reached by bus routes no. 23 or 33:
- bus route 33 to "Klinikum" or "Vaals", stop at "Paedagogische Hochschule";
- bus route 23 to "Hoern", stop at "Paedagogische Hochschule".
Klinikum can be reached by bus route 33 as well, stop at "Klinikum".
To reach bus routes 23 or 33, take a bus from the station to "Bushof".


PARTICIPATION

is free of charge. Please register by e-mail with the organizing committee:
Harald Huening: harry@dfv.rwth-aachen.de
Sabine Neuhauser: sabine@informatik.rwth-aachen.de
Michael Raus: raus@rog1.rog.rwth-aachen.de
Wolf Ritschel: ri@mtq03.wzl.rwth-aachen.de


PROCEEDINGS:

    H. Huening, S. Neuhauser, M. Raus, W. Ritschel (eds.):
    "Workshop on Neural networks at RWTH Aachen",
    Aachener Beitraege zur Informatik ABI, Band 2,
    Verlag der Augustinus Buchhandlung, 227 pages

contain the articles of the workshop plus the article 
"Am I Thinking Assemblies?" by Prof. C. von der Malsburg.
Proceedings can be ordered from

    Augustinus Buchhandlung
    Pontstrasse 66/68
    D-52062 Aachen

at a price of 36.- DM plus postage and packing 
(about 3.- DM within Germany). During the Workshop 
the book will be sold at a reduced price by the Augustinus 
bookstore.


LANGUAGE
English will be the official conference language.


sabine@informatik.rwth-aachen.de
_______________________________________________________

Sabine Neuhauser
Aachen University of Technology
Computer Science Department (Informatik IV)
Ahornstrasse 55, W- 5100 Aachen, Germany

!!! please note the new postal code for 
!!! Aachen University of Technology 
!!! valid from 1.7.93 : D-52056 Aachen (postal address)






Return-Path: <ml-connectionists-request@Q.CS.CMU.EDU>
Received: from Q.CS.CMU.EDU by B.GP.CS.CMU.EDU id aa00809; 5 Jun 93 2:44:25 EDT
Received: from Q.CS.CMU.EDU by Q.CS.CMU.EDU id aa13737; 4 Jun 93 21:04:21 EDT
Received: from DST.BOLTZ.CS.CMU.EDU by Q.CS.CMU.EDU id aa13692;
          4 Jun 93 20:26:58 EDT
Received: from DST.BOLTZ.CS.CMU.EDU by DST.BOLTZ.CS.CMU.EDU id aa03972;
          4 Jun 93 20:25:13 EDT
Received: from EDRC.CMU.EDU by B.GP.CS.CMU.EDU id aa26341; 4 Jun 93 15:26:14 EDT
Received: from moose.cs.indiana.edu by EDRC.CMU.EDU id aa29070;
          4 Jun 93 15:25:19 EDT
Received: by moose.cs.indiana.edu
	(5.65c/9.4jsm) id AA04721; Fri, 4 Jun 1993 14:25:14 -0500
Date: Fri, 4 Jun 1993 14:25:14 -0500
From: Michael Gasser <gasser@cs.indiana.edu>
To: connectionists@cs.cmu.edu
Cc: gasser@cs.indiana.edu
Subject: Paper on lexical acquisition

FTP-host: cs.indiana.edu (129.79.254.191)
FTP-filename: /pub/techreports/TR382.ps.Z

The following report is available in compressed postscript form by
anonymous ftp from the site given above (note: NOT neuroprose).  The
paper is 23 pages long.  If you have trouble printing it out, please
contact me.

Michael Gasser
gasser@cs.indiana.edu

=================================================================

		Learning Noun and Adjective Meanings:
		       A Connectionist Account

			    Michael Gasser
	     Computer Science and Linguistics Departments

			    Linda B. Smith
			Psychology Department

			  Indiana University

			       Abstract

   Why do children learn nouns such as {\it cup\/} faster than
dimensional adjectives such as {\it big\/}?  Most explanations of this
well-known phenomenon rely on prior knowledge in the child of the
noun-adjective distinction or on the logical priority of nouns as the
arguments of predicates.  In this paper we examine an alternative
account, one which seeks to explain the relative ease of nouns over
adjectives in terms of the response of the learner to various
properties of the semantic categories to be learned and of the word
learning task itself.  We isolate four such properties: the relative
size and the relative compactness of the regions in representational
space associated with the categories, the presence or absence of
lexical dimensions in the linguistic context of a word ({\it what
color is it?\/} vs. {\it what is it?\/}), and the number of words of a
particular type to be learned.  In a set of five experiments, we
trained a simple connectionist categorization device to label input
objects, in particular linguistic contexts, as nouns or adjectives.
We show that, for the network, the first three of the above properties
favor the more rapid learning of nouns, while the fourth favors the
more rapid learning of adjectives.  Our experiments demonstrate that
the advantage for nouns over adjectives does not require prior
knowledge of the distinction between nouns and adjectives and suggest
that this distinction may instead emerge as the child learns to
associate the different properties of noun and adjective categories
with the different morphosyntactic contexts which elicit them.



Return-Path: <ml-connectionists-request@Q.CS.CMU.EDU>
Received: from Q.CS.CMU.EDU by B.GP.CS.CMU.EDU id aa17166; 6 Jun 93 23:51:24 EDT
Received: from Q.CS.CMU.EDU by Q.CS.CMU.EDU id aa19422; 6 Jun 93 22:31:07 EDT
Received: from DST.BOLTZ.CS.CMU.EDU by Q.CS.CMU.EDU id aa19412;
          6 Jun 93 22:05:48 EDT
Received: from DST.BOLTZ.CS.CMU.EDU by DST.BOLTZ.CS.CMU.EDU id aa07298;
          6 Jun 93 22:05:33 EDT
Received: from CS.CMU.EDU by B.GP.CS.CMU.EDU id aa22895; 4 Jun 93 9:36:09 EDT
Received: from sally.Informatik.RWTH-Aachen.DE by CS.CMU.EDU id aa25426;
          4 Jun 93 9:35:31 EDT
Received: from urmel.informatik.rwth-aachen.de by sally.informatik.rwth-aachen.de 
        (4.1/sally-2) id AA09170; Fri, 4 Jun 93 15:34:49 +0200 
Received: from tom.informatik.rwth-aachen.de by urmel.informatik.rwth-aachen.de 
        (4.1/urmel-9) id AA10745; Fri, 4 Jun 93 15:34:51 +0200 
Received: From IKARUS/WORKQUEUE by tom.informatik.rwth-aachen.de
          via Charon-4.0-VROOM with IPX id 100.930604153635.416;
          04 Jun 93 15:36:12 -100
Message-Id: <MAILQUEUE-101.930604153624.384@i4.informatik.rwth-aachen.de>
To: connectionists-request@cs.cmu.edu
From: sabine@i4.informatik.rwth-aachen.de
Organization: Informatik IV * RWTH Aachen  
Date:     4 Jun 1993 15:37:29 MEZ-1
Subject:  workshop on neural networks, Aachen 93
Priority: normal
X-Mailer: Pegasus Mail/Mac v2.02

Dear organizers of this list,

I've just sent an announcement about the Lectures and Workshop on
Neural Networks at Aachen '93 to the connectionists-address.
The dates for this workshop are June 21-30 for the Introductory Lectures
and July 12-13, 1993 for the Workshop. 
Unfortunately, I mentioned a wrong date for the second day of the
workshop: in the second "Workshop programme..." header, I wrote
"July, 14" instead of "July, 13". I'd be very pleased
if you could change this before posting it to the whole list.

Thanks in advance, I'm really sorry for that mistake,

Sabine Neuhauser

sabine@informatik.rwth-aachen.de
_______________________________________________________

Sabine Neuhauser
Aachen University of Technology
Computer Science Department (Informatik IV)
Ahornstrasse 55, W- 5100 Aachen, Germany

!!! please note the new postal code for 
!!! Aachen University of Technology 
!!! valid from 1.7.93 : D-52056 Aachen (postal address)






Return-Path: <ml-connectionists-request@Q.CS.CMU.EDU>
Received: from Q.CS.CMU.EDU by B.GP.CS.CMU.EDU id aa23703; 7 Jun 93 15:00:24 EDT
Received: from Q.CS.CMU.EDU by Q.CS.CMU.EDU id aa22035; 7 Jun 93 14:33:01 EDT
Received: from DST.BOLTZ.CS.CMU.EDU by Q.CS.CMU.EDU id aa22033;
          7 Jun 93 14:07:06 EDT
Received: from DST.BOLTZ.CS.CMU.EDU by DST.BOLTZ.CS.CMU.EDU id aa09546;
          7 Jun 93 14:06:37 EDT
Received: from EDRC.CMU.EDU by B.GP.CS.CMU.EDU id au21372; 7 Jun 93 11:47:47 EDT
Received: from [129.73.1.15] by EDRC.CMU.EDU id aa05079; 7 Jun 93 11:23:13 EDT
Received: from learning.siemens.com by siemens.siemens.com with smtp
	(Smail3.1.28.1 #22) id m0o2j23-0019GFC; Mon, 7 Jun 93 11:22 EDT
Received: from gull.siemens.com by learning.siemens.com (4.1/SMI-4.1)
	id AA14862; Mon, 7 Jun 93 11:22:41 EDT
From: Barak Pearlmutter <bap@learning.siemens.com>
Received: by gull.siemens.com 
        (4.1//ident-1.0) id AA17817; Mon, 7 Jun 93 11:22:41 EDT 
Date: Mon, 7 Jun 93 11:22:41 EDT
Message-Id: <9306071522.AA17817@gull.siemens.com>
To: connectionists@cs.cmu.edu
Subject: Preprint Available
Reply-To: Barak Pearlmutter <bap@learning.siemens.com>
Ftp-Host: archive.cis.ohio-state.edu
Ftp-Filename: /pub/neuroprose/pearlmutter.hessian.ps.Z

I have placed the preprint whose abstract appears below in the
neuroprose archives.  My thanks to Jordan Pollack for providing this
valuable service to the community.

			   ----------------

	       Fast Exact Multiplication by the Hessian
			 Barak A. Pearlmutter

Just storing the Hessian $H$ (the matrix of second derivatives of the
error $E$ with respect to each pair of weights) of a large neural
network is difficult.  Since a common use of a large matrix like $H$
is to compute its product with various vectors, we derive a technique
that directly calculates $Hv$, where $v$ is an arbitrary vector.  To
calculate $Hv$, we first define a differential operator $R{f(w)} =
(d/dr) f(w+rv) |_{r=0}$, note that $R{dE/dw} = Hv$ and $R{w} = v$,
and then apply $R{}$ to the equations used to compute $dE/dw$.  The
result is an exact and numerically stable procedure for computing
$Hv$, which takes about as much computation, and is about as local, as
a gradient evaluation.  We then apply the technique to a one pass
gradient calculation algorithm (backpropagation), a relaxation
gradient calculation algorithm (recurrent backpropagation), and two
stochastic gradient calculation algorithms (Boltzmann Machines and
weight perturbation).  Finally, we show that this technique can be
used at the heart of many iterative techniques for computing various
properties of $H$, obviating any need to calculate the full Hessian.

[12 pages; 42k; pearlmutter.hessian.ps.Z; To appear in Neural Computation]
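
To make the R{} trick above concrete, here is a minimal numpy sketch
for a one-hidden-layer tanh network with squared error.  This is not
code from the paper; the architecture, names (W1, b1, V1, c1, ...),
and shapes are illustrative assumptions.

     import numpy as np

     def hessian_vector_product(W1, b1, W2, b2, V1, c1, V2, c2, x, t):
         # Returns Hv for E = 0.5*||y - t||^2, where the direction
         # v = (V1, c1, V2, c2) is paired with w = (W1, b1, W2, b2).
         # Ordinary forward pass.
         a1 = W1 @ x + b1
         h = np.tanh(a1)
         y = W2 @ h + b2
         dEdy = y - t
         dEdh = W2.T @ dEdy
         # R{}-forward pass, using R{W1} = V1, R{b1} = c1, R{x} = 0.
         Ra1 = V1 @ x + c1
         Rh = (1.0 - h**2) * Ra1
         Ry = V2 @ h + W2 @ Rh + c2
         RdEdy = Ry                         # since dE/dy = y - t
         # R{}-backward pass: apply R{} to each backprop equation.
         RdEdh = V2.T @ dEdy + W2.T @ RdEdy
         RdEda1 = RdEdh * (1.0 - h**2) - 2.0 * h * Rh * dEdh
         # The R{} of each gradient block is the matching block of Hv.
         return (np.outer(RdEda1, x),       # Hv block for W1
                 RdEda1,                    # Hv block for b1
                 np.outer(RdEdy, h) + np.outer(dEdy, Rh),  # W2 block
                 RdEdy)                     # b2 block

One can sanity-check the result against finite differences of the
gradient, (dE/dw(w + eps*v) - dE/dw(w)) / eps, for small eps.
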

Return-Path: <ml-connectionists-request@Q.CS.CMU.EDU>
Received: from Q.CS.CMU.EDU by B.GP.CS.CMU.EDU id aa26689; 7 Jun 93 21:39:37 EDT
Received: from Q.CS.CMU.EDU by Q.CS.CMU.EDU id aa23104; 7 Jun 93 20:18:11 EDT
Received: from DST.BOLTZ.CS.CMU.EDU by Q.CS.CMU.EDU id aa23101;
          7 Jun 93 19:59:06 EDT
Received: from DST.BOLTZ.CS.CMU.EDU by DST.BOLTZ.CS.CMU.EDU id aa10370;
          7 Jun 93 19:58:04 EDT
Received: from MAILBOX.SRV.CS.CMU.EDU by B.GP.CS.CMU.EDU id aa18663;
          7 Jun 93 5:04:09 EDT
Received: from vax2.sara.nl by MAILBOX.SRV.CS.CMU.EDU id aa14049;
          7 Jun 93 5:01:57 EDT
Received: from SARA.NL by SARA.NL for connectionists@MAILBOX.SRV.CS.CMU.EDU;
           7 Jun 93 11:02 MET
Received: from ALF.LET.UVA.NL by SARA.NL with PMDF#10201; Mon, 7 Jun 1993 10:43
 MET
Date: Mon, 7 Jun 93 10:41 MET
From: SCHOLTES@ALF.LET.UVA.NL
Subject: PhD Dissertation available
To: Connectionists@cs.cmu.edu

===================================================================
       As I had to disappoint many people because I ran out of
       copies in the first batch, a high-quality reprint has
       been made .......................................

                 ........REPRINT........


                Ph.D. DISSERTATION AVAILABLE

                           on

Neural Networks, Natural Language Processing, Information Retrieval

                292 pages and over 350 references

===================================================================

A copy of the dissertation "Neural Networks in Natural Language Processing
and Information Retrieval" by Johannes C. Scholtes can be obtained at cost
price, with fast airmail delivery, for US$ 25.

Payment by major credit cards (VISA, AMEX, MC, Diners) is accepted and
encouraged. Please include the name on the card, the card number, and the
expiration date. Your credit card will be charged Dfl. 47,50.

Within Europe, one can also send a Euro-Cheque for Dfl. 47,50 to:

(include the 4- or 5-digit number on the back of the cheque!)

    University of Amsterdam
    J.C. Scholtes
    Dufaystraat 1
    1075 GR Amsterdam
    The Netherlands
    scholtes@alf.let.uva.nl


Do not forget to mention a shipping address. Please allow 2-4
weeks for delivery.


                            Abstract

1.0  Machine Intelligence

For over fifty years, the two main directions in machine intelligence (MI),
neural networks (NN) and artificial intelligence (AI), have been studied by
researchers from many different backgrounds. NN and AI seemed to conflict
with many of the traditional sciences as well as with each other.
The lack of a long research history and well-defined foundations
has always been an obstacle to the general acceptance of machine
intelligence by other fields.

At the same time, traditional schools of science such as mathematics and
physics developed their own tradition of new or "intelligent" algorithms.
Progress in statistical re-estimation techniques such as Hidden Markov
Models (HMMs) started a new phase in speech recognition. Another example of
the progress of mathematics can be found in the application of the Kalman
filter to the interpretation of sonar and radar signals. Many more examples
of such "intelligent" algorithms can be found in the statistical
classification and filtering techniques of the study of
pattern recognition (PR).

Here, the field of neural networks is studied with that of pattern
recognition in mind. Although only global qualitative comparisons are made,
the importance of the relation between them is not to be underestimated. In
addition it is argued that neural networks do indeed add something to the
fields of MI and PR, instead of competing or conflicting with them.

2.0  Natural Language Processing

The study of natural language processing (NLP) is even older than that
of MI. As early as the beginning of this century, people tried to analyse
human language with machines. However, serious efforts had to wait for
the development of the digital computer in the 1940s, and even then
the possibilities were limited. For over 40 years, symbolic AI has been the
most important approach in the study of NLP. That this has not always
been the case may be concluded from the early work on NLP by Harris. As a
matter of fact, Chomsky's Syntactic Structures was an attack on the lack of
structural properties in the mathematical methods used in those days. But,
as the latter's work remained the standard in NLP, the former was
forgotten almost completely until recently. As the scientific community in
NLP devoted all its attention to symbolic AI-like theories, the only useful
practical implementations of NLP systems were those based on
statistics rather than on linguistics. As a result, more and more scientists
are redirecting their attention towards the statistical techniques
available in NLP. The field of connectionist NLP can be considered a
special case of these mathematical methods in NLP.

More than one reason can be given to explain this turn in approach. On the
one hand, many problems in NLP have never been addressed properly by
symbolic AI. Some examples are robust behavior in noisy environments,
disambiguation driven by different kinds of knowledge, commonsense
generalizations, and learning (or training) abilities. On the other hand,
mathematical methods have become much stronger and more sensitive to
specific properties of language such as hierarchical structures.

Last but not least, the relatively high degree of success of mathematical
techniques in commercial NLP systems may have set the trend towards the
implementation of simple but straightforward algorithms.

In this study, much attention is given to the implementation of
hierarchical structures and semantic features in mathematical objects such
as vectors and matrices. These vectors can then be used in models such as
neural networks, but also in sequential statistical procedures implementing
similar characteristics.

3.0  Information Retrieval

The study of information retrieval (IR) was traditionally related to
libraries on the one hand and military applications on the other. However,
as PCs grew more popular, most users lost track of the data they had
produced over the last couple of years. This, together with the
introduction of various "small platform" computer programs, made the field
of IR relevant to ordinary users.

However, most of these systems still use techniques that were developed
over thirty years ago and that implement nothing more than a global
surface analysis of textual (layout) properties. No deep structure
whatsoever is incorporated in the decision whether or not to retrieve a
text.

There is one large dilemma in IR research. On the one hand, the data
collections are so incredibly large, that any method other than a global
surface analysis would fail. On the other hand, such a global analysis could
never implement a contextually sensitive method to restrict the number of
possible candidates returned by the retrieval system. As a result, all
methods that use some linguistic knowledge exist only in laboratories and
not in the real world. Conversely, all methods that are used in the real
world are based on technological achievements from twenty to thirty
years ago.

Therefore, the field of information retrieval would be greatly indebted
to a method that could incorporate more context without slowing down. As
computers can only process numbers within reasonable time
limits, such a method should be based on vectors of numbers rather than
on symbol manipulations. This is exactly where the challenge lies: on the
one hand keeping up the speed, and on the other hand incorporating more
context. If possible, the data representation of the contextual information
should not be restricted to a single type of medium. It should be possible
to incorporate symbolic language as well as sound, pictures and video
concurrently in the retrieval phase, although one does not yet know
exactly how.

Here, the emphasis is more on real-time filtering of large amounts of
dynamic data than on document retrieval from large (static) databases.
By incorporating more contextual information, it should be possible to
implement a model that can process large amounts of unstructured text
without overwhelming the end-user with information.

4.0  The Combination

As this study is a very multi-disciplinary one, the risk exists that it
remains restricted to a surface discussion of many different problems
without analyzing any one in depth. To avoid this, some central themes,
applications and tools are chosen. The themes in this work are
self-organization, distributed data representations and context. The
applications are NLP and IR; the tools are (variants of) Kohonen feature
maps, a well-known model from neural network research.

Self-organization and context are more closely related than one may
suspect. First, without the proper natural context, self-organization is
not possible. Next, self-organization enables one to discover contextual
relations that were not known before.

Distributed data representation may solve many of the unsolved problems in
NLP and IR by introducing a powerful and efficient knowledge integration
and generalization tool. However, distributed data representation and
self-organization trigger new problems that should be solved in an
elegant manner.

Both NLP and IR work on symbolic language. They have properties in common
but focus on different features of language. In NLP, hierarchical
structures and semantic features are important. In IR, the amount of
data sets the limits of the methods used. However, as computers grow
more powerful and the data sets get larger and larger, the two approaches
gain more and more common ground. By using the same models in both
applications, a better understanding of both may be obtained.

Both neural networks and statistics would be able to implement
self-organization, distributed data and context in the same manner.
In this thesis, the emphasis is on Kohonen feature maps rather than on
statistics. However, it may be possible to implement many of the
techniques used with regular sequential mathematical algorithms.

So, the true aim of this work can be formulated as the understanding of
self-organization, distributed data representation, and context in NLP and
IR, through an in-depth analysis of Kohonen feature maps.
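
As a concrete reference point for the central tool named above, here is
a minimal sketch of a Kohonen feature map (self-organizing map) in
Python.  The grid size, decay schedules, and function name are
illustrative assumptions, not taken from the dissertation.

     import numpy as np

     def train_som(data, grid=(10, 10), epochs=20, lr0=0.5, sigma0=5.0):
         # data: (n_samples, dim) array.  Returns the (gx, gy, dim)
         # codebook of weight vectors after training.
         rng = np.random.default_rng(0)
         gx, gy = grid
         w = rng.normal(size=(gx, gy, data.shape[1]))
         coords = np.stack(np.meshgrid(np.arange(gx), np.arange(gy),
                                       indexing="ij"), axis=-1)
         n_steps, step = epochs * len(data), 0
         for _ in range(epochs):
             for x in rng.permutation(data):
                 frac = step / n_steps
                 lr = lr0 * (1.0 - frac)               # decaying step size
                 sigma = sigma0 * (1.0 - frac) + 1e-3  # shrinking radius
                 # Best-matching unit: cell whose weight is closest to x.
                 d = np.linalg.norm(w - x, axis=-1)
                 bmu = np.unravel_index(np.argmin(d), (gx, gy))
                 # Gaussian neighborhood on the grid around the BMU.
                 g2 = ((coords - np.array(bmu)) ** 2).sum(axis=-1)
                 nbh = np.exp(-g2 / (2.0 * sigma**2))
                 # Pull every unit toward x, weighted by neighborhood.
                 w += lr * nbh[..., None] * (x - w)
                 step += 1
         return w

After training, nearby grid cells hold similar codebook vectors; this
topology-preserving clustering is the self-organizing property the
thesis builds on.
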


==============================================================================


Return-Path: <ml-connectionists-request@Q.CS.CMU.EDU>
Received: from Q.CS.CMU.EDU by B.GP.CS.CMU.EDU id ab15637; 9 Jun 93 13:53:51 EDT
Received: from Q.CS.CMU.EDU by q.cs.CMU.EDU id aa02479; 9 Jun 93 13:52:43 EDT
Received: from DST.BOLTZ.CS.CMU.EDU by Q.CS.CMU.EDU id aa02476;
          9 Jun 93 13:23:06 EDT
Received: from DST.BOLTZ.CS.CMU.EDU by DST.BOLTZ.CS.CMU.EDU id aa00550;
          9 Jun 93 13:22:13 EDT
Received: from EDRC.CMU.EDU by B.GP.CS.CMU.EDU id aa05990; 8 Jun 93 15:09:01 EDT
Received: from arapaho.UCSC.EDU by EDRC.CMU.EDU id aa09336;
          8 Jun 93 14:11:22 EDT
Received: by arapaho.ucsc.edu id AA25547
  (5.65c/IDA-1.4.4 for Connectionists@cs.cmu.edu); Tue, 8 Jun 1993 11:11:16 -0700
Date: Tue, 8 Jun 1993 11:11:16 -0700
From: David Haussler <haussler@cse.ucsc.edu>
Message-Id: <199306081811.AA25547@arapaho.ucsc.edu>
To: Connectionists@cs.cmu.edu
Subject: COLT `93: Early registration deadline June 15


                     COLT '93     
   Sixth ACM Conference on Computational Learning Theory   
   Monday, July 26 through Wednesday, July 28, 1993   
   University of California, Santa Cruz, California   
 
EARLY REGISTRATION DEADLINE: JUNE 15

   The workshop will be held on campus, which is hidden away in the  
redwoods on the Pacific coast of Northern California. The workshop is 
sponsored by the ACM Special Interest Group on Algorithms and Computation 
Theory (SIGACT) and the ACM Special Interest Group on Artificial 
Intelligence (SIGART).   The long version of this document is available
by anonymous ftp from ftp.cse.ucsc.edu.  To retrieve the document:
1) ftp ftp.cse.ucsc.edu and log in as "anonymous",
2) cd pub/colt, 3) binary, 4) get colt93.registration.ps.


REGISTRATION INFORMATION 
------------------------

Please fill in the information needed on the registration sheet.
Make your payment by check or international money order, 
in U.S. dollars and payable through a U.S. bank, to COLT '93.
Mail the form together with payment (by June 15 to avoid the late fee) to:

  COLT '93 
  Dept. of Computer Science 
  University of California 
  Santa Cruz, California  95064 


ACCOMMODATIONS AND DINING 

Accommodation fees are  $57 per person for a double and $70 for a single 
per night at the College Eight Apartments. Cafeteria style breakfast 
(7:45 to 8:30am), lunch (12:30 to 1:30pm), and  dinner (6:00 to 7:00pm) 
will be served in the College Eight Dining Hall.  Doors close at the 
end of the time indicated, but dining may continue beyond this time.  
The first meal provided is dinner on the day of arrival and the last 
meal is lunch on the day you leave.  NO REFUNDS can be given after June 15.  
Those with uncertain plans should make reservations at an off-campus hotel.
Each attendee should pick one of the five accommodation packages.

For shorter stays, longer stays, and other special requirements, you can get 
other accommodations through the Conference Office.  Make reservations 
directly with them at (408)459-2611, fax (408)459-3422, and do this soon 
as on-campus rooms for the summer fill up well in advance.  Off-campus 
hotels include the Dream Inn (408)426-4330 and the Holiday Inn (408)426-7100.
 
Questions:  e-mail colt93@cse.ucsc.edu, fax (408)429-4829.  
Confirmations will be sent by e-mail.  Anyone needing special arrangements 
to accommodate a disability should enclose a note with their registration.
If you don't receive confirmation within three weeks of payment, let us know.
Get updated versions of this document by anonymous ftp from 
ftp.cse.ucsc.edu.


 
CONFERENCE REGISTRATION FORM   (see accompanying information for details)

Name:         ___________________________________

Affiliation:  ___________________________________

Address:      ___________________________________

City:         ________________  State: ____________  Zip: ________________ 

Country:      ____________________  

Telephone:    (____) ________________

Email:         ________________________   

The registration fee includes a copy of the proceedings.  

ACM/SIG Members:    $165  (with banquet)                  $___________
Non-Members:        $185  (with banquet)                  $___________
Late:               $220  (postmarked after June 15)      $___________
Full time students:  $80  (no banquet)                    $___________
Extra banquet tickets: ___ (quantity) x  $18 =            $___________

How many in your party have dietary restrictions?    

Vegetarian: ___________  Other:  ___________      

Shirt size, please circle one:      small   medium   large   x-large      

ACCOMMODATIONS:  Pick one package:
 
_____ Package 1: Sun, Mon, Tue nights:            $171 double, $210 single.
_____ Package 2: Sat, Sun, Mon, Tue nights:       $228 double, $280 single. 
_____ Package 3: Sun, Mon, Tues, Wed nights:      $228 double, $280 single.
_____ Package 4: Sat, Sun, Mon, Tue, Wed nights:  $285 double, $350 single.
_____ Other housing arrangement.  

Each 4-person apartment  has a living room, a kitchen, two common bathrooms, 
and either four single separate rooms, two double rooms, or two single and 
one double room.  We need the following information to make room assignments: 

Gender (M/F):  __________      Smoker (Y/N):  __________

Roommate Preference:  ____________________

AMOUNT ENCLOSED: 
  Registration        $___________________
  Banquet tickets     $___________________
  Accommodations      $___________________
  TOTAL               $___________________

Mail this form together with payment (by June 15 to avoid the late fee) to:
COLT '93, Dept. of Computer Science, Univ. California, Santa Cruz, CA 95064 



       COLT '93 --- Conference Schedule     
   Sixth ACM Conference on Computational Learning Theory   
   Monday, July 26 through Wednesday, July 28, 1993   
   University of California, Santa Cruz, California   
 
 

SUNDAY, JULY 25 

   4:00 - 6:00 pm, Housing Registration, College Eight Satellite Office. 

   7:00 - 10:00 pm, Reception, Oakes Learning Center.  
   Preregistered attendees may check in at the reception.

   Note:  All technical sessions will take place in Oakes 105.

 

MONDAY, JULY 26 

Session 1:  Learning with Queries  
Chair: Dana Angluin 

8:20-8:40     
   Learning Sparse Polynomials over Fields with Queries and Counterexamples.  
   Robert E. Schapire and Linda M. Sellie

8:40-9:00
   Learning   Branching Programs with Queries.  
   Vijay Raghavan and Dawn Wilkins

9:00-9:10
   Linear Time Deterministic Learning of k-term DNF. 
   Ulf Berggren

9:10-9:30
   Asking Questions to Minimize Errors.  
   Nader H. Bshouty, Sally A. Goldman, Thomas R. Hancock, and Sleiman Matar

9:30-9:40
   Parameterized Learning Complexity.  
   Rodney G. Downey, Patricia Evans, and Michael R. Fellows

9:40-10:00 
   On the Query Complexity of Learning.  
   Sampath K. Kannan

10:00 - 10:30    BREAK 

Session 2: New Learning Models and Problems  
Chair: Sally Goldman 
                          
10:30-10:50
   Teaching a Smarter Learner.  
   Sally A. Goldman and H. David Mathias

10:50-11:00 
   Learning and Robust Learning of Product Distributions. 
   Klaus-U. Hoffgen  

11:00-11:20
   A Model of Sequence Extrapolation.  
   Philip Laird, Ronald Saul and Peter Dunning

11:20-11:30 
   On Polynomial-Time Probably Almost Discriminative Learnability. 
   Kenji Yamanishi

11:30-11:50 
   Learning from a Population of Hypotheses. 
   Michael Kearns and Sebastian Seung

11:50-12:00 
   On Probably Correct Classification of Concepts. 
   S.R. Kulkarni and O. Zeitouni

12:00 - 1:40    LUNCH 


Session 3: Inductive Inference; Neural Nets  
Chair: Bob Daley 

1:40-2:00 
   On the Structure of Degrees of Inferability.  
   Martin Kummer and Frank Stephan

2:00-2:20 
   Language Learning in Dependence on the Space of Hypotheses. 
   Steffen Lange and Thomas Zeugmann

2:20-2:30 
   On the Power of Sigmoid Neural Networks.  
   Joe Kilian and Hava T. Siegelmann

2:30-2:40 
   Lower Bounds on the Vapnik-Chervonenkis Dimension of
   Multi-layer Threshold Networks.  
   Peter L. Bartlett	

2:40-2:50 
   Average Case Analysis of the Clipped Hebb Rule
   for Nonoverlapping Perceptron Networks.  
   Mostefa Golea and Mario Marchand

2:50-3:00 
   On the Power of Polynomial Discriminators and Radial Basis
   Function Networks.  
   Martin Anthony and Sean B. Holden

3:00 - 3:30      BREAK 

 
3:30-4:30   Invited Talk  by Geoffrey Hinton 
            The Minimum Description Length Principle and Neural Networks. 
   
4:45 - ?      Impromptu talks, open problems, etc. 


7:00 - 10:00 pm, Banquet, barbecue pit outside Porter Dining Hall. 

 

TUESDAY, JULY 27 

Session 4:  Inductive Inference  
Chair: Rolf Wiehagen 


8:20-8:40    
   The Impact of Forgetting on Learning Machines. 
    Rusins Freivalds, Efim Kinber, and Carl H. Smith

8:40-8:50    
   On Parallel Learning. 
   Efim Kinber, Carl H. Smith, Mahendran Velauthapillai, and Rolf Wiehagen

8:50-9:10    
   Capabilities of Probabilistic Learners with Bounded Mind Changes.  
   Robert Daley and Bala Kalyanasundaram

9:10-9:20    
   Probability is More Powerful than Team for Language
   Identification from Positive Data.  
   Sanjay Jain and Arun Sharma

9:20-9:40    
   Capabilities of Fallible FINite Learning.  
   Robert Daley, Bala Kalyanasundaram, and Mahendran Velauthapillai

9:40-9:50    
   On Learning in the Limit and Non-uniform (epsilon, delta)-Learning. 
   Shai Ben-David and Michal Jacovi

9:50 - 10:20     BREAK 


Session 5:  Formal Languages, Rectangles, and Noise  
Chair: Takeshi Shinohara 


10:20-10:40 
   Learning Fallible Deterministic Finite Automata. 
   Dana Ron and Ronitt Rubinfeld

10:40-11:00    
   Learning Two-Tape Automata from Queries and Counterexamples. 
   Takashi Yokomori

11:00-11:10    
   Efficient Identification of Regular Expressions from Representative 
   Examples.  Alvis Brazma

11:10-11:30    
   Learning Unions of Two Rectangles in the Plane with Equivalence Queries. 
   Zhixiang Chen

11:30-11:50 
   On-line Learning of Rectangles in Noisy Environments. 
   Peter Auer

11:50-12:00
   Statistical Queries and Faulty PAC Oracles.  
   Scott Evan Decatur
 

12:00 - 1:40     LUNCH 

 
Session 6: New Models; Linear Thresholds  
Chair: Wray Buntine 

1:40-2:00 
    Learning an Unknown Randomized Algorithm from its Behavior. 
    William Evans, Sridhar Rajagopalan, and Umesh Vazirani

2:00-2:20 
    Piecemeal Learning of an Unknown Environment. 
    Margrit Betke, Ronald L. Rivest, and Mona Singh

2:20-2:40
    Learning with Restricted Focus of Attention.  
    Shai Ben-David and Eli Dichterman

2:40-2:50
    Polynomial Learnability of Linear Threshold Approximations. 
    Tom Bylander

2:50-3:00 
    Rate of Approximation Results Motivated by Robust Neural Network Learning. 
    Christian Darken, Michael Donahue, Leonid Gurvits, and Eduardo Sontag

3:00-3:10
    On the Average Tractability of Binary Integer Programming and the 
    Curious Transition to Generalization in Learning Majority Functions.  
    Shao C. Fang and Santosh S. Venkatesh
 
3:10 - 3:30     BREAK 

 
3:30-4:30  Invited Talk  by John Grefenstette 
           Genetic Algorithms and Machine Learning 


4:45 - ?      Impromptu talks, open problems, etc. 

7:00 - 8:30    Poster Session and Dessert   
               Oakes Learning Center

8:30 - 10:00   Business Meeting  
               Oakes 105

 
WEDNESDAY, JULY 28 
 

Session 7:  PAC Learning  
Chair: Yishay Mansour 


8:20-8:40
    On Learning Visual Concepts and DNF Formulae. 
    Eyal Kushilevitz and Dan Roth

8:40-9:00
    Localization vs. Identification of Semi-Algebraic Sets. 
    Shai Ben-David and Michael Lindenbaum

9:00-9:20
    On Learning Embedded Symmetric Concepts.  
    Avrim Blum,  Prasad Chalasani, and Jeffrey Jackson

9:20-9:30
    Amplification of Weak Learning Under the Uniform Distribution. 
    Dan Boneh and Richard J. Lipton

9:30-9:50
    Learning   Decision Trees on the Uniform Distribution. 
    Thomas R. Hancock

9:50 - 10:20   BREAK 

Session 8: VC dimension, Learning Complexity, and Lower Bounds  
Chair: Sebastian Seung 


10:20-10:40
    Bounding the Vapnik-Chervonenkis Dimension of Concept Classes
    Parameterized by Real Numbers.  
    Paul Goldberg and Mark Jerrum

10:40-10:50 
   Occam's Razor for Functions. 
   B.K. Natarajan

10:50-11:00
    Conservativeness and Monotonicity for Learning Algorithms. 
    Eiji Takimoto and Akira Maruoka

11:00-11:20
    Lower Bounds for PAC Learning with Queries. 
    Gyorgy Turan 

11:20-11:40 
    On the Complexity of Function Learning.  
    Peter Auer, Philip M. Long, Wolfgang Maass, and Gerhard J. Woeginger

11:40-12:00
    General Bounds on the Number of Examples Needed for Learning 
     Probabilistic Concepts.  
    Hans Ulrich Simon

NOON:  Check-out of Rooms 
 
12:00 - 1:40   LUNCH


Session 9: On-Line Learning  
Chair: Kenji Yamanishi 


1:40-2:00 
    On-line Learning with Linear Loss Constraints.  
    Nick Littlestone and Philip M. Long

2:00-2:10
    The `Lob-Pass' Problem and an On-line Learning Model of Rational Choice.  
    Naoki Abe and Jun-ichi Takeuchi

2:10-2:30
    Worst-case Quadratic Loss Bounds for a Generalization of the 
    Widrow-Hoff Rule. 
    Nicolo Cesa-Bianchi, Philip M. Long, and Manfred K. Warmuth

2:30-2:40
    On-line Learning of Functions of Bounded Variation under 
    Various Sampling Schemes.  
   S.E. Posner and  S.R. Kulkarni

2:40-2:50
    Acceleration of Learning in Binary Choice Problems. 
    Yoshiyuki Kabashima and Shigeru Shinomoto

2:50-3:10
    Learning Binary Relations Using Weighted Majority Voting. 
    Sally A. Goldman and Manfred K. Warmuth

 
3:10     CONFERENCE ENDS  

3:10 - ?   Last fling on the Boardwalk.




Return-Path: <ml-connectionists-request@Q.CS.CMU.EDU>
Received: from Q.CS.CMU.EDU by B.GP.CS.CMU.EDU id aa18299; 9 Jun 93 18:38:33 EDT
Received: from Q.CS.CMU.EDU by Q.CS.CMU.EDU id aa02599; 9 Jun 93 14:12:50 EDT
Received: from DST.BOLTZ.CS.CMU.EDU by Q.CS.CMU.EDU id ab02476;
          9 Jun 93 13:23:42 EDT
Received: from DST.BOLTZ.CS.CMU.EDU by DST.BOLTZ.CS.CMU.EDU id aa00555;
          9 Jun 93 13:22:35 EDT
Received: from CS.CMU.EDU by B.GP.CS.CMU.EDU id aa10033; 8 Jun 93 21:54:07 EDT
Received: from ua1vm.ua.edu by CS.CMU.EDU id aa16000; 8 Jun 93 21:52:56 EDT
Received: from comec4.mh.ua.edu by UA1VM.UA.EDU (IBM VM SMTP V2R2) with TCP;
   Tue, 08 Jun 93 20:45:55 CDT
Received: by comec4.mh.ua.edu (AIX 3.2/UCB 5.64/4.03)
          id AA13062; Tue, 8 Jun 1993 20:35:30 -0500
Message-Id: <9306090135.AA13062@comec4.mh.ua.edu>
To: ml@ics.uci.edu, psych%tcsvm.bitnet@cunyvm.cuny.edu, 
    news-announce-conferences@uunet.uu.net, neuron@hplabs.hpl.hp.com, 
    biosci@largo.ig.com, ETHOLOGY%FINHUTC.BITNET@cunyvm.cuny.edu, 
    neuro-evolution@cse.ogi.edu, alife@cognet.ucla.edu, 
    neuron@hplabs.HPL.HP.COM, biosci%net.bio.net@vm1.nodak.edu, 
    ga-list@AIC.NRL.NAVY.MIL, Connectionists@cs.cmu.edu, 
    simulation@BIKINI.CIS.UFL.EDU
Subject: ICGA workshop proposal/participation request
Date: Tue, 08 Jun 93 20:35:30 -0600
From: "Robert Elliott Smith.dat" <rob@comec4.mh.ua.edu>


		     Call for Workshop Proposals
		      and Workshop Participation
				   
				   
			       ICGA-93
				   
		The Fifth International Conference on
			  Genetic Algorithms
				   
			   17-21 July, 1993
		      University of Illinois at
			   Urbana-Champaign




Early this Spring, the organizers of ICGA solicited proposals for workshops.
Proposals for six workshops have been received and accepted thus far.
These workshops are listed below.
ICGA attendees are encouraged to contact the organizers of workshops in
which they would like to participate. Email addresses for workshop
organizers are included below.

The organizers would also like to encourage proposals for additional
workshops. If you would like to organize and chair a workshop, please 
submit a one-paragraph proposal, including a description of the workshop's
topic, and some idea of how the workshop will be organized. 

Workshop proposals will be accepted by email only at
icga93@pele.cs.unm.edu 

At the ICGA91 (in San Diego), the workshops served an important role,
providing smaller, less formal meetings for the discussion of specific topics
related to genetic algorithms research. The organizers hope that this
tradition will continue at ICGA93.


ICGA93 workshops (if you wish to participate, please write directly to the
workshop's organizer):
------------------------------------------------------------------------

Genetic Programming 
	Organizer: Kim Kinnear (kim.kinnear@sun.com)

Engineering Applications of GAs (structural shape and topology optimization)
	Organizer: Mark Jakiela (jakiela@MIT.EDU)

Discovery of long-action chains and emergence of hierarchies in classifier
systems
	Organizers: Alex Shevorshkon
		    Erhard Bruderer (Erhard.Bruderer@um.cc.umich.edu)

Niching Methods
	Organizer: Alan Schultz (schultz@aic.nrl.navy.mil)
		   Sam Mahfoud (mahfoud@gal4.ge.uiuc.edu)

Combinations of GAs and Neural Nets (COGANN)
	Organizer: J. David Schaffer (ds1@philabs.Philips.Com)

GAs in control systems
	Organizer: Terry Fogarty (tc_fogar@pat.uwe-bristol.ac.uk)










Return-Path: <ml-connectionists-request@Q.CS.CMU.EDU>
Received: from Q.CS.CMU.EDU by B.GP.CS.CMU.EDU id ac18299; 9 Jun 93 18:39:17 EDT
Received: from Q.CS.CMU.EDU by Q.CS.CMU.EDU id ac02599; 9 Jun 93 14:22:49 EDT
Received: from DST.BOLTZ.CS.CMU.EDU by Q.CS.CMU.EDU id aa02592;
          9 Jun 93 13:51:05 EDT
Received: from DST.BOLTZ.CS.CMU.EDU by DST.BOLTZ.CS.CMU.EDU id aa00650;
          9 Jun 93 13:50:11 EDT
Received: from SEF1.SLISP.CS.CMU.EDU by B.GP.CS.CMU.EDU id aa15564;
          9 Jun 93 13:47:22 EDT
Received: from SEF1.SLISP.CS.CMU.EDU by SEF1.SLISP.CS.CMU.EDU id aa00747;
          9 Jun 93 13:46:57 EDT
To: connectionists@cs.cmu.edu
Subject: Quick survey: Cascor and Quickprop
Date: Wed, 09 Jun 93 13:46:48 -0400
From: Scott_Fahlman@SEF1.SLISP.CS.CMU.EDU


Distributing code by anonymous FTP is convenient for everyone with decent
internet connections, but it has the disadvantage that it is hard to keep
track of who is using the code.  Every so often we need to justify our
existence to someone and need to show them that there are a non-trivial
number of real users out there.

If you are now using, or have recently used, any of my neural net
algorithms or programs (Quickprop, Cascade-Correlation, Recurrent
Cascade-Correlation), I would very much appreciate it if you would send me
a quick E-mail message with your name, organization, and (if it's not a
secret) just a few words about what you are doing with it.  (For example:
"classifying textures in satellite photos".)

For those of you who don't know about the availability of this code (and
related papers), I enclose below some instructions on how to get these
things by anonymous FTP.

Thanks,
Scott

===========================================================================
Scott E. Fahlman			Internet:  sef+@cs.cmu.edu
Senior Research Scientist		Phone:     412 268-2575
School of Computer Science              Fax:       412 681-5739
Carnegie Mellon University		Latitude:  40:26:33 N
5000 Forbes Avenue			Longitude: 79:56:48 W
Pittsburgh, PA 15213
===========================================================================

Public-domain simulation programs for the Quickprop, Cascade-Correlation,
and Recurrent Cascade-Correlation learning algorithms are available via
anonymous FTP on the Internet.  This code is distributed without charge on
an "as is" basis.  There is no warranty of any kind by the authors or by
Carnegie-Mellon University.

Instructions for obtaining the code via FTP are included below.  If you
can't get it by FTP, contact me by E-mail (sef+@cs.cmu.edu) and I'll try
*once* to mail it to you.  Specify whether you want the C or Lisp version.
If it bounces or your mailer rejects such a large message, I don't have
time to try a lot of other delivery methods.

HOW TO GET IT:

For people (at CMU, MIT, and soon some other places) with access to the
Andrew File System (AFS), you can access the files directly from directory
"/afs/cs.cmu.edu/project/connect/code".  This file system uses the same
syntactic conventions as BSD Unix: case sensitive names, slashes for
subdirectories, no version numbers, etc.  The protection scheme is a bit
different, but that shouldn't matter to people just trying to read these
files.

For people accessing these files via FTP:

1. Create an FTP connection from wherever you are to machine
"ftp.cs.cmu.edu".  The internet address of this machine is 128.2.206.173,
for those who need it.

2. Log in as user "anonymous" with your own ID as password.  You may see an
error message that says "filenames may not have /.. in them" or something
like that.  Just ignore it.

3. Change remote directory to "/afs/cs/project/connect/code".  NOTE: You
must do this in a single operation.  Some of the super directories on this
path are protected against outside users.

4. At this point FTP should be able to get a listing of files in this
directory with DIR and fetch the ones you want with GET.  (The exact FTP
commands you use depend on your local FTP server.)

Partial contents:

quickprop1.lisp		Original Common Lisp version of Quickprop.
quickprop1.c		C version by Terry Regier, U. Cal. Berkeley.
backprop.lisp		Overlay for quickprop1.lisp.  Turns it into backprop.
cascor1.lisp		Original Common Lisp version of Cascade-Correlation.
cascor1.c		C version by Scott Crowder, Carnegie Mellon
rcc1.lisp		Common Lisp version of Recurrent Cascade-Correlation.
rcc1.c			C version, trans. by Conor Doherty, Univ. Coll. Dublin
nevprop1.15.shar	Better quickprop implementation in C from U. of Nevada.
---------------------------------------------------------------------------
Tech reports describing these algorithms can also be obtained via FTP.
These are Postscript files, processed with the Unix compress/uncompress
program.

unix> ftp ftp.cs.cmu.edu (or 128.2.206.173)
Name: anonymous
Password: <your user id>
ftp> cd /afs/cs/project/connect/tr
ftp> binary
ftp> get filename.ps.Z
ftp> quit
unix> uncompress filename.ps.Z
unix> lpr filename.ps   (or however you print postscript files)

For "filename", sustitute the following:

qp-tr			Paper on Quickprop and other backprop speedups.
cascor-tr		Cascade-Correlation paper.
rcc-tr			Recurrent Cascade-Correlation paper.
precision		Hoehfeld-Fahlman paper on Cascade-Correlation with
			limited numerical precision.
---------------------------------------------------------------------------
The following are the published conference and journal versions of the
above (in some cases shortened and revised):

Scott E. Fahlman (1988) "Faster-Learning Variations on Back-Propagation: An
Empirical Study" in {\it Proceedings, 1988 Connectionist Models Summer
School}, D. S. Touretzky, G. E. Hinton, and T. J. Sejnowski (eds.),
Morgan Kaufmann Publishers, Los Altos CA, pp. 38-51.

Scott E. Fahlman and Christian Lebiere (1990) "The Cascade-Correlation
Learning Architecture", in {\it Advances in Neural Information Processing
Systems 2}, D. S. Touretzky (ed.), Morgan Kaufmann Publishers, Los Altos
CA, pp. 524-532.

Scott E. Fahlman (1991) "The Recurrent Cascade-Correlation Architecture" in
{\it Advances in Neural Information Processing Systems 3}, R. P. Lippmann,
J. E. Moody, and D. S. Touretzky (eds.), Morgan Kaufmann Publishers, Los
Altos CA, pp. 190-196.

Marcus Hoehfeld and Scott E. Fahlman (1992) "Learning with Limited
Numerical Precision Using the Cascade-Correlation Learning Algorithm" in
IEEE Transactions on Neural Networks, Vol. 3, no. 4, July 1992, pp.
602-611.


Return-Path: <ml-connectionists-request@Q.CS.CMU.EDU>
Received: from Q.CS.CMU.EDU by B.GP.CS.CMU.EDU id aa21850; 10 Jun 93 7:43:43 EDT
Received: from Q.CS.CMU.EDU by Q.CS.CMU.EDU id ab02599; 9 Jun 93 14:17:15 EDT
Received: from DST.BOLTZ.CS.CMU.EDU by Q.CS.CMU.EDU id aa02482;
          9 Jun 93 13:24:48 EDT
Received: from DST.BOLTZ.CS.CMU.EDU by DST.BOLTZ.CS.CMU.EDU id aa00561;
          9 Jun 93 13:23:00 EDT
Received: from MAILBOX.SRV.CS.CMU.EDU by B.GP.CS.CMU.EDU id aa12988;
          9 Jun 93 9:22:56 EDT
Received: from siemens.siemens.com by MAILBOX.SRV.CS.CMU.EDU id aa17576;
          9 Jun 93 9:22:00 EDT
Received: from learning.siemens.com by siemens.siemens.com with smtp
	(Smail3.1.28.1 #22) id m0o3Q5x-0019GnC; Wed, 9 Jun 93 09:21 EDT
Received: from tractatus.siemens.com.siemens.com by learning.siemens.com (4.1/SMI-4.1)
	id AA23658; Wed, 9 Jun 93 09:21:36 EDT
Received: by tractatus.siemens.com.siemens.com 
        (4.1//ident-1.0) id AA14087; Wed, 9 Jun 93 09:21:35 EDT 
Received: from Messages.8.5.N.CUILIB.3.45.SNAP.NOT.LINKED.tractatus.siemens.com.sun4.41
          via MS.5.6.tractatus.siemens.com.sun4_41;
          Wed,  9 Jun 1993 09:21:35 -0400 (EDT)
Message-Id: <0g5SDTG1GEMnEpEfFi@tractatus.siemens.com>
Date: Wed,  9 Jun 1993 09:21:35 -0400 (EDT)
From: "Steve Hanson,(U,6500,,p)" <jose@learning.siemens.com>
Mime-Version: 1.0
Content-Type: text/plain; charset=US-ASCII
To: Connectionists@MAILBOX.SRV.CS.CMU.EDU
Subject: NIPS5 Oversight

NIPS-5 attendees:

We regret that, due to an oversight, 3 papers were
inadvertently excluded from the recent NIPS-5
volume.

These papers were:

Mark Plutowski, Garrison Cottrell and Halbert White: Learning
Mackey-Glass from 25 examples, Plus or Minus 2

Yehuda Salu: Classification of Multi-Spectral Pixels by the Binary
Diamond Neural Network

A. C. Tsoi, D.S.C. So and A. Sergejew: Classification of
Electroencephalograms using Artificial Neural Networks


We are writing this note to (1) acknowledge our error, (2) point out
where you can obtain copies of the authors' papers now, and (3)
inform you that they will appear, in their existing or an updated
form, in NIPS Vol. 6.

Morgan Kaufmann will shortly be sending a
bundle of the 3 formatted papers to all NIPS-5 attendees; these will be
marked as a NIPS-5 Addendum.  You should also be able to retrieve
an official copy from the NEUROPROSE archive.

Again, we apologize to the authors for the
oversight.

Stephen J. Hanson, General Chair
Jack Cowan, Program Chair
C. Lee Giles, Publications Chair


Return-Path: <ml-connectionists-request@Q.CS.CMU.EDU>
Received: from Q.CS.CMU.EDU by B.GP.CS.CMU.EDU id aa08610;
          11 Jun 93 19:38:57 EDT
Received: from Q.CS.CMU.EDU by q.cs.CMU.EDU id aa08660; 11 Jun 93 17:19:39 EDT
Received: from DST.BOLTZ.CS.CMU.EDU by Q.CS.CMU.EDU id aa08658;
          11 Jun 93 16:53:34 EDT
Received: from DST.BOLTZ.CS.CMU.EDU by DST.BOLTZ.CS.CMU.EDU id aa07876;
          11 Jun 93 16:52:23 EDT
Received: from EDRC.CMU.EDU by B.GP.CS.CMU.EDU id aa05352;
          11 Jun 93 13:45:41 EDT
Received: from bunny.gte.com by EDRC.CMU.EDU id aa21398; 11 Jun 93 13:45:21 EDT
Received: from wombat by bunny.gte.com (5.61/GTEL2.19)
	id AA08858; Fri, 11 Jun 93 13:44:08 -0400
Date: Fri, 11 Jun 93 13:44:08 -0400
Message-Id: <9306111744.AA08858@bunny.gte.com>
To: connectionists@cs.cmu.edu, rsutton@gte.com, swhitehead@gte.com, 
    klopfah%avlab.dnet@aa.wpafb.af.mil, barto@cs.umass.edu, awm@ai.mit.edu, 
    singh@cs.umass.edu, jordan@psyche.mit.edu, lonnie.chrisman@cs.cmu.edu, 
    michael.littman@cs.cmu.edu, sridhar@watson.ibm.com, 
    nilsson@cs.stanford.edu, Yee@cs.umass.edu, 
    bairdlc%avlab.dnet@aaunix.aa.wpafb.af.mil, t_miller@unhh.unh.edu, 
    lpk@cs.brown.edu, swhitehead@gte.com, read@helmholtz.sdsc.edu, 
    dayan@helmholtz.sdsc.edu, rjw@ccs.northeastern.edu, 
    ljl@learning.siemens.com, vijay@cs.umass.edu, marcos@dist.dist.unige.it, 
    wilson@smith.rowland.org, cisl218@gte.com, cisl217@gte.com, dbs0@gte.com, 
    lservi@gte.com, mweintraub@gte.com, jmg4@gte.com, amit@gte.com, 
    abonde@gte.com, igc0@gte.com, bhurwitz@gte.com, gduncan@gte.com
From: Rich Sutton <sutton@gte.com>
X-Sender: rich@bunny.gte.com
Subject: Reinforcement Learning Workshop - Call for Participation



                       LAST CALL FOR PARTICIPATION

           "REINFORCEMENT LEARNING: What We Know, What We Need"

   an Informal Workshop to follow ML93 (10th Int. Conf. on Machine Learning) 
           June 30 & July 1, University of Massachusetts, Amherst

Reinforcement learning is a simple way of framing the problem of an
autonomous agent learning and interacting with the world to achieve a goal.
This has been an active area of machine learning research for the last 5
years. The objective of this workshop is to present concisely the current
state of the art in reinforcement learning and to identify and highlight
critical open problems.

The intended audience is all learning researchers interested in reinforcement
learning. The first half of the workshop will be mainly tutorial while the
second half will define and explore open problems. The entire workshop will
last approximately one and three-quarter days. It is possible to register
for the workshop without registering for the conference, but attending the
conference is highly recommended: many new RL results will be presented at
the conference and will not be repeated in the workshop. Registration
information is given at the end of this message.

Program Committee: Rich Sutton (chair), Nils Nilsson, Leslie Kaelbling,
Satinder Singh, Sridhar Mahadevan, Andy Barto, Steve Whitehead

............................................................................

                           PROGRAM INFORMATION

The following draft program is divided into "sessions", each consisting of a
set of presentations on a single topic. The earlier sessions are more "What
we know" and the later sessions are more "What we Need", although some of
each will be covered in all sessions. Sessions last 60-120 minutes and are
separated by 30-minute breaks. Each session has an organizer and a series of
speakers, one of whom is likely to be the organizer herself. In most cases
the speakers are meant to cover a body of work, not just their own, as a
survey directed at identifying and explaining the key issues and open
problems. The organizer works with the speakers to ensure this (the organizer
also has primary responsibility for picking the speakers, and chairs the
session). 

*****************************************************************************
PRELIMINARY SCHEDULE:

June 30:

 9:00--10:30    Session 1: Defining Features of RL
10:30--11:00    Break
11:00--12:30    Session 2: RL and Dynamic Programming
12:30--2:00     Lunch
 2:00--3:30     Session 3: Theory: Stochastic Approximation and Convergence
 3:30--4:00     Break
 4:00--5:00     Session 4: Hidden State and Short-Term Memory

July 1:

 9:00--11:00    Session 5: Structural Generalization: Scaling RL to
                Large State Spaces
11:00--11:30    Break
11:30--12:30    Session 6: Hierarchy and Abstraction
12:30--1:30     Lunch
 1:30--2:30     Session 7: Strategies for Exploration
 2:30--3:30     Session 8: Relationships to Neuroscience and Evolution

*****************************************************************************
PRELIMINARY PROGRAM

---------------------------------------------------------------------------
Session 1: Defining Features of Reinforcement Learning
Organizer: Rich Sutton, rich@gte.com

"Welcome and Announcements" by Rich Sutton, GTE (10 minutes)
"History of RL" by Harry Klopf, WPAFB (25 minutes)
"Delayed Reward: TD Learning and TD-Gammon" by Rich Sutton, GTE (50 minutes)

The intent of the first two talks is to start getting across certain key
ideas about reinforcement learning: 1) RL is a problem, not a class of
algorithms, 2) the distinguishing features of the RL problem are
trial-and-error search and delayed reward. The third talk is a tutorial
presentation of temporal-difference learning, the basis of learning methods
for handling delayed reward. This talk will also present Gerry Tesauro's
TD-Gammon, a TD-learning system that learned to play backgammon at a
grandmaster level. (There is still an outside chance that Tesauro will be able
to attend the workshop and present TD-Gammon himself.)
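
For concreteness, the tabular TD(0) update at the heart of such systems
can be written in a few lines of C.  This is only an illustrative sketch
(the names and the lookup-table setting are invented here, not taken
from the talk):

  /* One TD(0) backup on a lookup-table value function V.
     s and s_next index states; alpha is the learning rate,
     gamma the discount factor. */
  void td0_update(double V[], int s, double reward, int s_next,
                  double alpha, double gamma)
  {
      double td_error = reward + gamma * V[s_next] - V[s];
      V[s] += alpha * td_error;
  }

TD(lambda) generalizes this by distributing the same error over
recently visited states via eligibility traces.
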
---------------------------------------------------------------------------
Session 2: RL and Dynamic Programming 
Organizer: Andy Barto, barto@cs.umass.edu

"Q-learning" by Chris Watkins, Morning Side Inc (30 minutes)
"RL and Planning" by Andrew Moore, MIT (30 minutes)
"Asynchronous Dynamic Programming" by Andy Barto, UMass (30 minutes)

These talks will cover the basic ideas of RL and its relationship to dynamic
programming and planning, including Markov Decision Tasks.
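
As a concrete point of reference, here is the one-step Q-learning
backup in C (an illustrative lookup-table sketch with arbitrary sizes;
it is not code from the speakers):

  #define NS 100   /* number of states  (illustrative) */
  #define NA 4     /* number of actions (illustrative) */

  /* One Q-learning backup: move Q(s,a) toward the sampled target
     r + gamma * max_b Q(s',b).  No transition model is needed. */
  void q_update(double Q[NS][NA], int s, int a, double reward,
                int s_next, double alpha, double gamma)
  {
      double best = Q[s_next][0];
      int b;
      for (b = 1; b < NA; b++)
          if (Q[s_next][b] > best)
              best = Q[s_next][b];
      Q[s][a] += alpha * (reward + gamma * best - Q[s][a]);
  }
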
---------------------------------------------------------------------------
Session 3: New Results in RL and Asynchronous DP
Organizer: Satinder Singh, singh@cs.umass.edu

"Introduction, Notation, and Theme" by Satinder P. Singh, UMass
"Stochastic Approximation: Convergence Results" by T Jaakkola & M Jordan, MIT
"Asychronous Policy Iteration" by Ron Williams, Northeastern
"Convergence Proof of Adaptive Asynchronous DP" by Vijaykumar Gullapalli, UMass
"Discussion of *some* Future Directions for Theoretical Work" by ?

This session consists of two parts. In the first part we present a new and
fairly complete theory of (asymptotic) convergence for reinforcement learning
(with lookup tables as function approximators). This theory explains RL
algorithms as replacing the full-backup operator of classical dynamic
programming algorithms by a random backup operator that is unbiased. We
present an extension to classical stochastic approximation theory (e.g.,
Dvoretzky's) to derive probability-one convergence proofs for Q-learning,
TD(0), and TD(lambda) that are different from, and perhaps simpler than,
previously available proofs. We will also use the stochastic approximation
framework to highlight the contribution made by reinforcement learning
algorithms such as TD, and Q-learning, to the entire class of iterative
methods for solving the Bellman equations associated with Markovian Decision
Tasks. 
          The second part deals with contributions by RL researchers to
asynchronous DP.  Williams will present a set of algorithms (and convergence
results) that are asynchronous at a finer grain than classical asynchronous
value iteration, but still use "full" backup operators. These algorithms are
related to the modified policy iteration algorithm of Puterman and Shin, as
well as to the ACE/ASE (actor-critic) architecture of Barto, Sutton and
Anderson. Subsequently, Gullapalli will present a proof of convergence for
"adaptive" asynchronous value iteration that shows that in order to ensure
convergence with probability one, one has to place constraints on how many
model-building steps must be performed between two consecutive updates
of the value function.
        Lastly, we will discuss some pressing theoretical questions
regarding the rate of convergence of reinforcement learning algorithms.
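
To make the full-backup/sample-backup contrast above concrete, here is
what a single full backup of asynchronous value iteration looks like
when a model is available (again an illustrative sketch, not the
speakers' code; Q-learning replaces the expectation below with one
sampled successor state):

  #define NS 100   /* number of states  (illustrative) */
  #define NA 4     /* number of actions (illustrative) */

  /* Full backup for state s under a known model: P[s][a][sp] are
     transition probabilities, R[s][a] expected immediate rewards. */
  double full_backup(int s, double V[NS], double P[NS][NA][NS],
                     double R[NS][NA], double gamma)
  {
      double best = -1e30;
      int a, sp;
      for (a = 0; a < NA; a++) {
          double q = R[s][a];
          for (sp = 0; sp < NS; sp++)
              q += gamma * P[s][a][sp] * V[sp];
          if (q > best)
              best = q;
      }
      return best;   /* caller assigns V[s] = full_backup(...) */
  }
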
---------------------------------------------------------------------------
Session 4: Hidden State and Short-Term Memory
Organizer: Lonnie Chrisman, lonnie.chrisman@cs.cmu.edu
Speakers: Lonnie Chrisman & Michael Littman, CMU

Many realistic agents cannot directly observe every relevant aspect of their
environment at every moment in time. Such hidden state causes problems for
many reinforcement learning algorithms, often causing temporal differencing
methods to become unstable and making policies that simply map sensory input
to action insufficient.
        
In this session we will examine the problems of hidden state and of learning
how to best organize short-term memory. I will review and compare existing
approaches such as those of Whitehead & Ballard, Chrisman, Lin & Mitchell,
McCallum, and Ring. I will also give a tutorial on the theories of Partially
Observable Markovian Decision Processes, Hidden Markov Models, and related
learning algorithms such as Baum-Welch/EM as they are relevant to
reinforcement learning.

Note: Andrew McCallum will present a paper on this topic as part of the
conference; that material will not be repeated in the workshop.
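
For readers new to POMDPs, the exact belief-state update underlying
much of this work is easy to state; a small illustrative C sketch
(mine, not the speakers', with invented sizes) follows.  A policy then
maps the belief b, rather than the raw observation, to actions:

  #define NS 10   /* hidden states (illustrative sizes) */
  #define NA 4    /* actions */
  #define NO 5    /* observations */

  /* Bayes update of belief b after taking action a and observing o.
     T[s][a][sp] = Pr(sp|s,a);  O[sp][a][o] = Pr(o|sp,a). */
  void belief_update(double b[NS], int a, int o,
                     double T[NS][NA][NS], double O[NS][NA][NO])
  {
      double bnew[NS], norm = 0.0;
      int s, sp;
      for (sp = 0; sp < NS; sp++) {
          double pred = 0.0;
          for (s = 0; s < NS; s++)
              pred += T[s][a][sp] * b[s];    /* predict next state */
          bnew[sp] = O[sp][a][o] * pred;     /* weight by evidence */
          norm += bnew[sp];
      }
      for (sp = 0; sp < NS; sp++)
          if (norm > 0.0)
              b[sp] = bnew[sp] / norm;       /* renormalize */
  }
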
---------------------------------------------------------------------------
Session 5: Structural Generalization: Scaling RL to Large State Spaces
Organizer: Sridhar Mahadevan, sridhar@watson.ibm.com

"Motivation and Introduction" by Sridhar Mahadevan, IBM
"Neural Nets" by Long-Ji Lin, Siemens
"CMAC" by Tom Miller, Univ. New Hampshire
"Kd-trees and CART" by Marcos Salganicoff, UPenn
"Learning Teleo-Reactive Trees" by Nils Nilsson, Stanford
"Function Approximation in RL: Issues and Approaches" by Richard Yee, UMass
"RL with Analog State and Action Vectors", Leemon Baird, WPAFB

RL is slow to converge in tasks with high-dimensional continuous state
spaces, particularly given sparse rewards. One fundamental issue in
scaling RL to such tasks is structural credit assignment, which deals
with inferring rewards in novel situations.  This problem can be
viewed as a supervised learning task, the goal being to learn a
function from instances of states, actions, and rewards. Of course,
the function cannot be stored exhaustively as a table, and the
challenge is to devise more compact storage methods.  In this session we
will discuss some of the different approaches to the structural
generalization problem.
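
As one concrete instance of such compact storage, here is Q-learning
with a linear function approximator over state features (an
illustrative sketch only; the session covers several richer schemes
such as neural nets, CMAC, and kd-trees):

  #define NF 20   /* state features (illustrative) */
  #define NA 4    /* actions */

  double w[NA][NF];   /* one weight vector per action: Q(x,a) = w[a].x */

  double q_value(double x[NF], int a)
  {
      double q = 0.0;
      int i;
      for (i = 0; i < NF; i++)
          q += w[a][i] * x[i];
      return q;
  }

  /* Gradient step toward a target such as r + gamma * max_b Q(x',b).
     For a linear approximator the gradient w.r.t. w[a] is just x. */
  void q_grad_update(double x[NF], int a, double target, double alpha)
  {
      double err = target - q_value(x, a);
      int i;
      for (i = 0; i < NF; i++)
          w[a][i] += alpha * err * x[i];
  }
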

Note: Steve Whitehead & Rich Sutton will present a paper on this topic as
part of the conference; that material will not be repeated in the workshop.
---------------------------------------------------------------------------
Session 6: Hierarchy and Abstraction 
Organizer: Leslie Kaelbling, lpk@cs.brown.edu
Speakers: To be determined

Too much of RL is concerned with low-level actions and low-level (single time
step) models. How can we model the world, and plan about actions, at a higher
level, or over longer time scales? How can we integrate models and actions at
different time scales and levels of abstraction? To address these questions,
several researchers have proposed models of hierarchical learning and
planning, e.g., Satinder Singh, Mark Ring, Chris Watkins, Long-ji Lin, Leslie
Kaelbling, and Peter Dayan & Geoff Hinton. The format for this session will
be a brief introduction to the problem by the session organizer followed by
short talks and discussion. Speakers have not yet been determined.

Note: Kaelbling will also speak on this topic as part of the conference; that
material will not be repeated in the workshop.
-----------------------------------------------------------------------------
Session 7: Strategies for Exploration
Organizer: Steve Whitehead, swhitehead@gte.com

Exploration is essential to reinforcement learning, since it is through
exploration that an agent learns about its environment. Naive exploration
can easily result in intractably slow learning. On the other hand,
exploration strategies that are carefully structured or exploit external
sources of bias can do much better.

A variety of approaches to exploration have been devised over the last few
years (e.g., Kaelbling, Sutton, Thrun, Koenig, Lin, Clouse, Whitehead). The
goal of this session is to review these techniques, understand their
similarities and differences, understand when and why they work, determine
their impact on learning time, and to the extent possible organize them
taxonomically.
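
Two of the simplest undirected strategies in the literature, sketched
in C for concreteness (illustrative code, not any speaker's):

  #include <stdlib.h>
  #include <math.h>

  #define NA 4   /* number of actions (illustrative) */

  static double unit_rand(void)   /* uniform in [0,1) */
  {
      return (double) rand() / ((double) RAND_MAX + 1.0);
  }

  /* Epsilon-greedy: act at random with probability epsilon,
     otherwise take the action with the highest Q estimate. */
  int epsilon_greedy(double q[NA], double epsilon)
  {
      int a, best = 0;
      if (unit_rand() < epsilon)
          return rand() % NA;
      for (a = 1; a < NA; a++)
          if (q[a] > q[best])
              best = a;
      return best;
  }

  /* Boltzmann: choose action a with probability proportional to
     exp(q[a]/T); high temperature T explores, low T exploits. */
  int boltzmann(double q[NA], double temperature)
  {
      double p[NA], sum = 0.0, r;
      int a;
      for (a = 0; a < NA; a++) {
          p[a] = exp(q[a] / temperature);
          sum += p[a];
      }
      r = unit_rand() * sum;
      for (a = 0; a < NA; a++) {
          r -= p[a];
          if (r < 0.0)
              return a;
      }
      return NA - 1;
  }
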

The session will consist of a short introduction by the session organizer
followed by an open discussion. The discussion will be informal but aimed at
issues raised during the monologue. An informal panel of researchers will be
on hand to participate in the discussion and answer questions about their
work in this area.
-----------------------------------------------------------------------------
Session 8: Relationships to Neuroscience and Evolution
Organizer: Rich Sutton, rich@gte.com

We close the workshop with a reminder of RL's links to neuroscience and to
Genetic Algorithms / Classifier Systems:

"RL in the Brain: Developing Connections Through Prediction" by R Montague, Salk
"RL and Genetic Classifier Systems" by Stewart Wilson, Roland Institute

Abstract of first talk:
Both vertebrates and invertebrates possess diffusely projecting
neuromodulatory systems. In the vertebrate, it is known that these systems
are involved in the development of cerebral cortical structures and can
deliver reward and/or salience signals to the cerebral cortex and other
structures to influence learning in the adult. Recent data in primates
suggest that this latter influence obtains because changes in firing in
nuclei that deliver the neuromodulators reflect the difference between the
predicted and actual reward, i.e., a prediction error. This relationship is
qualitatively similar to that predicted by Sutton and Barto's classical
conditioning theory. These systems innervate large expanses of cortical and
subcortical turf through extensive axonal projections that originate in
midbrain and basal forebrain nuclei and deliver such compounds as dopamine,
serotonin, norepinephrine, and acetylcholine to their targets. The small
number of neurons comprising these subcortical nuclei relative to the extent
of the territory their axons innervate suggests that the nuclei are reporting
scalar signals to their target structures. These facts are synthesized into a
single framework which relates the development of brain structures and
conditioning in adult brains. We postulate a modification to Hebbian accounts
of self-organization: Hebbian learning is conditional on an incorrect
prediction of future delivered reinforcement from a diffuse neuromodulatory
system. The reinforcement signal is derived both from externally driven
contingencies such as proprioception from eye movements as well as from
internal pathways leading from cortical areas to subcortical nuclei. We
suggest a specific model for how such predictions are made in the vertebrate
and invertebrate brain. We illustrate the framework with examples ranging
from the development of sensory and sensory-motor maps to foraging behavior
in bumble-bees.
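
The postulated learning rule reduces to a Hebbian update gated by the
broadcast scalar prediction error; schematically, in C (an illustrative
reading of the abstract, not the authors' code):

  #define NIN 50   /* presynaptic inputs (illustrative) */

  /* Hebbian correlation x[i]*y, gated by the scalar prediction error
     delta delivered by the neuromodulatory system: when predictions
     are correct (delta near 0), essentially no learning occurs. */
  void gated_hebb(double w[NIN], double x[NIN], double y,
                  double delta, double eta)
  {
      int i;
      for (i = 0; i < NIN; i++)
          w[i] += eta * delta * x[i] * y;
  }
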

******************************************************************************
GENERAL INFO ON REGISTERING FOR ML93 AND WORKSHOPS:


        Tenth International Conference on Machine Learning (ML93)
        ---------------------------------------------------------

The conference will be held at the University of Massachusetts in Amherst,
Massachusetts, from June 27 (Sunday) through June 29 (Tuesday).  The
conference will feature four invited talks and forty-six paper presentations.
The invited speakers are Leo Breiman (U.C. Berkeley, Statistics), Micki Chi
(U. Pittsburgh, Psychology), Michael Lloyd-Hart (U. Arizona, Adaptive Optics
Group of Steward Observatory), and Pat Langley (Siemens, Machine Learning). 
Following the conference, there will be three informal workshops:

  Workshop #A:
    Reinforcement Learning: What We Know, What We Need (June 30 - July 1)
    Organizers: R. Sutton (chair), N. Nilsson, L. Kaelbling, S. Singh,
                S. Mahadevan, A. Barto, S. Whitehead

  Workshop #B:
    Fielded Applications of Machine Learning (June 30 - July 1)
    Organizers: P. Langley, Y. Kodratoff

  Workshop #C:
    Knowledge Compilation and Speedup Learning (June 30)
    Organizers: D. Subramanian, D. Fisher, P. Tadepalli

Options and fees:

Conference registration fee                     $140    regular
                                                $110    student
Breakfast/lunch meal plan (June 27-29)           $33
Dormitory housing (nights of June 26-28)         $63    single occupancy
                                                 $51    double occupancy
Workshop A (June 30-July 1)                      $40
Workshop B (June 30-July 1)                      $40
Breakfast/lunch meal plan (June 30-July 1)       $22
Dormitory housing (nights of June 29-30)         $42    single occupancy
                                                 $34    double occupancy
Workshop C (June 30)                             $20
Breakfast/lunch meal plan (June 30)              $11
Dormitory housing (night of June 29)             $21    single occupancy
                                                 $17    double occupancy
Administrative fee (required)                    $10
Late fee (received after May 10)                 $30

To obtain a FAX of the registration form, send an email request to Paul Utgoff
at ml93@cs.umass.edu or utgoff@cs.umass.edu.


Return-Path: <ml-connectionists-request@Q.CS.CMU.EDU>
Received: from Q.CS.CMU.EDU by B.GP.CS.CMU.EDU id ac09463;
          11 Jun 93 23:50:33 EDT
Received: from Q.CS.CMU.EDU by q.cs.CMU.EDU id aa08890; 11 Jun 93 19:12:57 EDT
Received: from DST.BOLTZ.CS.CMU.EDU by Q.CS.CMU.EDU id aa08888;
          11 Jun 93 18:55:50 EDT
Received: from DST.BOLTZ.CS.CMU.EDU by DST.BOLTZ.CS.CMU.EDU id aa08275;
          11 Jun 93 18:55:10 EDT
Received: from MAILBOX.SRV.CS.CMU.EDU by B.GP.CS.CMU.EDU id aa08352;
          11 Jun 93 18:53:49 EDT
Received: from odin.ucsd.edu by MAILBOX.SRV.CS.CMU.EDU id aa22443;
          11 Jun 93 18:53:40 EDT
Received: by odin.ucsd.edu; id AA06151
	sendmail 5.67/UCSDPSEUDO.4-CS
	Fri, 11 Jun 93 15:53:33 -0700 for Connectionists@MAILBOX.SRV.CS.CMU.EDU
Date: Fri, 11 Jun 93 15:53:33 -0700
From: Gary Cottrell <gary@cs.ucsd.edu>
Message-Id: <9306112253.AA06151@odin.ucsd.edu>
To: Connectionists@MAILBOX.SRV.CS.CMU.EDU, jose@learning.siemens.com
Subject: Re:  NIPS5 Oversight

FYI, to retrieve

Plutowski, Cottrell and White:
Learning Mackey-Glass from 25 examples, Plus or Minus 2

The file on neuroprose is:

pluto.nips92.ps.Z

A script file is attached at the end of this note.

Gary Cottrell 619-534-6640 Reception: 619-534-6005 FAX: 619-534-7029
Computer Science and Engineering 0114
University of California San Diego 
La Jolla, Ca. 92093
gary@cs.ucsd.edu (INTERNET)
gcottrell@ucsd.edu (BITNET, almost anything)
..!uunet!ucsd!gcottrell (UUCP)

RE:
From: "Steve Hanson" <jose@learning.siemens.com>
To: Connectionists@MAILBOX.SRV.CS.CMU.EDU
Subject: NIPS5 Oversight



#!/bin/sh
########################################################################
# usage: ohio <FILENAME> <PRINTERFLAGS>
#
# A Script to get, uncompress, and print postscript
# files from the neuroprose directory on archive.cis.ohio-state.edu
#
# By Tony Plate & Jordan Pollack
########################################################################

if [ "$1" = "" ] ; then
  echo usage: $0 "<filename> <printerflags>"
  echo
  echo The filename must be exactly as it is in the archive, if your
  echo file is not found the first time, look in the file \"ftp.log\" 
  echo for a list of files in the archive.
  echo
  echo The printerflags are used for the optional lpr command that
  echo is executed after the file is retrieved. A common use would
  echo be to use -P to specify a particular postscript printer.
  exit
fi

########################################################################
#  set up script for ftp
########################################################################
cat > .ftp.script <<END
user anonymous neuron
binary
cd pub/neuroprose
ls
get $1 /tmp/$1
quit
END

########################################################################
# Run and then delete the ftp script, logging output to ftp.log
########################################################################
echo Trying ftp, please wait, could take several minutes ...
ftp -n archive.cis.ohio-state.edu < .ftp.script > ftp.log
rm -f .ftp.script
if [ ! -f /tmp/$1 ] ; then
  echo Failed to get file - please inspect ftp.log for list of available files
  exit
fi

########################################################################
# Uncompress if necessary
########################################################################
echo Retrieved /tmp/$1
case $1 in
  *.Z)
  echo Uncompressing /tmp/$1
  uncompress /tmp/$1
  FILE=`basename $1 .Z`
  ;;
  *)
  FILE=$1
esac

########################################################################
#  query to print file
########################################################################
echo -n "Send /tmp/$FILE to 'lpr $2' (y or n)? "
read x
case $x in
  [yY]*)
  echo Printing /tmp/$FILE
  lpr $2 /tmp/$FILE
  ;;
esac
echo File left in /tmp/$FILE



Return-Path: <ml-connectionists-request@Q.CS.CMU.EDU>
Received: from Q.CS.CMU.EDU by B.GP.CS.CMU.EDU id aa09574;
          11 Jun 93 23:56:05 EDT
Received: from Q.CS.CMU.EDU by q.cs.CMU.EDU id aa08772; 11 Jun 93 17:43:17 EDT
Received: from DST.BOLTZ.CS.CMU.EDU by Q.CS.CMU.EDU id aa08662;
          11 Jun 93 16:54:55 EDT
Received: from DST.BOLTZ.CS.CMU.EDU by DST.BOLTZ.CS.CMU.EDU id aa07884;
          11 Jun 93 16:53:05 EDT
Received: from MAILBOX.SRV.CS.CMU.EDU by B.GP.CS.CMU.EDU id aa05528;
          11 Jun 93 14:03:05 EDT
Received: from wizard-gw.qualcomm.com by MAILBOX.SRV.CS.CMU.EDU id aa22054;
          11 Jun 93 14:02:25 EDT
Received: from harvey.qualcomm.com by qualcomm.com; id AA26489
	sendmail 5.65/QC-main-2.1 via SMTP
	Fri, 11 Jun 93 11:01:47 -0700 for Connectionists@MAILBOX.SRV.CS.CMU.EDU
Received: from gjacobs.qualcomm.com by harvey; id AA17745
	sendmail 5.67/QC-subsidiary-2.1 via SMTP
	Fri, 11 Jun 93 11:01:43 -0700 for nl-kr@cayuga.cs.rochester.edu
Message-Id: <9306111801.AA17745@harvey>
X-Sender: gjacobs@wizard.qualcomm.com
Date: Fri, 11 Jun 1993 11:00:40 -0700
To: power@globe.edrc.cmu.edu, connectionists@cs.cmu.edu, 
    NAFIPS-L%GSUVM1.BITNET@uga.cc.uga.edu, ee_faculty@ee.washington.edu, 
    eefaculty@seattleu.edu, TheoryNet@ibm.com, 
    Connectionists@MAILBOX.SRV.CS.CMU.EDU, epsynet@uhupvm1.bitnet, 
    nl-kr@cayuga.cs.rochester.edu
From: Gary Jacobs <gjacobs@qualcomm.com>
Subject: WCCI '94 Announcement and Call for Papers
X-Mailer: <PC Eudora Version 1.1a10>
X-Attachments: C:\EUDORA\WCCI.TXT;



Gary Jacobs
gjacobs@qualcomm.com
(619)597-5029 voice
(619)452-9096 fax



HARD FACT IN A WORLD OF FANTASY

A world of sheer fantasy awaits your arrival at the IEEE World Congress
on Computational Intelligence next year; our host is Walt Disney World
in Orlando, Florida.  Simultaneous Neural Network, Fuzzy Logic and
Evolutionary Programming conferences will provide an unprecedented
opportunity for technical development while the charms of the nearby
Magic Kingdom and Epcot Center attempt to excite your fancies.

The role imagination has played in the development of Computational
Intelligence techniques is well known; before they became "innovative,"
the various CI technologies were dismissed as "fantasies" of brilliant
minds.

Now these tools are real; perhaps it's only appropriate that they should
be further explored and their creators honored in a world of the
imagination, a world where dreams come true.

Share your facts at Disney World; share your imagination.  Come to the
IEEE World Congress on Computational Intelligence.

It's as new as tomorrow.

___________________________________________________________________________


                        ***CALL FOR PAPERS***
         ___________________________________________________
          IEEE WORLD CONGRESS ON COMPUTATIONAL INTELLIGENCE
         ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
         * IEEE International Conference on Neural Networks *
                          * FUZZ/IEEE '94 *
     * IEEE International Symposium on Evolutionary Computation *

                        June 26 - July 2, 1994
      Walt Disney World Dolphin Hotel, Lake Buena Vista, Florida

            Sponsored by the IEEE Neural Networks Council
---------------------------------------------------------------------

           IEEE INTERNATIONAL CONFERENCE ON NEURAL NETWORKS

                   Steven K. Rogers, General Chair
                         rogers@afit.af.mil
Topics:
Applications, architectures, artificially intelligent neural networks,
artificial life, associative memory, computational intelligence,
cognitive science, embedology, filtering, fuzzy neural systems, hybrid
systems, image processing, implementations, intelligent control,
learning and memory, machine vision, motion analysis, neurobiology,
neurocognition, neurodynamics, optimization, pattern recognition,
prediction, robotics, sensation and perception, sensorimotor systems,
speech, hearing and language, system identification, supervised and
unsupervised learning, tactile sensors, and time series analysis.
             -------------------------------------------

                            FUZZ/IEEE '94

                  Piero P. Bonissone, General Chair
                       bonissone@crd.ge.ge.com
Topics:
Basic principles and foundations of fuzzy logic, relations between
fuzzy logic and other approximate reasoning methods, qualitative and
approximate-reasoning modeling, hardware implementations of fuzzy-
logic algorithms, design, analysis, and synthesis of fuzzy-logic
controllers, learning and acquisition of approximate models, relations
between fuzzy logic and neural networks, integration of fuzzy logic
and neural networks, integration of fuzzy logic and evolutionary
computing, and applications.
             -------------------------------------------

             IEEE CONFERENCE ON EVOLUTIONARY COMPUTATION

                 Zbigniew Michalewicz, General Chair
                        zbyszek@mosaic.uncc.edu
Topics:
Theory of evolutionary computation, evolutionary computation
applications, efficiency and robustness comparisons with other direct
search algorithms, parallel computer applications, new ideas
incorporating further evolutionary principles, artificial life,
evolutionary algorithms for computational intelligence, comparisons
between different variants of evolutionary algorithms, machine
learning applications, evolutionary computation for neural networks,
and fuzzy logic in evolutionary algorithms.

---------------------------------------------------------------------

               INSTRUCTIONS FOR ALL THREE CONFERENCES

Papers must be received by December 10, 1993.  Papers will be reviewed
by senior researchers in the field, and all authors will be informed
of the decisions at the end of the review process.  All accepted papers
will be published in the Conference Proceedings.  Six copies (one
original and five copies) of the paper must be submitted.  The original
must be camera-ready, on 8.5x11-inch white paper, in one-column format
in Times or a similar font, 10 points or larger, with one-inch margins
on all four sides.  Do not fold or staple the original camera-ready
copy.  Four pages are encouraged; the paper must not exceed six pages
including figures, tables, and references, and should be written in
English.  Centered at the top of the first page should be the complete
title, author name(s), affiliation(s), and mailing address(es).  The
accompanying letter must include: 1) the full title of the paper,
2) the corresponding author's name, address, and telephone and fax
numbers, 3) first and second choices of technical session,
4) preference for oral or poster presentation, and 5) the presenter's
name, address, and telephone and fax numbers.  Mail papers to (and/or
obtain further information from): World Congress on Computational
Intelligence, Meeting Management, 5665 Oberlin Drive, #110, San Diego,
California 92121, USA (email: 70750.345@compuserve.com, telephone:
619-453-6222).




Return-Path: <ml-connectionists-request@Q.CS.CMU.EDU>
Received: from Q.CS.CMU.EDU by B.GP.CS.CMU.EDU id aa13977;
          14 Jun 93 18:18:13 EDT
Received: from Q.CS.CMU.EDU by Q.CS.CMU.EDU id aa15537; 14 Jun 93 14:19:42 EDT
Received: from DST.BOLTZ.CS.CMU.EDU by Q.CS.CMU.EDU id aa15535;
          14 Jun 93 13:52:47 EDT
Received: from DST.BOLTZ.CS.CMU.EDU by DST.BOLTZ.CS.CMU.EDU id aa17569;
          14 Jun 93 13:51:50 EDT
Received: from CS.CMU.EDU by B.GP.CS.CMU.EDU id aa09348; 14 Jun 93 11:07:06 EDT
Received: from moose.cs.indiana.edu by CS.CMU.EDU id aa09460;
          14 Jun 93 11:06:50 EDT
Received: by moose.cs.indiana.edu
	(5.65c/9.4jsm) id AA25920; Mon, 14 Jun 1993 10:06:46 -0500
Date: Mon, 14 Jun 1993 10:06:46 -0500
From: Michael Gasser <gasser@cs.indiana.edu>
To: connectionists@cs.cmu.edu
Subject: TR on language acquisition

FTP-host: cs.indiana.edu (129.79.254.191)
FTP-filename: /pub/techreports/TR384.ps.Z

The following paper is available in compressed postscript form by
anonymous ftp from the Indiana University Computer Science Department
ftp archive (see above).  The paper is 60 pages long.  Hardcopies
won't be available till September, I'm afraid.

Comments welcome.

Michael Gasser
gasser@cs.indiana.edu

=================================================================
		       Learning Words in Time:
	       Towards a Modular Connectionist Account
	      of the Acquisition of Receptive Morphology

			    Michael Gasser
	     Computer Science and Linguistics Departments
			  Indiana University

   To have learned the morphology of a natural language is to have the
capacity both to recognize and to produce words consisting of novel
combinations of familiar morphemes.  Most recent work on the
acquisition of morphology takes the perspective of production, but it
is receptive morphology which comes first in the child.  This paper
presents a connectionist model of the acquisition of the capacity to
recognize morphologically complex words.  The model takes sequences of
phonetic segments as inputs and maps them onto output units
representing the meanings of lexical and grammatical morphemes.  It
consists of a simple recurrent network with separate hidden-layer
modules for the tasks of recognizing the root and the grammatical
morphemes of the input word.  Experiments with artificial language
stimuli demonstrate that the model generalizes to novel words for
morphological rules of all but one of the major types found in natural
languages and that a version of the network with unassigned
hidden-layer modules can learn to assign them to the output
recognition tasks in an efficient manner.  I also argue that for rules
involving reduplication, that is, the copying of portions of a root,
the network requires separate recurrent subnetworks for sequences of
larger units such as syllables.  The network can learn to develop its
own syllable representations which not only support the recognition of
reduplication but also provide the basis for learning to produce, as
well as recognize, morphologically complex words.  The model makes
many detailed predictions about the learning difficulty of particular
morphological rules.
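
To make the architecture concrete, one time step of a simple recurrent
network with separate hidden-layer modules might look roughly as
follows in C (a loose illustrative sketch based only on the abstract;
sizes and names are invented, and the output layers and training
procedure are omitted):

  #include <math.h>

  #define NIN  12   /* phonetic-segment features per time step */
  #define NHID 10   /* units per hidden module */

  typedef struct {              /* one hidden module with its own context */
      double Wih[NHID][NIN];    /* input -> hidden weights */
      double Whh[NHID][NHID];   /* context -> hidden weights */
      double hidden[NHID];
      double context[NHID];
  } Module;

  /* One Elman-style step: hidden = sigmoid(Wih*in + Whh*context),
     then copy hidden back into context for the next segment. */
  void module_step(Module *m, double in[NIN])
  {
      int i, j;
      for (i = 0; i < NHID; i++) {
          double net = 0.0;
          for (j = 0; j < NIN; j++)  net += m->Wih[i][j] * in[j];
          for (j = 0; j < NHID; j++) net += m->Whh[i][j] * m->context[j];
          m->hidden[i] = 1.0 / (1.0 + exp(-net));
      }
      for (i = 0; i < NHID; i++)
          m->context[i] = m->hidden[i];
  }

  /* Two such modules run in parallel on the same input sequence, one
     feeding root-meaning outputs, the other grammatical-morpheme
     outputs. */
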



