From: Ian Jackson Date: Wed, 25 Nov 2009 18:15:09 +0000 (+0000) Subject: Commit 2.4.5-5 as unpacked X-Git-Tag: orig.unpacked X-Git-Url: http://www.chiark.greenend.org.uk/ucgi/~ian/git?p=inn-innduct.git;a=commitdiff_plain;h=d5b3cbfbd8f26b8b77ce3ce100a9c13c5a71c8f3 Commit 2.4.5-5 as unpacked --- d5b3cbfbd8f26b8b77ce3ce100a9c13c5a71c8f3 diff --git a/CONTRIBUTORS b/CONTRIBUTORS new file mode 100644 index 0000000..504329f --- /dev/null +++ b/CONTRIBUTORS @@ -0,0 +1,261 @@ +The following is a list of the people (in roughly chronological order) +who've helped out. If anyone's name has been left out (probably), or if +something has been incorrectly attributed to you (ditto), please let us +know. + +Rich Salz: + Designed and wrote most of it. + +Bob Halley: + Did the TCL extension. + +Christophe Wolfhugel: + Did the Perl extension and provided several other fixes. + +Doug Needham: + Made nnrpd spool if innd is unavailable. Made nnrpd handle the + LIST SUBSCRIPTIONS command. Added the rebuilding of control + connections to innd (SIGUSR1). Got inews to ask the nntp peer for + moderator info instead of digging it out of a local file. + +David Lawrence: + Did the hooks for PGP verificiation of control messages, added + actived support for syncing against an active file obtained via + ftp. + +John Stapleton: + Wrote the poison newsgroup code ('@') for newsfeeds(5). Wrote the + too-many-connects support ('-X -H -T' flags to innd). + +Landon Curt Noll: + Wrote or co-wrote actsync, nntpsend, shrinkfile, innstat, + news.daily, tally.control and various man pages. He also was the + person originally behind the site directory + configuration/installation process. + +John Levine: + Wrote the '-e' support for expire (expire on shortest time). + +Matthias Urlichs: + Made rnews recognise gzip compression. Made newsfeeds(5) take the + 'Wp' flag. + +Stefan Petri: + Did the original XBATCH support + +Russel Street: + Did more XBATCH support. + +Alan Barrett: + Did the work-limiter in the select loop to stop streaming from + killing performance. + +Greg Patten: + Wrote the perl innlog. + +Clayton O'Neill: + Wrote the articles storage API and implemented the timehash + and regular storage mechanisms with it. He made significant + modifications to dbz. Integrating innfeed, adding Xref slaving, + the history cache, the WIP rewrite and various speedups were + also his doing. Provided the tradindexed overview mechanism. + Implemented the O flag in newsfeeds. Did a bunch of early work on + the CVS repository, reorganization of the code, and committing + patches from others. + +Vincent Archer: + Wrote the initial autoconf scripts. + +Forrest J. Cavalier III: + Provided a lot of bug fixes to 1.5.2. He extended the autoconf + setup a lot to work with version 2.0, and has provided a lot of + valuable design input and testing. + +Scott Fritchie: + Wrote the CNFS storage back end. + +Fabien Tassin: + Wrote the innreport package. Implemented the new incoming.conf + configuration file. Added support for nested profile timers. + +Jeremy Nixon: + Wrote the initial patch for Perl filtering of message IDs on IHAVE + or CHECK and other patches related to the filtering code. + +Karl Kleinpaste: + Wrote the experimental code for automatically generating keywords + from incoming articles and putting those keywords in the overview + for the use of readers. + +Dave Hayes: + Along with some bugfixes, Dave wrote the posting-backoff code for + nnrpd and the patches to the perl hooks to make the headers + modifiable. 
+ +Joe Greco: + Wrote the code for measuring the timing of various parts of innd + and the original actived code. + +Sang-yong Suh: + Provided the fuzzy offset technique to dbz. + +Katsuhiro Kondou: + Provided unified overview, the buffindexed overview method, trash + storage method, spool translation method, traditional expire + policy for articles stored through storage API and expireindex, as + well as hundreds of fixes to clean up defects as changes were + made. Did a large amount of man page documentation and clean up. + Has also been a major force in the CVS pool maintenace. + +Russell Vincent: + Expanded inn.conf to make many of the old compile time options + into run time variables. Numerous bug fixes, small feature + enhancements and man updates. + +Darrell Fuhriman: + Provided various bug fixes and contributed to the pre-SM CNFS + development. + +Steve Carrie: + Modified nnrpd to allow detailed client tracking, added the -R + flag to nnrpd. + +Ed Mooring: + Wrote the first Perl filter callbacks into INN. + +Aidan Cully: + Provided the patches to support the new readers.conf file, and + wrote the initial user authenticators and resolvers for the + readers.conf. Provided the patches to support the new + storage.conf format. Added the option to store articles based on + the Expires header. Also added the '@' article exclusion code to + incoming.conf. + +Andrew Gierth: + Contributed improvements to the nnrpd Perl filtering support to + give access to message bodies and support the DROP and SPOOL + keywords in filter returns. + +Russ Allbery: + Has done large amounts of clean-up on various pieces of the system + (especially the documentation and build system), and has helped + with the CVS pool maintenance. Improved the speed and portability + of the Perl filter. Rewrote the tradindexed overview method for + additional robustness. Has done extensive work on libinn, + breaking out common code from other parts of INN. Lots of other + fixes to various parts of INN. + +Kai Henningsen: + Implemented the C and U flags in newsfeeds. + +Julio Sanchez: + Wrote the initial libtool support for INN. + +Igor Timkin: + Added min-queue-connection support to innfeed, added outgoing + volume logging and reporting, and provided a variety of bug + fixes. + +Heath Kehoe: + Various portability and bug fixes, wrote the ovdb overview + mechanism that uses Berkeley DB. + +Richard Todd: + Implemented the timecaf and tradspool storage mechanisms, as well + as many bug fixes and other contributions. + +Brian Kantor: + Wrote the news2mail gateway. + +Ilya Etingof: + Added Python authentication support for nnrpd. + +Kenichi OKADA: + Added preliminary SSL and SASL support for nnrpd. + +Olaf Titz: + Implemented MODE CANCEL support, as well as other patches and bug + fixes. + +Sven Paulus: + Wrote the support for variables in newsfeeds, contributed various + other patches and bug fixes. + +Krischan Jodies: + Wrote the SMB authenticator. + +Alex Kiernan: + Wrote the history API, generalized the timer code in innd and + innfeed into a generic timer library, reworked the NEWNEWS code + and added a history cache, and contributed various other bug fixes. + +Marco d'Itri: + Wrote gpgverify and overhauled controlchan and its modules. Added + IPv6 support to innd and inndstart. Contributed a rewritten + send-uucp. Has also contributed a variety of bug fixes and helped + with testing. + +Jeffrey M. 
Vinocur: + Broke parts of the interface with nnrpd for authentication programs + into a separate library, added various features to readers.conf, + and wrote various other fixes and feature improvements, + particularly to nnrpd. + +Erik Klavon: + Significantly reworked nnrpd Perl and Python hooks to be more useful + in combination with the readers.conf mechanism. + +Nathan Lutchansky: + Added IPv6 support to innfeed, nnrpd, and supporting programs. + +Also: + +Dave Barr: + Kept INN alive after Rich Salz didn't have the time any more but + before the ISC took over. He released 4 unofficial versions that + provided a good boost to what the ISC started with. Minor work + on 2.0, mostly with example files and minor code tweaks. + +James Brister: + The chief maintainer of INN from when the ISC took over + maintenance through the 2.2 release, James is also the original + author of innfeed and has made fixes, improvements, and feature + additions all over the code. + +Marc Fournier: + Provided various bug fixes and did a lot of work integrating other + peoples patches and looking after the CVS pool. Helped + significantly with the conversion to autoconf. Added the ability + to set connection limits on a per-host basis. + +Joshua M. Thompson + Wrote the original INSTALL documentation. + +The following people helped above and beyond the call of duty with testing +(provided patches, bug reports, suggestions, documentation improvements, +and lobbying): + +Paul Vixie, Robert Elz, Evan Champion, Robert Keller, Barry Bouwsma, +markd@mira.net.au, Ollivier Robert, Kevin Jameson, Heiko W. Rupp, +Fletcher Mattox, Matus Uhlar, Gabor Kiss, Matthias Scheler, +Richard Michael Todd, Trevor Riley, Alex Bligh, J. Porter Clark, +Alan Brown, Bert Hyman, Petter Nilsen, Gary E. Miller, Kim Culhan, +Marc Baudoin, Neal Becker, Bjorn Knutsson, Stephen Marquard, +Frederick Korz, Benedict Lofstedt, Dan Ellis, Joe Ramey, +Odd Einar Aurbakken, Jon Lewis, Dan Riley, Peter Eriksson, Ken Lalonde, +Koichi Mouri, J. Richard Sladkey, Trine Krogstad, Holger Burbach, +Per Hedeland, Larry Rosenman, Andrew Burgess, Michael Brunnbauer, +Mohan Kokal, Robert R. Collier, Mark Hittinger, Miquel van Smoorenburg, +Boyd Lynn Gerber, Yury B. Razbegin, Joe St. Sauver, Heiko Schlichting, +John P. Speno, Scott Gifford, Steve Parr, Robert Kiessling, +Francis Swasey, Paul Tomblin, Florian La Roche, Curt Welch, +Thomas Mike Michlmayr, KIZU Takashi, Michael Hall, Jeff King, +Edward S. Marshall, Michael Schroeder, George Lindholm, Don Lewis, +Christopher Masto, Hiroaki Sengoku, Yury July, Yar Tikhiy, Kees Bakker, +Peter da Silva, Matt McLeod, Ed Korthof, Jan Rychter, Winfried Magerl, +Andreas Lamrecht, Duane Currie, Ian Dickinson, Bettina Fink, +Jochen Erwied, Rebecca Ore, Felicia Neff, Antonio Querubin, Bear Giles, +Christopher P. Lindsey, Winfried Szukalski, Edvard Tuinder, +Frank McConnell, Ilya Kovalenko, Steve Youngs, Jacek Konieczny, +Ilya Voronin, Sergey Babitch, WATANABE Katsuhiro, Chris Caputo, +Thomas Parmelan diff --git a/ChangeLog b/ChangeLog new file mode 100644 index 0000000..397156e --- /dev/null +++ b/ChangeLog @@ -0,0 +1,238 @@ +2008-06-29 iulius + + * lib/perl.c: Use snprintf instead of asprintf. + + * doc/hook-python, doc/pod/hook-python.pod: Use initial capital + letters for head titles. + + * NEWS, doc/pod/news.pod, lib/perl.c: Fixed a hang in Perl hooks on + (at least) HP/PA since Perl 5.10. On such architectures, + pthread_mutex_lock() hangs inside perl_parse() if + PERL_SYS_INIT3() hasn't been called. 
+ + Also rewrite "do" and "eval" calls to use perl_eval_pv(). + +2008-06-25 iulius + + * samples/innreport.conf.in: For two sections in innreport.conf + there is a mismatch between sort function and sorted hash. + + Thanks to Alexander Bartolich for this patch. + +2008-06-24 iulius + + * NEWS, doc/pod/news.pod, scripts/innreport_inn.pm: Fix another + long-standing bug in innreport which prevented it from correctly + reporting innfeed log messages. + + * scripts/innreport_inn.pm: Suppress a few other nnrpd and + controlchan notices in innreport. + +2008-06-23 iulius + + * NEWS, doc/pod/news.pod: Add changelog for innreport. + + * NEWS, doc/pod/news.pod: Changelog for INN 2.4.5 :-) + + * doc/hook-python: Update the auto-generated documentation for INN + 2.4.5. + + * scripts/innreport_inn.pm: Fix a long-standing bug in innreport + which prevented it from correctly reporting nnrpd log messages. + + * scripts/innreport_inn.pm: Suppress a few warnings in innreport + (especially from Python hooks and nnrpd). Also backport some + other improvements made in TRUNK. + + * site, site/.cvsignore, site/Makefile: Install nnrpd.py which + previously was not. + + * MANIFEST, samples/nnrpd_access.py, samples/nnrpd_auth.py, + samples/nnrpd_dynamic.py, site, site/.cvsignore, site/Makefile: + Update the Python nnrpd filter. New samples for access and + dynamic hooks. + +2008-06-22 iulius + + * samples/filter_innd.py: Update the Python innd filter. + + * doc/pod/hook-python.pod: Typo (canceled -> cancelled). + + * samples/nnrpd_access_wrapper.py, samples/nnrpd_auth_wrapper.py, + samples/nnrpd_dynamic_wrapper.py: Update old Python wrappers. + + * samples/INN.py, samples/nnrpd.py: Update stub Python scripts. Fix + a compilation problem with INN.py (undefined variable) and add + missing methods. + + * doc/hook-python, doc/man/readers.conf.5, doc/pod/hook-python.pod, + doc/pod/readers.conf.pod: Update POD documentation for Python + hooks. It is a complete proof-reading. + + * nnrpd/python.c: No need to check the existence of methods not + used by the hooked script. + + * innd/python.c, nnrpd/python.c: Fix an issue with Python exception + handling. + + * nnrpd/python.c: Fix typos. + + * nnrpd/python.c: Fix a segfault when one closes and then reopens + Python in the same process. files and dynamic_file are still + pointing to the old freed memory and INN blithely tries to write + to it. Thanks to Russ Allbery for the patch. + +2008-06-21 iulius + + * innd/python.c: Better be more careful when decrementing the + reference count for these objects. 
+ +2008-06-16 iulius + + * doc/external-auth, doc/hook-perl, doc/hook-python, + doc/man/active.5, doc/man/active.times.5, doc/man/auth_krb5.8, + doc/man/auth_smb.8, doc/man/ckpasswd.8, doc/man/control.ctl.5, + doc/man/convdate.1, doc/man/cycbuff.conf.5, + doc/man/distrib.pats.5, doc/man/domain.8, doc/man/expire.ctl.5, + doc/man/expireover.8, doc/man/fastrm.1, doc/man/grephistory.1, + doc/man/ident.8, doc/man/inews.1, doc/man/inn.conf.5, + doc/man/innconfval.1, doc/man/innd.8, doc/man/inndf.8, + doc/man/inndstart.8, doc/man/innmail.1, doc/man/innupgrade.8, + doc/man/libauth.3, doc/man/libinnhist.3, doc/man/list.3, + doc/man/mailpost.8, doc/man/makehistory.8, doc/man/motd.news.5, + doc/man/newsfeeds.5, doc/man/ninpaths.8, doc/man/nnrpd.8, + doc/man/ovdb.5, doc/man/ovdb_init.8, doc/man/ovdb_monitor.8, + doc/man/ovdb_server.8, doc/man/ovdb_stat.8, + doc/man/passwd.nntp.5, doc/man/qio.3, doc/man/radius.8, + doc/man/radius.conf.5, doc/man/rc.news.8, doc/man/readers.conf.5, + doc/man/sasl.conf.5, doc/man/sendinpaths.8, doc/man/simpleftp.1, + doc/man/sm.1, doc/man/subscriptions.5, doc/man/tdx-util.8, + doc/man/tst.3, doc/man/uwildmat.3: Update version number for INN + 2.4.5 documentation. + +2008-06-11 iulius + + * support/config.guess, support/config.sub: Update support files + for autoconf to their last stable version. + +2008-06-10 iulius + + * innd/python.c: Fix the name of a variable used in Python filters. + +2008-06-09 iulius + + * innd/python.c: Fix a bug when reloading Python filters. They + might not be correctly reloaded. They must be reimported before + being reloaded. + + * nnrpd/python.c: Fix a segfault when generating access groups with + embedded Python filters for nnrpd. Thanks to David Hlacik for the + bug report. + +2008-06-08 iulius + + * frontends/pullnews.in: Two minor issues resolved with this patch + by Geraint Edwards: * an off-by-one error on the limit to the + amount of articles to get; * when an article is not available, we + may have redundantly retried that article. + +2008-06-07 iulius + + * doc/pod/cycbuff.conf.pod, doc/pod/hook-perl.pod, + doc/pod/hook-python.pod, innd/python.c, samples/filter_innd.pl: + Fix the use of "ctlinnd reload something 'reason'" in + documentation. + +2008-06-05 iulius + + * doc/hook-perl, doc/hook-python, doc/pod/hook-perl.pod, + doc/pod/hook-python.pod, innd/innd.c, innd/innd.h, + samples/filter_innd.py: Add access to several new headers within + Perl and Python hooks for innd. Thanks to Matija Nalis for the + patch. + + Also update the POD documentation and the Python sample. + + * doc/man/pullnews.1, doc/pod/pullnews.pod, frontends/pullnews.in: + A new improved version of pullnews. Great thanks to Geraint A. + Edwards for all his work. He added no more than 16 flags, fixed + some bugs and integrated the backupfeed contrib script by Kai + Henningsen, adding again 6 other flags. + + A long-standing but very minor bug in the -g option was + especially fixed and items from the to-do list implemented. + + From TODO: + + + reset highwater mark to match server (-w) + reset highwater + mark to zero (also -w) + add group to config (-G) + drop articles + with headers matching (or not matching) regexp (-m) + + From backupfeed: + + + pull only a proportion (factor) of articles (-f) + sleeps + between articles/groups (-z/-Z) + Path: fake hop insert (-F) + + NNTP connection timeout (-N) + overall session timeout (-S) + + Other new flags/features: + + -l logfile log to logfile (rather than /dev/null when rnews'ing!) 
+ -s host:port add local port option (can use -p already) -t + retries attempt connect to upstream retries times -T retry_pause + wait between retries -k checkpt checkpoint the config file every + checkpt arts -C width when writing the progress bar - use width + columns -d debug_level self-explanatory -M max_arts only process + max_arts articles per run -H headers remove these headers from + articles -Q quietness set how quiet we are -R be a reader -n + no-op -P paths feed articles depending on number of hops in Path: + +2008-05-25 iulius + + * control/modules/newgroup.pl: Fix a Perl warning. + +2008-05-24 iulius + + * nnrpd/tls.c: When an article of a size greater than remaining + stack is retrieved via SSL, a segmentation fault will occur due + to the use of alloca(). The below patch uses heap based realloc() + instead of stack based alloca(), with a static buffer growing as + needed. It uses realloc() instead of malloc() for performance + reasons since this function is called frequently. The caveat is + that the memory is never free()'ed, so if more correct code is + desired, it should be adjusted. + + Thanks to Chris Caputo for this patch. + +2008-05-19 iulius + + * innd/Makefile, nnrpd/line.c: Implementation of the "alarm signal" + around SSL_read so that to prevent dead connections from leading + nnrpd processes to wait forever in SSL_read(). "clienttimeout" + now also works on SSL connections. + + Thanks to Matija Nalis for the patch. + + * nnrpd/tls.c: Implementation on systems that support it of + SO_KEEPALIVE in SSL TCP connections, allowing system detection + and closing the dead TCP SSL connections automatically after + system-specified time (usually at least 2 hours as recommended by + RFC (on Linux, see /proc/sys/net/ipv4/tcp_keepalive_*). + + Thanks to Matija Nalis for the patch. + +2008-05-18 iulius + + * innfeed/host.c: Fix a problem of undefined constant. + +2008-05-14 iulius + + * innfeed/host.c: Fix a bug in ipAddrs which contained thrice the + same IPs. Rotating the peer IP addresses was a bit slower than it + could be. + + Thanks, D. Stussy, for having seen that. Miquel van Smoorenburg + provided the patch. + + * Makefile.global.in: Bump the revision number to 2.4.5 (in case it + is released one day). + diff --git a/HACKING b/HACKING new file mode 100644 index 0000000..87a76e3 --- /dev/null +++ b/HACKING @@ -0,0 +1,708 @@ +Hacking INN + + This file is for people who are interested in making modifications to + INN. Normal users can safely skip reading it. It is intended primarily + as a guide, resource, and accumulation of tips for maintainers and + contributors, and secondarily as documentation of some of INN's + internals. + + This is $Revision: 7736 $ dated $Date: 2006-04-15 04:52:06 +0200 (Sat, + 15 Apr 2006) $. + + First of all, if you plan on working on INN source, please start from + the current development tree. There may be significant changes from the + previous full release, so starting from development sources will make it + considerably easier to integrate your work. You can get nightly + snapshots of the current development source from ftp.isc.org in + /isc/inn/snapshots (the snapshots named inn-CURRENT-*.tar.gz), or you + can get the current CVS tree by using CVSup (see "Using CVSup"). + +Configuring and Portability + + All INN code should be written expecting ANSI C and POSIX. There is no + need to attempt to support pre-ANSI compilers, and ANSI-only features + such as , string concatenation, #elif, and token pasting may + be used freely. 
So far as possible, INN is written to attempt to be + portable to any system new enough that someone is likely to want to run + a news server on it, but whenever possible this portability should be + provided by checking for standard behavior in configure and supplying + replacements for standard functions that are missing. + + When there is a conflict between ANSI C and C99, INN code should be + written expecting C99 and autoconf used to patch up the differences. + + Try to avoid using #ifdef and the like in the middle of code as much as + possible. Instead, try to isolate the necessary portability bits and + include them in libinn or at least in conditional macros separate from + the code. Trying to read code littered with conditional compilation + directives is much more difficult. + + The shell script configure at the top level of the source tree is + generated by autoconf from configure.in, and include/config.h.in is + generated by autoheader from configure.in and include/acconfig.h. At + configure time, configure generates include/config.h and several other + files based on options it was given and what it discovers about the + target system. + + All modifications to configure should instead be made to configure.in. + Similarly, modifications to include/config.h.in should instead be made + to include/acconfig.h. The autoconf manual (available using info + autoconf if you have autoconf and the GNU info utilities installed on + your system) is a valuable reference when making any modifications. + + To regenerate configure, just run "autoconf". To regenerate + include/config.h.in, run: + + autoheader -l include + + to tell it where to find acconfig.h. Please don't include patches to + either configure or include/config.h.in when sending patches to INN; + instead, note in your patch that those files must be regenerated. + + The generated files are checked into the CVS repository so that people + working on INN don't have to have autoconf on their system, and to make + packaging easier. + + At the time of this writing, autoconf 2.13 is required. + + The supporting files for autoconf are in the support subdirectory, + including the files config.guess and config.sub to determine the system + name and and ltmain.sh for libtool support. The latter file comes from + the libtool distribution; the canonical version of the former two are + available from ftp.gnu.org in /gnu/config. In addition, m4/libtool.m4 + is just a copy of libtool.m4 from the libtool distribution. (Using + libtool without using automake requires a few odd hacks.) These files + used to be on a separate vendor branch so that we could make local + modifications, but local modifications have not been necessary for some + time. Now, new versions can just be checked in like any other file + modifications. + + INN should not compile with libtool by default, only when requested, + since otherwise normal compilations are quite slow. (Using libtool is + not without some cost.) Basic compilation with libtool works fine as of + this writing, with both static and shared compiles, but the dependencies + aren't quite right for make -j using libtool. + +Documentation + + INN's documentation is currently somewhat in a state of flux. The vast + majority is still in the form of man pages written directly in nroff. + Some parts of the documentation have been rewritten in POD; that + documentation can be found in doc/pod. The canonical source for README, + INSTALL, NEWS, doc/hook-perl, doc/hook-python, and this file are also in + POD. 
+ + If you're modifying some part of INN's documentation and see that it has + a POD version in doc/pod, it's preferred if you can make the + modifications to the POD source and then regenerate the derived files. + For a quick introduction to POD, see the perlpod(1) man page on your + system (it should be installed if you have Perl installed). + + When writing new documentation, write in whatever format you care to; if + necessary, we can always convert it to POD or whatever else we want to + use. Having the documentation exist in *some* form is more important + than what language you write it in. If you really don't have any + particular preference, there's a slight preference currently for POD. + + If you use POD or regenerate POD documentation, please install something + close to the latest versions of the POD processing utilities to avoid + changes to the documentation depending on who generated it last. You + can find the latest version on CPAN (ftp.perl.org or another mirror) in + modules/by-module/Pod. You'll need PodParser (for versions of Perl + before 5.6.1; 5.6.1 and later come with a recent enough version) and the + latest version of podlators. For versions of Perl earlier than 5.005, + you'll also need File::Spec in modules/by-module/File. + + podlators 1.25 or later will build INN's documentation without + significant changes from the versions that are checked into the + repository. + + There are Makefile rules in doc/pod/Makefile to build all of the + documentation whose master form is POD; if you add additional + documentation, please add a rule there as well. Documentation should be + generated by cd'ing to doc/pod and typing "make file" where "file" is + the relative path to the documentation file. This will get all of the + various flags right for pod2text or pod2man. + +Error Handling + + INN has a set of generic error handling routines that should be used as + much as possible so that the same syntax can be used for reporting + errors everywhere in INN. The four basic functions are warn, syswarn, + die, and sysdie; warn prints or logs a warning, and die does the same + and then exits the current program. The sys* versions add a colon, a + space, and the value of strerror(errno) to the end of the message, and + should be used to report failing system calls. + + All of the actual error reporting is done via error handlers, and a + program can register its own handlers in addition to or instead of the + default one. The default error handler (error_log_stderr) prints to + stderr, prepending the value of error_program_name if it's set to + something other than NULL. Three other error handlers are available, + error_log_syslog_crit, error_log_syslog_err, and + error_log_syslog_warning, which log the message to syslog at LOG_CRIT, + LOG_ERR, or LOG_WARNING priority, respectively. + + There is a different set of error handlers for warn/syswarn and + die/sysdie. To set them, make calls like: + + warn_set_handlers(2, error_log_stderr, error_log_syslog_warning); + die_set_handlers(2, error_log_stderr, error_log_syslog_err); + + The first argument is the number of handlers, and the remaining + arguments are pointers to functions taking an int (the length of the + formatted message), a const char * (the format), a va_list (the + arguments), and an int that's 0 if warn or die was called and equal to + the value of errno if syswarn or sysdie was called. 
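  As an illustration (not taken from the INN source: the handler name and the
  "myprog" prefix are invented, and only the handler signature described above
  is assumed), a program-specific handler registered alongside one of the
  standard syslog handlers might look roughly like this:

      #include <stdarg.h>
      #include <stdio.h>
      #include <stdlib.h>
      #include <string.h>

      /* Sketch of a handler matching the signature described above: len is
         the length of the formatted message, fmt and args are the format and
         its arguments, and err is 0 or the saved errno.  It formats the
         message into a malloc'd buffer and writes it to stderr with a program
         prefix.  Plain malloc is used and failure is silently ignored, since
         a handler should not itself call die or the xmalloc family. */
      static void
      error_log_example(int len, const char *fmt, va_list args, int err)
      {
          char *buffer;

          buffer = malloc((size_t) len + 1);
          if (buffer == NULL)
              return;
          vsnprintf(buffer, (size_t) len + 1, fmt, args);
          if (err != 0)
              fprintf(stderr, "myprog: %s: %s\n", buffer, strerror(err));
          else
              fprintf(stderr, "myprog: %s\n", buffer);
          free(buffer);
      }

      /* Early in main():
         warn_set_handlers(2, error_log_example, error_log_syslog_warning);
         die_set_handlers(2, error_log_example, error_log_syslog_err);  */
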
The length of the + formatted message is obtained by calling vsnprintf with the provided + format and arguments, and therefore is reliable to use as the size of a + buffer to malloc to hold the result of formatting the message provided + that vsnprintf is used to format it (warning: the system vsprintf may + produce more output under some circumstances, so always use vsnprintf). + + The error handler can do anything it wishes; each error handler is + called in the sequence given. Error handlers shouldn't call warn or die + unless great caution is taken to prevent infinite recursion. Also be + aware that sysdie is called if malloc fails in xmalloc, so if the error + handler needs to allocate memory, it must not use xmalloc or a related + function to do so and it shouldn't call die to report failure. The + default syslog handlers report memory allocation failure to stderr and + exit. + + Finally, die and sysdie support an additional handler that's called + immediate before exiting, takes no arguments, and returns an int which + is used as the argument for exit. It can do any necessary global + cleanup, call abort instead to generate a core dump or the like. + + The advantage of using this system everywhere in INN is that library + code can use warn and die to report errors and each calling program can + set up the error handlers as appropriate to make sure the errors go to + the right place. The default handler is fine for interactive programs; + for programs that run from interactive scripts, adding something like: + + error_program_name = "program"; + + to the beginning of main (where program is the name of the program) will + make it easier to figure out which program the script calls is failing. + For programs that may also be called non-interactively, like inndstart, + one may want to set up handlers like: + + warn_set_handlers(2, error_log_stderr, error_log_syslog_warning); + die_set_handlers(2, error_log_stderr, error_log_syslog_err); + + Finally, for daemons and other non-interactive programs, one may want to + do: + + warn_set_handlers(1, error_log_syslog_warning); + die_set_handlers(1, error_log_syslog_err); + + to report errors only via syslog. (Note that if you use syslog error + handlers, the program should call openlog first thing to make sure they + are logged with the right facility.) + + For historical reasons, error messages that are fatal to the news + subsystem are logged at the LOG_CRIT priority, and therefore die in innd + should use error_log_syslog_crit. + +Test Suite + + The test suite for INN is located in the tests directory and is just + getting started. The test suite consists of a set of programs listed in + tests/TESTS and the scaffolding in the runtests program. + + Adding new tests is very straightforward and very flexible. Just write + a program that tests some part of INN, put it in a directory under tests + named after the part of INN it's testing (all the tests so far are in + lib because they're testing libinn routines), and have it output first a + line containing the count of test cases in that file, and then for each + test a line saying "ok n" or "not ok n" where n is the test case number. + (If a test is skipped for some reason, such as a test of an optional + feature that wasn't compiled into INN, the test program should output + "ok n # skip".) Add any rules necessary to build the test to + tests/Makefile (note that for simplicity it doesn't recurse into + subdirectories) and make sure it creates an executable ending in .t. 
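  For example (purely illustrative; the checks themselves are placeholders), a
  test program following this output protocol can be as small as:

      #include <stdio.h>
      #include <string.h>

      /* Print the number of test cases first, then "ok n" or "not ok n"
         for each case, as described above. */
      int
      main(void)
      {
          printf("2\n");

          if (strcmp("inn", "inn") == 0)
              printf("ok 1\n");
          else
              printf("not ok 1\n");

          /* A test of an optional feature that wasn't compiled in would
             print "ok 2 # skip" here instead. */
          if (strlen("news") == 4)
              printf("ok 2\n");
          else
              printf("not ok 2\n");

          return 0;
      }
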
+ Then add the name of the test to tests/TESTS, without the .t ending. + + One naming convention: to distinguish more easily between e.g. + lib/error.c (the implementation) and tests/lib/error-t.c (the test + suite), we add -t to the end of the test file names. So + tests/lib/error-t.c is the source that compiles into an executable + tests/lib/error.t which is run by putting a line in tests/TESTS of just + "lib/error". + + Note that tests don't have to be written in C; in fact, lib/xmalloc.t is + just a shell script (that calls a supporting C program). Tests can be + written in shell or Perl (but other languages should be avoided because + someone who wants to run the test suite may not have it) and just have + to follow the above output conventions. + + Additions to the test suite, no matter how simple, are very welcome. + +Makefiles + + All INN makefiles include Makefile.global at the top level, and only + that makefile is a configure substitution target. This has the + disadvantage that configure's normal support for building in a tree + outside of the source tree doesn't work, but it has the significant + advantage of making configure run much faster and allowing one to run + make in any subdirectory and pick up all the definitions and settings + from the top level configuration. + + All INN makefiles should also set $(top) to be the path to the top of + the build directory (usually relative). This path is used to find + various programs like fixscript and libtool so that the same macros (set + in Makefile.global) can be used all over INN. + + The format of INN's makefiles is mostly standardized; the best examples + of the format are probably frontends/Makefile and backends/Makefile, at + least for directories with lots of separate programs. The ALL variable + holds all the files that should be generated, EXTRA those additional + files that were generated by configure, and SOURCES the C source files + for generating tag information. + + There are a set of standard installation commands defined in make + variables by Makefile.global, and these should be used for all file + installations. See the comment blocks in Makefile.global.in for + information on what commands are available and when they should be used. + There are also variables set for each of the installation directories + that INN uses, for use in building the list of installed paths to files. + + Each subdirectory makefile should have the targets all (the default), + clean, clobber, install, tags, and profiled. The tags target generates + vi tags files, and the profiled target generates a profiling version of + the programs (although this hasn't been tested much recently). These + rules should be present and empty in those directories where they don't + apply. + + Be sure to test compiling with both static and dynamic libraries and + make sure that all the libtool support works correctly. All linking + steps, and the compile steps for all library source, should be done + through $(LIBTOOL) (which will be set to empty in Makefile.global if + libtool support isn't desired). + +Scripts + + INN comes with and installs a large number of different scripts, both + Bourne shell and Perl, and also comes with support for Tcl scripts + (although it doesn't come with any). 
Shell variables containing both + configure-time information and configuration information from inn.conf + are set by the innshellvars support libraries, so the only + system-specific configuration that should have to be done is fixing the + right path to the interpretor and adding a line to load the appropriate + innshellvars. + + support/fixscript, built by configure, does this. It takes a .in file + and generates the final script (removing the .in) by fixing the path to + the interpretor on the first line and replacing the second line, + whatever it is, with code to load the innshellvars appropriate for that + interpretor. (If invoked with -i, it just fixes the interpretor path.) + + Scripts should use innshellvars (via fixscript) to get the right path + and the right variables whenever possible, rather than having configure + substitute values in them. Any values needed at run-time should instead + be available from all of the different innshellvars. + + See the existing scripts for examples of how this is done. + +Include Files + + Include files relevant to all of INN, or relevant to the two libraries + built as part of INN (the utility libinn library and the libstorage + library that contains all storage and overview functions) are found in + the include directory; other include files relevant only to a portion of + INN are found in the relevant directory. + + Practically all INN source files will start with: + + #include "config.h" + #include "clibrary.h" + + The first picks up all defines generated by autoconf and is necessary + for types that may not be present on all systems (uid_t, pid_t, size_t, + int32_t, and the like). It therefore should be included before any + other headers that use those types, as well as to get general + configuration information. + + The second is portably equivalent to: + + #include + #include + #include + #include + #include + #include + #include + #include + + except that it doesn't include headers that are missing on a given + system, replaces functions not found on the system with the INN + equivalents, provides macros that INN assumes are available but which + weren't found, and defines some additional portability things. Even if + this is more headers than the source file actually needs, it's generally + better to just include clibrary.h rather than trying to duplicate the + autoconf-driven hackery that it does to do things portably. The primary + exception is for source files in lib that only define a single function + and are used for portability; those may want to include only config.h so + that they can be easily used in other projects that use autoconf. + config.h is a fairly standard header name for this purpose. + + clibrary.h does also include config.h, but it's somewhat poor form to + rely on this; it's better to explicitly list the header dependencies for + the benefit of someone else reading the code. + + There are portable wrappers around several header files that have known + portability traps or that need some fixing up on some platforms. Look + in include/portable and familiarize yourself with them and use them + where appropriate. + + Another frequently included header file is libinn.h, which among other + things defines xmalloc(), xrealloc(), xstrdup(), and xcalloc(), which + are checked versions of the standard memory allocation routines that + terminate the program if the memory allocation fails. These should + generally always be used instead of the regular C versions. 
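  As a sketch of these conventions (the helper below is invented for
  illustration; it assumes the conventional malloc-like signatures of the x*
  routines and the strlcpy/strlcat replacements mentioned under "Coding
  Style"), a typical source file might start out like this:

      #include "config.h"
      #include "clibrary.h"

      #include "libinn.h"

      /* Illustrative only: join a prefix and a name into a newly allocated
         string, relying on xmalloc to terminate the program on allocation
         failure rather than returning NULL. */
      char *
      prefix_name(const char *prefix, const char *name)
      {
          size_t length;
          char *result;

          length = strlen(prefix) + strlen(name) + 1;
          result = xmalloc(length);
          strlcpy(result, prefix, length);
          strlcat(result, name, length);
          return result;
      }
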
libinn.h + also provides various other utility functions that are frequently used. + + paths.h includes a wide variety of paths determined at configure time, + both default paths to various parts of INN and paths to programs. Don't + just use the default paths, though, if they're also configurable in + inn.conf; instead, call ReadInnConf() and use the global innconf + structure. + + Other files in include are interfaces to particular bits of INN library + functionality or are used for other purposes; see the comments in each + file. + + Eventually, the header files will be separated into installed header + files and uninstalled header files; the latter are those headers that + are used only for compiling INN and aren't useful for users of INN's + libraries (such as clibrary.h). All of the installed headers will live + in include/inn and be installed in a subdirectory named inn in the + configured include directory. This conversion is still in progress. + + When writing header files, remember that C reserves all identifiers + beginning with two underscores and all identifiers beginning with an + underscore and a capital letter for the use of the implementation; don't + use any identifiers with names like that. Additionally, any identifier + beginning with an underscore and a lower-case letter is reserved in file + scope, which means that such identifiers can only be used by INN for the + name of structure members or function arguments in function prototypes. + + Try to pay attention to the impact of a header file on the program + namespace, particularly for installed header files in include/inn. All + symbols defined by a header file should ideally begin with INN_, inn_, + or some other unique prefix indicating the subsystem that symbol is part + of, to avoid accidental conflicts with symbols defined by the program + that uses that header file. + +Coding Style + + INN has quite a variety of coding styles intermixed. As with all + programs, it's preferrable when making minor modifications to keep the + coding style of the code you're modifying. In INN, that will vary by + file. (Over time we're trying to standardize on one coding style, so + changing the region you worked on to fit the general coding style is + also acceptable). + + If you're writing a substantial new piece of code, the prevailing + "standard" INN coding style appears to be something like the following: + + * Write in regular ANSI C whenever possible. Use the normal ANSI and + POSIX constructs and use autoconf or portability wrappers to fix + things up beforehand so that the code itself can read like regular + ANSI or POSIX code. Code should be written so that it works as + expected on a modern platform and is fixed up with portability tricks + for older platforms, not the other way around. You may assume an + ANSI C compiler. + + Try to use const wherever appropriate. Don't use register; modern + compilers will do as good of a job as you will in choosing what to + put into a register. Don't bother with restrict (at least yet). + + * Use string handling functions that take counts for the size of the + buffer whenever possible. This means using snprintf in preference to + sprintf and using strlcpy and strlcat in preference to strcpy and + strcat. Also, use strlcpy and strlcat instead of strncpy and strncat + unless the behavior of the latter is specifically required, as it is + much easier to audit uses of the former than the latter. 
(strlcpy is + like strncpy except that it always nul-terminates and doesn't fill + the rest of the buffer with nuls, making it more efficient. strlcat + is like strncat except that it always nul-terminates and it takes the + total size of the buffer as its third argument rather than just the + amount of space left.) All of these functions are guaranteed to be + available; there are replacements in lib for systems that don't have + them. + + * Avoid #ifdef and friends whenever possible. Particularly avoid using + them in the middle of code blocks. Try to hide all portability + preprocessor magic in header files or in portability code in lib. + When something just has to be done two completely different ways + depending on the platform or compile options or the like, try to + abstract that functionality out into a generic function and provide + two separate implementations using #ifdef; then the main code can + just call that function. + + If you do have to use preprocessor defines, note that if you always + define them to either 0 or 1 (never use #define without a second + argument), you can use the preprocessor define in a regular if + statement rather than using #if or #ifdef. Make use of this instead + of #ifdef when possible, since that way the compiler will still + syntax-check the other branch for you and it makes it far easier to + convert the code to use a run-time check if necessary. + (Unfortunately, this trick can't be used if one branch may call + functions unavailable on a particular platform.) + + * Avoid uses of fixed-width buffers except in performance-critical + code, as it's harder to be sure that such code is correct and it + tends to be less flexible later on. If you need a reusable, + resizable memory buffer, one is provided in lib/buffer.c. + + * Avoid uses of static variables whenever possible, particularly in + libraries, because it interferes with making the code re-entrant down + the road and makes it harder to follow what's going on. Similarly, + avoid using global variables whenever possible, and if they are + required, try to wrap them into structures that could later be + changed into arguments to the affected functions. + + * Roughly BSD style but with four-space indents. This means no space + before the parens around function arguments, open brace on the same + line as if/while/for, and close and open brace on the same line as + else). + + * Introductory comments for functions or files are generally written + as: + + /* + ** Introductory comment. + */ + + Other multiline comments in the source are generally written as: + + /* This is a + multiline comment. */ + + Comments before functions saying what they do are nice to have. In + general, the RCS/CVS Id tag is on the first line of each source file + since it's useful to know when a file was last modified. + + * Checks for NULL pointers are preferrably written out explicitly; in + other words, use: + + if (p != NULL) + + rather than: + + if (p) + + to make it clearer what the code is assuming. + + * It's better to always put the body of an if statement on a separate + line, even if it's only a single line. In other words, write: + + if (p != NULL) + return p; + + and not: + + if (p != NULL) return p; + + This is in part for a practical reason: some code coverage analysis + tools like purecov will count the second example above as a single + line and won't notice if the condition always evaluates the same way. 
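    Putting several of the points above together, a short function written in
    the prevailing style looks roughly like this (the function itself is
    invented purely to show the layout):

        /*
        **  Return a copy of the first non-NULL argument, or NULL if both
        **  arguments are NULL.  (Illustrative only.)
        */
        char *
        first_string(const char *first, const char *second)
        {
            if (first != NULL)
                return xstrdup(first);
            if (second != NULL)
                return xstrdup(second);
            return NULL;
        }
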
+ + * Plain structs make perfectly reasonable abstract data types; it's not + necessary to typedef the struct to something else. Structs are + actually very useful for opaque data structures, since you can + predeclare them and then manipulate pointers to them without ever + having to know what the contents look like. Please try to avoid + typedefs except for function pointers or other extremely confusing + data types, or for data types where we really gain some significant + data abstraction from hiding the underlying data type. Also avoid + using the _t suffix for any type; all types ending in _t are reserved + by POSIX. For typedefs of function pointer types, a suffix of _func + usually works. + + This style point is currently widely violated inside of INN itself; + INN originally made extensive use of typedefs. + + * When noting something that should be improved later, add a comment + containing "FIXME:" so that one can easily grep for such comments. + + INN's indentation style roughly corresponds to that produced by GNU + indent 2.2.6 with the following options: + + -bad -bap -nsob -fca -lc78 -cd41 -cp1 -br -ce -cdw -cli0 -ss -npcs + -ncs -di1 -nbc -psl -brs -i4 -ci4 -lp -ts8 -nut -ip5 -lps -l78 -bbo + -hnl + + Unfortunately, indent currently doesn't get everything right (it has + problems with spacing around struct pointer arguments in functions, + wants to put in a space between a dereference of a function pointer and + the arguments to the called function, misidentifies some macro calls as + being type declarations, and fouls up long but simple case statements). + It would be excellent if someday we could just run all of INN's code + through indent routinely to enforce a consistant coding style, but + indent isn't quite ready for that. + + For users of emacs cc-mode, use the "bsd" style but with: + + (setq c-basic-offset 4) + + Finally, if possible, please don't use tabs in source files, since they + can expand differently in different environments. In particular, please + try not to use the mix of tabs and spaces that is the default in emacs. + If you use emacs to edit INN code, you may want to put: + + ; Use only spaces when indenting or centering, no tabs. + (setq-default indent-tabs-mode nil) + + in your ~/.emacs file. + + Note that this is only a rough guideline and the maintainers aren't + style nazis; we're more interested in your code contribution than in how + you write it. + +Using CVSup + + If you want to get updated INN source more easily or more quickly than + by downloading nightly snapshots, or if you want to see the full CVS + history, you may want to use CVSup to download the source. CVSup is a + client and server designed for replicating CVS repositories between + sites. + + Unfortunately, CVSup is written in Modula-3, so getting a working binary + can be somewhat difficult. Binaries are available in the *BSD ports + collection or (for a wide variety of different platforms) available from + and its mirrors. + Alternately, you can get a compiler from + (this is more actively maintained than the DEC Modula-3 compiler) and + the source from . + + After you have the CVSup client, you need to have space to download the + INN repository and space for CVSup to store its data files. You also + need to write a configuration file (a supfile) for CVSup. The following + supfile will download the latest versions from the mainline source: + + *default host=inn-cvs.isc.org + *default base= + *default prefix= + *default release=cvs + *default tag=. 
+ *default delete use-rel-suffix + inn + + where should be a directory where CVSup can put its data + files and is where the downloaded source will go (it + will be put into a subdirectory named inn). If you want to pull down + the entire CVS repository instead (warning: this is much larger than + just the latest versions of the source), delete the "*default tag=." + line. The best way to download the CVS repository is to download it + into a portion of a locally-created CVS repository, so that then you can + perform standard CVS operations (like cvs log) against the downloaded + repository. Creating your own local CVS repository is outside the scope + of this document. + + Note that only multiplexed mode is supported (this mode should be the + default). + + For more general information on using CVSup, see the FreeBSD page on it + at . + +Making a Release + + This is a checklist that INN maintainers should go through when + preparing a new release of INN. + + 1. If making a major release, branch the source tree and create a new + STABLE branch tag. This branch will be used for minor releases + based on that major release and can be done a little while before + the .0 release of that major release. At the same time as the + branch is cut, tag the trunk with a STABLE--branch marker + tag so that it's easy to refer to the trunk at the time of the + branch. + + 2. Update doc/pod/news.pod and regenerate NEWS. Be more detailed for a + minor release than for a major release. For a major release, also + add information on how to upgrade from the last major release, + including anything special to be aware of. (Minor releases + shouldn't require any special care when upgrading.) + + 3. Make sure that support/config.sub and support/config.guess are the + latest versions (from ). See the + instructions in "Configuring and Portability" for details on how to + update these files. + + 4. Make sure that samples/control.ctl is in sync with the master + version at . + + 5. Check out a copy of the release branch. It's currently necessary to + run configure to generate Makefile.global. Then run "make + check-manifest". The only differences should be files that are + generated by configure; if there are any other differences, fix the + MANIFEST. + + 6. Run "make release". Note that you need to have a copy of svn2cl + from to do this; at least + version 0.7 is required. Start the ChangeLog at the time of the + previous release. (Eventually, the script will be smart enough to + do this for you.) + + 7. Make the resulting tar file available for testing in a non-listable + directory on ftp.isc.org and announce its availability on + inn-workers. Install it on at least one system and make sure that + system runs fine for at least a few days. This is also a good time + to send out a draft of the release announcement to inn-workers for + proof-reading. + + 8. Generate a diff between this release and the previous release if + feasible (always for minor releases, possibly not a good idea due to + the length of the diff for major releases). + + 9. Move the release into the public area of the ftp site and update the + inn.tar.gz link. Make an MD5 checksum of the release tarball and + put it on the ftp site as well, and update the inn.tar.gz.md5 link. + Put the diff up on the ftp site as well. Contact the ISC folks to + get the release PGP-signed. Possibly move older releases off into + the OLD directory. + + 10. Announce the new release on inn-announce and in news.software.nntp. + + 11. 
Tag the checked-out tree that was used for generating the release + with a release tag (INN-). + + 12. Bump the revision number in Makefile.global.in. + +References + + Some additional references that may be hard to find and may be of use to + people working on INN: + + + The home page for the IETF NNTP standardization effort, including + links to the IETF NNTP working group archives and copies of the + latest drafts of the new NNTP standard. The old archived mailing + list traffic contains a lot of interesting discussion of why NNTP is + the way it is. + + + The archives for the USEFOR IETF working group, the working group + for the RFC 1036 replacement (the format of Usenet articles). Also + contains a lot of references to other relevant work, such as the RFC + 822 replacement work. + + + Forrest Cavalier provides several tools for following INN + development at this page and elsewhere in the Usenet RKT. Under + here is a web-accessible checked-out copy of the current INN source + tree and pointers to how to use CVSup. + + + The standards for large file support on Unix that are being + generally implemented by vendors. INN sort of partially uses these, + but a good full audit of the code to check them should really be + done and there are occasional problems. + + + A primer on IPv6 with pointers to the appropriate places for more + technical details as needed, useful when working on IPv6 support in + INN. + diff --git a/INSTALL b/INSTALL new file mode 100644 index 0000000..eceb489 --- /dev/null +++ b/INSTALL @@ -0,0 +1,1527 @@ +Welcome to INN 2.4! + + Please read this document thoroughly before trying to install INN. + You'll be glad you did. + + If you are upgrading from a major release of INN prior to 2.3, it is + recommended that you make copies of your old configuration files and use + them as guides for doing a clean installation and configuration of 2.4. + Many config files have changed, some have been added, and some have been + removed. You'll find it much easier to start with a fresh install than + to try to update your old installation. This is particularly true if + you're upgrading from a version of INN prior to 2.0. + + If you are upgrading from INN 2.3 or later, you may be able to just + update the binaries, scripts, and man pages by running: + + make update + + after building INN and then comparing the new sample configuration files + with your current ones to see if anything has changed. If you take this + route, the old binaries, scripts, and man pages will be saved with an + extension of ".OLD" so that you can easily back out. Be sure to + configure INN with the same options that you used previously if you take + this approach (in particular, INN compiled with --enable-largefiles + can't read the data structures written by INN compiled without that + flag, and vice versa). If you don't remember what options you used but + you have your old build tree, look at the comments at the beginning of + config.status. + + If you made ckpasswd setuid root so that you could use system passwords, + you'll have to do that again after make update. (It's much better to + use PAM instead if you can.) + + If you use "make update" to upgrade from INN 2.3, also look at the new + sample configuration files in samples to see if there are new options of + interest to you. In particular, control.ctl has been updated and + inn.conf has various new options. + + For more information about recent changes, see NEWS. 
+ +Supported Systems + + As much as possible, INN is written in portable C and should work on any + Unix platform. It does, however, make extensive use of mmap(2) and + certain other constructs that may be poorly or incompletely implemented, + particularly on very old operating systems. + + INN has been confirmed to work on the following operating systems: + + AIX 4.3 + FreeBSD 2.2.x and up + HP-UX 10.20 and up + Linux 2.x (tested with libc 5.4, glibc 2.0 and up) + Mac OS X 10.2 and up + NetBSD 1.6 and up + OpenBSD 2.8 and up + SCO 5.0.4 (tested with gcc 2.8.1, cc) + Solaris 2.5.x and up + UnixWare 7.1 + UX/4800 R11 and up + + If you have gotten INN working on an operating system other than the + ones listed above, please let us know at . + +Before You Begin + + INN requires several other packages be installed in order to be fully + functional (or in some cases, to work at all): + + * In order to build INN, you will need a C compiler that understands + ANSI C. If you are trying to install INN on an operating system that + doesn't have an ANSI C compiler (such as SunOS), installing gcc is + recommended. You can get it from or its + mirrors. INN is tested with gcc more thoroughly than with any other + compiler, so even if you have another compiler available, you may wish + to use gcc instead. + + * Currently, in order to build INN, you will need an implementation of + yacc. GNU bison (from or its mirrors) + will work fine. We hope to remove this requirement in the future. + + * INN requires at least Perl 5.004_03 to build and to run several + subsystems. INN is tested primarily with newer versions of Perl, so + it's generally recommended that you install the latest stable + distribution of Perl before compiling INN. For instructions on + obtaining and installing Perl, see + . Note that you + may need to use the same compiler and options (particularly largefile + support) for Perl and INN. + + If you're using a version of Perl prior to 5.6.0, you may need to make + sure that the Perl versions of your system header files have been + generated in order for Sys::Syslog to work properly (used by various + utility programs, including controlchan). To do this, run the + following two commands: + + # cd /usr/include + # h2ph * sys/* + + An even better approach is to install Perl 5.6.1 or later, which have + a fixed version of Sys::Syslog that doesn't require this (as well as + many other improvements over earlier versions of Perl). + + * The INN Makefiles use the syntax "include FILE", rather than the + syntax expected by some BSDish systems of ".include ". If your + system expects the latter syntax, the recommended solution is to + install GNU make from . You may have GNU + make already installed as gmake, in which case using gmake rather than + make to build INN should be sufficient. + + * If you want to enable support for authenticated control messages (this + is not required, but is highly recommended for systems carrying public + Usenet hierarchies) then you will need to install some version of PGP. + The recommended version is GnuPG, since it's actively developed, + supports OpenPGP, is freely available and free to use for any purpose + (in the US and elsewhere), and (as of version 1.0.4 at least) supports + the RSA signatures used by most current control message senders. + + Alternately, you can install PGP from or one of + the international versions of it. 
Be warned, however, that the + licensing restrictions on PGP inside the United States are extremely + unclear; it's possible that if you are installing INN for a company in + the U.S., even if the news server is not part of the business of that + company, you would need to purchase a commercial license for PGP. For + an educational or non-profit organization, this shouldn't be a + problem. + + * If you want to use either the Python embedded hooks, you'll need to + have a suitable versions of Python installed. See doc/hook-python for + more information. + + * Many of INN's optional features require other packages (primarily + libraries) be installed. If you wish to use any of these optional + features, you will need to install those packages first. Here is a + table of configure options enabling optional features and the software + and versions you'll need: + + --with-perl Perl 5.004_03 or higher, 5.6.1+ recommended + --with-python Python 1.5.2 or higher + --with-berkeleydb BerkeleyDB 2.0 or higher, 4.2+ recommended + --with-openssl OpenSSL 0.9.6 or higher + --with-sasl SASL 2.x or higher + --with-kerberos MIT Kerberos v5 1.2.x or higher + + If any of these libraries (other than Perl or Python) are built shared + and installed in locations where your system doesn't search for shared + libraries by default, you may need to encode the paths to those shared + libraries in the INN binaries. For more information on shared library + paths, see: + + + + For most systems, setting the environment variable LD_RUN_PATH to a + colon-separated list of additional directories in which to look for + shared libraries before building INN will be sufficient. + +Unpacking the Distribution + + Released versions of INN are available from ftp.isc.org in /isc/inn. + New major releases will be announed on (see + README) when they're made. + + If you want more a more cutting-edge version, you can obtain current + snapshots from from ftp.isc.org in directory /isc/inn/snapshots. These + are snapshots of the INN CVS tree taken daily; there are two snapshots + made each night (one of the current development branch, and one of the + stable branch consisting of bug fixes to the previous major release). + They are stored in date format; in other words the snapshots from April + 6th, 2000, would be named inn-CURRENT-20000406.tar.gz and + inn-STABLE-20000406.tar.gz. Choose the newest file of whichever branch + you prefer. (Note that the downloading, configuring, and compiling + steps can be done while logged in as any user.) + + The distribution is in gzip compressed tar archive format. To extract + it, execute: + + gunzip -c | tar -xf - + + Extracting the source distribution will create a directory named + inn- or inn-- where the source resides. + +Installing INN + + Before beginning installation, you should make sure that there is a user + on your system named "news", and that this user's primary group is set + to a group called "news". You can change these with the + --with-news-user and --with-news-group options to configure (see below). + The home directory of this user should be set to the directory under + which you wish to install INN (/usr/local/news is the default and is a + good choice). INN will install itself as this user and group. You can + change these if you want, but these are the defaults and it's easier to + stick with them on a new installation. + + By default, INN sends reports to the user "usenet". This account isn't + used for any other purposes. 
You can change it with the + --with-news-master option to configure (see below). + + WARNING: By default, INN installs various configuration files as + group-writeable, and in general INN is not hardened from a security + standpoint against an attack by someone who is already in the news + group. In general, you should consider membership in the news group as + equivalent to access to the news account. You should not rely on being + able to keep anyone with access to the news GID from converting that + into access to the news UID. The recommended configuration is to have + the only member of the group "news" be the user "news". + + Installing INN so that all of its files are under a single directory + tree, rather than scattering binaries, libraries, and man pages + throughout the file system, is strongly recommended. It helps keep + everything involved in the operation of INN together as a unit and will + make the installation instructions easier to follow. + + As a side note, whenever doing anything with a running news server, + first log in as this user. That way, you can ensure that all files + created by any commands you run are created with the right ownership to + be readable by the server. Particularly avoid doing anything in the + news spool itself as root, and make sure you fix the ownership of any + created files if you have to. INN doesn't like files in the news spool + owned by a user other than the news user. However, since certain + binaries need to be setuid root, indiscriminate use of "chown news" is + not the solution. (If you don't like to log in to system accounts, + careful use of "chmod g+s" on directories and a umask of 002 or 007 may + suffice.) + + INN uses GNU autoconf and a generated configure script to make + configuration rather painless. Unless you have a rather abnormal setup, + configure should be able to completely configure INN for your system. + If you want to change the defaults, you can invoke the configure script + with one or more command line options. Type: + + ./configure --help + + for a full list of supported options. Some of the most commonly used + options are: + + --prefix=PATH + Sets the installation prefix for INN. The default is + /usr/local/news. All of INN's programs and support files will be + installed under this directory. This should match the home + directory of your news user (it will make installation and + maintenance easier). It is not recommended to set this to /usr; if + you decide to do that anyway, make sure to point INN's temporary + directory at a directory that isn't world-writeable (see + --with-tmp-dir below). + + --with-db-dir=PATH + Sets the prefix for INN database files. The default is PREFIX/db, + where PREFIX is /usr/local/news unless overridden with the option + above. The history and active files will be stored in this + directory, and writes to those files are an appreciable percentage + of INN's disk activity. The history file can also be quite large + (requiring up to 2 GB or more during nightly expire), so this is a + common portion of INN to move to a different file system. + + --with-spool-dir=PATH + Sets the prefix for the news spool (when using any storage method + other than CNFS) and the overview spool. The default is + PREFIX/spool. This is another common portion of INN to move to a + different file system (often /news). + + --with-tmp-dir=PATH + Sets the directory in which INN will create temporary files. 
This + should under no circumstances be the same as the system temporary + directory or otherwise be set to a world-writeable directory, since + INN doesn't take care to avoid symlink attacks and other security + problems possible with a world-writeable directory. This directory + should be reserved for the exclusive use of INN and only writeable + by the news user. Usage is generally light, so this is unlikely to + need a separate partition. + + It's also possible to set the paths for most other sections of the + INN installation independently; see "./configure --help" and look + for the --with-*-dir=PATH options. + + --enable-largefiles + Enables large file support. This is not enabled by default, even on + platforms that support it, because it changes the format of INN's + on-disk databases (making it difficult to upgrade an earlier INN + installation) and can significantly increase the size of some of the + history database files. Large file support is not necessary unless + your history database is so large that it exceeds 2 GB or you want + to use CNFS buffers larger than 2 GB. + + The history, tradindexed and buffindexed overview, CNFS, and timecaf + databases written by an INN built with this option are incompatible + with those written by an INN without this option. + + --enable-tagged-hash + Use tagged hash table for the history database. The tagged hash + format uses much less memory but is somewhat slower. This option is + recommended if you have less than 256 MB of RAM on your news server. + If you install INN without tagged hash (the default) and expire + takes an excessive amount of time, you should make sure the RAM in + your system satisfies the following formula: + + ram > 10 * tablesize + + ram: Amount of system RAM (in bytes) + tablesize: 3rd field on the 1st line of history.dir (bytes) + + If you don't have at least that much RAM, try rebuilding INN with + tagged hash enabled. + + NOTE: --enable-largefiles cannot be used with --enable-tagged-hash. + + --with-perl + Enables support for embedded Perl, allowing you to install filter + scripts written in Perl. Highly recommended, because many really + good spam filters are written in Perl. See doc/hook-perl for all + the details. + + Even if you do not use this option, INN still requires Perl as + mentioned above. + + --with-python + Enables support for Python, allowing you to install filter and + authentication scripts written in Python. You will need Python + 1.5.2 or later installed on your system to enable this option. See + doc/hook-python for all the details. Note that there is an + incompatibility between INN and Python 2.0 when Python is compiled + with cycle garbage collection; this problem was reported fixed in + Python 2.1a1. + + --with-innd-port=PORT + By default, inndstart(8) refuses to bind to any port under 1024 + other than 119 and 433 for security reasons (to prevent attacks on + rsh(1)-based commands and replacing standard system daemons). If + you want to run innd on a different port under 1024, you'll need to + tell configure what port you intend to use. (You'll also still need + to set the port number in inn.conf or give it to inndstart on the + command line.) + + --with-syslog-facility=FACILITY + Specifies the syslog facility that INN programs should log to. The + default is LOG_NEWS unless configure detects that your system + doesn't understand that facility, in which case it uses LOG_LOCAL1. + This flag overrides the automatic detection. 
Be sure to specify a + facility not used by anything else on your system (one of LOG_LOCAL0 + through LOG_LOCAL7, for example). + + --enable-libtool + INN has optional support for libtool to generate shared versions of + INN's libraries. This can significantly decrease the size of the + various binaries that come with a complete INN installation. You + can also choose to use libtool even when only building static + libraries; a libtool build may be somewhat more portable on weird + systems. libtool builds aren't the default because they take + somewhat longer. See "./configure --help" for the various available + options related to libtool builds. + + Please note that INN's shared library interface is not stable and + may change drastically in future releases. For this reason, it's + also not properly versioned and won't be until some degree of + stability is guaranteed, and the relevant header files are not + installed. Only INN should use INN's shared libraries, and you + should only use the shared libraries corresponding to the version of + INN that you're installing. + + Also, when updating an existing version of INN, INN tries to save + backup copies of all files so that you can revert to the previous + installed version. Unfortunately, when using shared libraries, this + confuses ldconfig on some systems (such as Linux) and the symbolic + links for the libraries may point to the .OLD versions. If this + happens, you can either fix the links by hand or remove the .OLD + versions and re-run ldconfig. + + --enable-uucp-rnews + If this option is given to configure, rnews will be installed setuid + news, owned by group uucp, and mode 4550. This will allow the UUCP + subsystem to run rnews to process UUCP batches of news articles. + Prior to INN 2.3, installing rnews setuid news was standard; since + most sites no longer use UUCP, it is no longer the default as of INN + 2.3 and must be requested at configure time. You probably don't + want to use this option unless your server accepts UUCP news + batches. + + --enable-setgid-inews + If this option is given to configure, inews will be installed setgid + news and world-executable so that non-privileged users on the news + server machine can use inews to post articles locally (somewhat + faster than opening a new network connection). For standalone news + servers, by far the most common configuration now, there's no need + to use this option; even if you have regular login accounts on your + news server, INN's inews can post fine via a network connection to + your running news server and doesn't need to use the local socket + (which is what setgid enables it to do). Installing inews setgid + was the default prior to INN 2.3. + + --with-berkeleydb=PATH + Enables support for Berkeley DB (2.x or 3.x), which means that it + will then be possible to use the ovdb overview method if you wish. + Enabling this configure option doesn't mean you'll be required to + use ovdb, but it does require that Berkeley DB be installed on your + system (including the header files, not just the runtime libraries). + If a path is given, it sets the installed directory of Berkeley DB + (configure will search for it in some standard locations, but if you + have it installed elsewhere, you may need this option). + + --with-openssl=PATH + Enables support for SSL for news reading, which means it will be + possible to have SSL or TLS encrypted NNTP connections between your + server and newsreaders. 
This option requires OpenSSL be installed + on your system (including the header files, not just the runtime + libraries). If a path is given, it sets the installed directory of + OpenSSL. After compiling and installing INN with this option, + you'll still need to make a certificate and private key to use SSL. + See below for details on how to do that. + + --enable-ipv6 + Enables support for IPv6 in innd, innfeed, nnrpd, and several of the + supporting programs. This option should be considered developmental + at present. For more information see doc/IPv6-info (and if you have + any particularly good or bad news to report, please let us know at + ). + + For the most common installation, a standalone news server, a suggested + set of options is: + + ./configure --with-perl + + provided that you have the necessary version of Perl installed. + (Compiling with an embedded Perl interpretor will allow you to use one + of the available excellent spam filters if you so choose.) + + If the configure program runs successfully, then you are ready to build + the distribution. From the root of the INN source tree, type: + + make + + At this point you can step away from the computer for a little while and + have a quick snack while INN compiles. On a decently fast system it + should only take five or ten minutes at the most to build. + + Once the build has completed successfully, you are ready to install INN + into its final home. Type: + + make install + + You will need to run this command as root so that INN can create the + directories it needs, change ownerships (if you did not compile as the + news user) and install a couple of setuid wrapper programs needed to + raise resource limits and allow innd to bind to ports under 1024. This + step will install INN under the install directory (/usr/local/news, + unless you specified something else to the configure script). + + If you are configuring SSL support for newsreaders, you must make a + certificate and private key at least once. Type: + + make cert + + as root in order to do this. + + You are now ready for the really fun part: configuring your copy of + INN! + +Choosing an Article Storage Format + + The first thing to decide is how INN should store articles on your + system. There are four different methods for you to choose from, each + of which has its own advantages and disadvantages. INN can support all + four at the same time, so you can store certain newsgroups in one method + and other newsgroups in another method. + + The supported storage formats are: + + tradspool + This is the storage method used by all versions of INN previous to + 2.0. Articles are stored as individual text files whose names are + the same as the article number. The articles are divided up into + directories based on the newsgroup name. For example, article 12345 + in news.software.nntp would be stored as news/software/nntp/12345 + relative to the root of the article spool. + + Advantages: Widely used and well-understood storage mechanism, can + read article spools written by older versions of INN, compatible + with all third-party INN add-ons, provides easy and direct access to + the articles stored on your server and makes writing programs that + fiddle with the news spool very easy, and gives you fine control + over article retention times. + + Disadvantages: Takes a very fast file system and I/O system to keep + up with current Usenet traffic volumes due to file system overhead. 
+ Groups with heavy traffic tend to create a bottleneck because of + inefficiencies in storing large numbers of article files in a single + directory. Requires a nightly expire program to delete old articles + out of the news spool, a process that can slow down the server for + several hours or more. + + timehash + Articles are stored as individual files as in tradspool, but are + divided into directories based on the arrival time to ensure that no + single directory contains so many files as to cause a bottleneck. + + Advantages: Heavy traffic groups do not cause bottlenecks, and fine + control of article retention time is still possible. + + Disadvantages: The ability to easily find all articles in a given + newsgroup and manually fiddle with the article spool is lost, and + INN still suffers from speed degredation due to file system overhead + (creating and deleting individual files is a slow operation). + + timecaf + Similar to timehash, articles are stored by arrival time, but + instead of writing a separate file for each article, multiple + articles are put in the same file. + + Advantages: Roughly four times faster than timehash for article + writes, since much of the file system overhead is bypassed, while + still retaining the same fine control over article retention time. + + Disadvantages: Even worse than timehash, and similar to cnfs + (below), using this method means giving up all but the most careful + manually fiddling with your article spool. As one of the newer and + least widely used storage types, timecaf has not been as thoroughly + tested as the other methods. + + cnfs + CNFS stores articles sequentially in pre-configured buffer files. + When the end of the buffer is reached, new articles are stored from + the beginning of the buffer, overwriting older articles. + + Advantages: Blazingly fast because no file creations or deletions + are necessary to store an article. Unlike all other storage + methods, does not require manual article expiration; old articles + are deleted to make room for new ones when the buffers get too full. + Also, with CNFS your server will never throttle itself due to a full + spool disk, and groups are restricted to just the buffer files you + give them so that they can never use more than the amount of disk + space you allocate to them. + + Disadvantages: Article retention times are more difficult to + control because old articles are overwritten automatically. Attacks + on Usenet, such as flooding or massive amounts of spam, can result + in wanted articles expiring much faster than you intended (with no + warning). + + Some general recommendations: If you are installing a transit news + server (one that just accepts news and sends it out again to other + servers and doesn't support any readers), use CNFS exclusively and don't + worry about any of the other storage methods. Otherwise, put + high-volume groups and groups whose articles you don't need to keep + around very long (binaries groups, *.jobs*, news.lists.filters, etc.) in + CNFS buffers, and use timehash, timecaf, or tradspool (if you have a + fast I/O subsystem or need to be able to go through the spool manually) + for everything else. You'll probably find it most convenient to keep + special hierarchies like local hierarchies and hierarchies that should + never expire in tradspool. + + If your news server will be supporting readers, you'll also need to + choose an overview storage mechanism (by setting *ovmethod* in + inn.conf). 
There are three overview mechanisms to choose from: + tradindexed, buffindexed, and ovdb. tradindexed is very fast for + readers, but it has to update two files for each incoming article and + can be quite slow to write. buffindexed can keep up with a large feed + more easily, since it uses large buffers to store all overview + information, but it's somewhat slower for readers (although not as slow + as the unified overview in INN 2.2). ovdb stores overview data in a + Berkeley DB database; it's fast and very robust, but requires more disk + space. See the ovdb(5) man page for more information on it. + + Note that ovdb has not been as widely tested as the other overview + mechanisms and should be considered experimental. tradindexed is the + best tested and most widely used of the overview implementations. + + If buffindexed is chosen, you will need to create the buffers for it to + use (very similar to creating CNFS buffers) and list the available + buffers in buffindexed.conf. See buffindexed.conf(5) for more + information. + +Configuring INN + + All documentation from this point on assumes that you have set up the + news user on your system as suggested in "Installing INN" so that the + root of your INN installation is ~news/. If you've moved things around + by using options with "configure", you'll need to adjust the + instructions to account for that. + + All of INN's configuration files are located in ~news/etc. Unless noted + otherwise, any files referred to below are in this directory. When you + first install INN, a sample of each file (containing lots of comments) + is installed in ~news/etc; refer to these for concrete examples of + everything discussed in this section. + + All of INN's configuration files, all of the programs that come with it, + and some of its library routines have documentation in the form of man + pages. These man pages were installed in ~news/man as part of the INN + installation process and are the most complete reference to how INN + works. You're strongly encouraged to refer to the man pages frequently + while configuring INN, and for quick reference afterwards. Any detailed + questions about individual configuration files or the behavior of + specific programs should be answered in them. You may want to add + ~news/man to your MANPATH environment variable; otherwise, you may have + to use a command like: + + man -M ~news/man inn.conf + + to see the inn.conf(5) man page (for example). + + Before we begin, it is worth mentioning the wildmat pattern matching + syntax used in many configuration files. These are simple wildcard + matches using the asterisk ("*") as the wildcard character, much like + the simple wildcard expansion used by Unix shells. + + In many cases, wildmat patterns can be specified in a comma-separated + list to indicate a list of newsgroups. When used in this fashion, each + pattern is checked in turn to see if it matches, and the last pattern in + the line that matches the group name is used. Patterns beginning with + "!" mean to exclude groups matching that pattern. For example: + + *, !comp.*, comp.os.* + + In this case, we're saying we match everything ("*"), except that we + don't match anything under comp ("!comp.*"), unless it is actually under + the comp.os hierarchy ("comp.os.*"). 
This is because non-comp groups + will match only the first pattern (so we want them), comp.os groups will + match all three patterns (so we want them too, because the third pattern + counts in this case), and all other comp groups will match the first and + second patterns and will be excluded by the second pattern. + + Some uses of wildmat patterns also support "poison" patterns (patterns + starting with "@"). These patterns behave just like "!" patterns when + checked against a single newsgroup name. Where they become special is + for articles crossposted to multiple newsgroups; normally, such an + article will be considered to match a pattern if any of the newsgroups + it is posted to matches the pattern. If any newsgroup the article is + posted to matches an expression beginning with "@", however, that + article will not match the pattern even if other newsgroups to which it + was posted match other expressions. + + See uwildmat(3) for full details on wildmat patterns. + + In all INN configuration files, blank lines and lines beginning with a + "#" symbol are considered comments and are ignored. Be careful, not all + files permit comments to begin in the middle of the line. + + inn.conf + + The first, and most important file is inn.conf. This file is organized + as a series of parameter-value pairs, one per line. The parameter is + first, followed by a colon and one or more whitespace characters, and + then the value itself. For some parameters the value is a string or a + number; for others it is true or false. (True values can be written as + "yes", "true", or "on", whichever you prefer. Similarly, false values + can be written as "no", "false", or "off".) + + inn.conf contains dozens of changeable parameters (see inn.conf(5) for + full details), but only a few really need to be edited during normal + operation: + + allownewnews + If set to true then INN will support the NEWNEWS command for news + readers. While this can be an expensive operation, its speed has + been improved considerably as of INN 2.3 and it's probably safe to + turn on without risking excessive server load. The default is true. + (Note that the *access:* setting in readers.conf overrides this + value; see readers.conf(5) for more details.) + + complaints + Used to set the value of the X-Complaints-To: header, which is added + to all articles posted locally. The usual value would be something + like "abuse@example.com" or "postmaster@example.com". If not + specified, the newsmaster email address will be used. + + hiscachesize + The amount of memory (in kilobytes) to allocate for a cache of + recently used history file entries. Setting this to 0 disables + history caching. History caching can greatly increase the number of + articles per second that your server is capable of processing. A + value of 256 is a good default choice. + + logipaddr + If set to true (the default), INN will log the IP address (or + hostname, if the host is listed in incoming.conf with a hostname) of + the remote host from which it received an article. If set to false, + the trailing Path: header entry is logged instead. If you are using + controlchan (see below) and need to process ihave/sendme control + messages (this is very, very unlikely, so if you don't know what + this means, don't worry about it), make sure you set this to false, + since controlchan needs a site name, not an IP address. 
+ + organization + Set this to the name of your organization as you want it to appear + in the Organization: header of all articles posted locally and not + already containing that header. This will be overridden by the + value of the ORGANIZATION environment variable (if it exists). If + neither this parameter nor the environment variable or set, no + Organization: header will be added to posts which lack one. + + pathhost + This is the name of your news server as you wish it to appear in the + Path: header of all postings which travel through your server (this + includes local posts and incoming posts that you forward out to + other sites). If this parameter is unspecified, the fully-qualified + domain name (FQDN) of the machine will be used instead. Please use + the FQDN of your server or an alias for your server unless you have + a very good reason not to; a future version of the news RFCs may + require this. + + rlimitnofile + If set to a non-negative value (the default is -1), INN (both innd + and innfeed) will try to raise the maximum number of open file + descriptors to this value when it starts. This may be needed if you + have lots of incoming and outgoing feeds. Note that the maximum + value for this setting is very operating-system-dependent, and you + may have to reconfigure your system (possibly even recompile your + kernel) to increase it. See "File Descriptor Limits" for complete + details. + + There are tons of other possible settings; you may want to read through + inn.conf(5) to get a feel for your options. Don't worry if you don't + understand the purpose of most of them right now. Some of the settings + are only needed for very obscure things, and with more experience + running your news server the rest will make more sense. + + newsfeeds + + newsfeeds determines how incoming articles are redistributed to your + peers and to other INN processes. newsfeeds is very versatile and + contains dozens of options; we will touch on just the basics here. The + manpage contains more detailed information. + + newsfeeds is organized as a series of feed entries. Each feed entry is + composed of four fields separated by colons. Entries may span multiple + lines by using a backslash ("\") to indicate that the next line is a + continuation of the current line. (Note that comments don't interact + with backslashes in the way you might expect. A commented-out line + ending in a backslash will still be considered continued on the next + line, possibly resulting in more commented out than you intended or + bizarre syntax errors. In general, it's best to avoid commenting out + lines in the middle of continuation lines.) + + The first field in an entry is the name of the feed. It must be unique, + and for feeds to other news servers it is usually set to the actual + hostname of the remote server (this makes things easier). The name can + optionally be followed by a slash and a comma-separated exclude list. + If the feed name or any of the names in the exclude list appear in the + Path line of an article, then that article will not be forwarded to the + feed as it is assumed that it has passed through that site once already. + The exclude list is useful when a news server's hostname is not the same + as what it puts in the Path header of its articles, or when you don't + want a feed to receive articles from a certain source. + + The second field specifies a set of desired newsgroups and distribution + lists, given as newsgroup-pattern/distribution-list. 
The distribution + list is not described here; see newsfeeds(5) for information (it's not + used that frequently in practice). The newsgroup pattern is a + wildmat-style pattern list as described above (supporting "@"). + + The third field is a comma-separated list of flags that determine both + the type of feed entry and sets certain parameters for the entry. See + newsfeeds(5) for information on the flag settings; you can do a + surprising amount with them. The three most common patterns, and the + ones mainly used for outgoing news feeds to other sites, are "Tf,Wnm" + (to write out a batch file of articles to be sent, suitable for + processing by nntpsend and innxmit), "Tm" (to send the article to a + funnel feed, used with innfeed), and "Tc,Wnm*" (to collect a funnel feed + and send it via a channel feed to an external program, used to send + articles to innfeed). + + The fourth field is a multi-purpose parameter whose meaning depends on + the settings of the flags in the third field. To get a feel for it + using the examples above, for file feeds ("Tf") it's the name of the + file to write, for funnel feeds ("Tm") it's the name of the feed entry + to funnel into, and for channel feeds ("Tc") it's the name of the + program to run and feed references to articles. + + Now that you have a rough idea of the file layout, we'll begin to add + the actual feed entries. First, we'll set up the special ME entry. + This entry is required and serves two purposes: the newsgroup pattern + specified here is prepended to the newsgroup list of all other feeds, + and the distribution pattern for this entry determines what + distributions (from the Distribution: header of incoming articles) are + accepted from remote sites by your server. The example in the sample + newsfeeds file is a good starting point. If you are going to create a + local hierarchy that should not be distributed off of your system, it + may be useful to exclude it from the default subscription pattern, but + default subscription patterns are somewhat difficult to use right so you + may want to just exclude it specifically from every feed instead. + + The ME entry tends to confuse a lot of people, so this point is worth + repeating: the newsgroup patterns set the default subscription for + *outgoing* feeds, and the distribution patterns set the acceptable + Distribution: header entries for *incoming* articles. This is confusing + enough that it may change in later versions of INN. + + There are two basic ways to feed articles to remote sites. The most + common for large sites and particularly for transit news servers is + innfeed(8), which sends articles to remote sites in real time (the + article goes out to all peers that are supposed to receive it + immediately after your server accepts it). For smaller sites, + particularly sites where the only outgoing messages will be locally + posted articles, it's more common to batch outgoing articles and send + them every ten minutes or so from cron using nntpsend(8) and innxmit(8). + Batching gives you more control and tends to be extremely stable and + reliable, but it's much slower and can't handle high volume very well. + + Batching outgoing posts is easy to set up; for each peer, add an entry + to newsfeeds that looks like: + + remote.example.com/news.example.com\ + :\ + :Tf,Wnm: + + where is the wildmat pattern for the newsgroups that site + wants. In this example, the actual name of the remote site is + "remote.example.com", but it puts "news.example.com" in the Path: + header. 
If the remote site puts its actual hostname in the Path: + header, you won't need the "/news.example.com" part. + + This entry will cause innd to write out a file in ~news/spool/outgoing + named remote.example.com and containing the Message-ID and storage token + of each message to send to that site. (The storage token is INN's + internal pointer to where an article is stored; to retrieve an article + given its storage token, use sm(8)). innxmit knows how to read files of + this format and send those articles to the remote site. For information + on setting it up to run periodically, see "Setting Up the Cron Jobs" + below. You will also need to set up a config file for nntpsend; see the + man page for nntpsend.ctl(5) for more information. + + If instead you want to use innfeed to send outgoing messages + (recommended for sites with more than a couple of peers), you need some + slightly more complex magic. You still set up a separate entry for each + of your peers, but rather than writing out batch files, they all + "funnel" into a special innfeed entry. That special entry collects all + of the separate funnel feeds and sends the data through a special sort + of feed to an external program (innfeed in this case); this is a + "channel" feed. + + First, the special channel feed entry for innfeed that will collect all + the funnel feeds: + + innfeed!\ + :!*\ + :Tc,Wnm*:/usr/local/news/bin/startinnfeed -y + + (adjust the path to startinnfeed(1) if you installed it elsewhere). + Note that we don't feed this entry any articles directly (its newsgroup + pattern is "!*"). Note also that the name of this entry ends in an + exclamation point. This is a standard convention for all special feeds; + since the delimiter for the Path: header is "!", no site name containing + that character can ever match the name of a real site. + + Next, set up entries for each remote site to which you will be feeding + articles. All of these entries should be of the form: + + remote.example.com/news.example.com\ + :\ + :Tm:innfeed! + + specifying that they funnel into the "innfeed!" feed. As in the + previous example for batching, "remote.example.com" is the actual name + of the remote peer, "news.example.com" is what it puts in the Path: + header (if different than the actual name of the server), and + is the wildmat pattern of newsgroups to be sent. + + As an alternative to NNTP, INN may also feed news out to an IMAP server, + by using imapfeed(8), which is almost identical to innfeed. The + startinnfeed process can be told to start imapfeed instead of innfeed. + The feed entry for this is as follows: + + imapfeed!\ + :!*\ + :Tc,Wnm*,S16384:/usr/local/news/bin/startinnfeed imapfeed + + And set up entries for each remote site like: + + remote.example.com/news.example.com\ + :\ + :Tm:imapfeed! + + For more information on imapfeed, look at the innfeed/imap_connection.c. + For more information on IMAP in general, see RFC 2060. + + Finally, there is a special entry for controlchan(8), which processes + newsgroup control messages, that should always be in newsfeeds unless + you never want to honor any control messages. This entry should look + like: + + controlchan!\ + :!*,control,control.*,!control.cancel\ + :Tc,Wnsm:/usr/local/news/bin/controlchan + + (modified for the actual path to controlchan if you put it somewhere + else). See "Processing Control Messages" for more details. 
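  To recap the real-time setup as one block: together with the ME entry
  from the sample file and the controlchan! entry just shown, the
  outgoing side of newsfeeds for two hypothetical peers reduces to the
  innfeed channel entry plus one short funnel entry per peer (the host
  names and the "*,!local.*" subscription pattern below are placeholders):

      innfeed!\
          :!*\
          :Tc,Wnm*:/usr/local/news/bin/startinnfeed -y

      peer1.example.com\
          :*,!local.*\
          :Tm:innfeed!

      peer2.example.com/news.peer2.example.com\
          :*,!local.*\
          :Tm:innfeed!

  Adding a peer later is then just a matter of adding another short "Tm"
  entry and reloading newsfeeds with ctlinnd(8); remember that innfeed
  also needs to know about the peer in its own configuration file.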
+ + For those of you upgrading from earlier versions of INN, note that the + functionality of overchan(8) and crosspost is now incorporated into INN + and neither of those programs is necessary. Unfortunately, crosspost + currently will not work even with the tradspool storage method. You can + still use overchan if you make sure to set *useoverchan* to true in + inn.conf so that innd doesn't write overview data itself, but be + careful: innd may accept articles faster than overchan can process the + data. + + incoming.conf + + incoming.conf file specifies which machines are permitted to connect to + your host and feed it articles. Remote servers you peer with should be + listed here. Connections from hosts not listed in this file will (if + you don't allow readers) be rejected or (if you allow readers) be handed + off to nnrpd and checked against the access restrictions in + readers.conf. + + Start with the sample incoming.conf and, for each remote peer, add an + entry like: + + peer remote.example.com { } + + This uses the default parameters for that feed and allows incoming + connections from a machine named "remote.example.com". If that peer + could be connecting from several different machines, instead use an + entry like: + + peer remote.example.com { + hostname: "remote.example.com, news.example.com" + } + + This will allow either "remote.example.com" or "news.example.com" to + feed articles to you. (In general, you should add new peer lines for + each separate remote site you peer with, and list multiple host names + using the *hostname* key if one particular remote site uses multiple + servers.) + + You can restrict the newsgroups a remote site is allowed to send you, + using the same sort of pattern that newsfeeds(5) uses. For example, if + you want to prevent "example.com" hosts from sending you any articles in + the "local.*" hierarchy (even if they're crossposted to other groups), + change the above to: + + peer remote.example.com { + patterns: "*, @local.*" + hostname: "remote.example.com, news.example.com" + } + + Note, however, that restricting what a remote side can send you will + *not* reduce your incoming bandwidth usage. The remote site will still + send you the entire article; INN will just reject it rather than saving + it to disk. To reduce bandwidth, you have to contact your peers and ask + them not to send you the traffic you don't want. + + There are various other things you can set, including the maximum number + of connections the remote host will be allowed. See incoming.conf(5) + for all the details. + + Note for those familiar with older versions of INN: this file replaces + the old hosts.nntp configuration file. + + cycbuff.conf + + cycbuff.conf is only required if CNFS is used. If you aren't using + CNFS, skip this section. + + CNFS stores articles in logical objects called *metacycbuffs*. Each + metacycbuff is in turn composed of one or more physical buffers called + *cycbuffs*. As articles are written to the metacycbuff, each article is + written to the next cycbuff in the list in a round-robin fashion (unless + "sequential" mode is specified, in which case each cycbuff is filled + before moving on to the next). This is so that you can distribute the + individual cycbuffs across multiple physical disks and balance the load + between them. + + There are two ways to create your cycbuffs: + + 1. Use a block device directly. 
This will probably give you the most + speed since it avoids the file system overhead of large files, but + it requires your OS support mmap(2) on a block device. Solaris + supports this, as do late Linux 2.4 kernels. FreeBSD does not at + last report. Also on many PC-based Unixes it is difficult to create + more than eight partitions, which may limit your options. + + 2. Use a real file on a filesystem. This will probably be a bit slower + than using a block device directly, but it should work on any Unix + system. + + If you're having doubts, use option #2; it's easier to set up and should + work regardless of your operating system. + + Now you need to decide on the sizes of your cycbuffs and metacycbuffs. + You'll probably want to separate the heavy-traffic groups + ("alt.binaries.*" and maybe a few other things like "*.jobs*" and + "news.lists.filters") into their own metacycbuff so that they don't + overrun the server and push out articles on the more useful groups. If + you have any local groups that you want to stay around for a while then + you should put them in their own metacycbuff as well, so that they don't + get pushed out by other traffic. (Or you might store them in one of the + other storage methods, such as tradspool.) + + For each metacycbuff, you now need to determine how many cycbuffs will + make up the metacycbuff, the size of those cycbuffs, and where they will + be stored. Some OSes do not support files larger than 2 GB, which will + limit the size you can make a single cycbuff, but you can still combine + many cycbuffs into each metacycbuff. Older versions of Linux are known + to have this limitation; FreeBSD does not. Some OSes that support large + files don't support direct access to block devices for large partitions + (Solaris prior to Solaris 7, or not running in 64-bit mode, is in this + category); on those OSes, if you want cycbuffs over 2 GB, you'll have to + use regular files. If in doubt, keep your cycbuffs smaller than 2 GB. + Also, when laying out your cycbuffs, you will want to try to arrange + them across as many physical disks as possible (or use a striped disk + array and put them all on that). + + In order to use any cycbuff larger than 2 GB, you need to build INN with + the --enable-largefiles option. See "Installing INN" for more + information and some caveats. + + For each cycbuff you will be creating, add a line to cycbuff.conf like + the following: + + cycbuff:NAME:/path/to/buffer:SIZE + + NAME must be unique and must be at most seven characters long. + Something simple like "BUFF00", "BUFF01", etc. is a decent choice, or + you may want to use something that includes the SCSI target and slice + number of the partition. SIZE is the buffer size in kilobytes (if + you're trying to stay under 2 GB, keep your sizes below 2097152). + + Now, you need to tell INN how to group your cycbuffs into metacycbuffs. + This is similar to creating cycbuff entries: + + metacycbuff:BUFFNAME:CYCBUFF,CYCBUFF,CYCBUFF + + BUFFNAME is the name of the metacycbuff and must be unique and at most + eight characters long. These should be a bit more meaningful than the + cycbuff names since they will be used in other config files as well. + Try to name them after what will be stored in them; for example, if this + metacycbuff will hold alt.binaries postings, "BINARIES" would be a good + choice. The last part of the entry is a comma-separated list of all of + the cycbuffs that should be used to build this metacycbuff. 
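  As an illustration, a small layout with one metacycbuff for general
  traffic and one for binaries might look like the following; the paths
  and sizes are placeholders, and the names simply respect the seven- and
  eight-character limits:

      cycbuff:BUFF00:/news/cycbuffs/buff00:2048000
      cycbuff:BUFF01:/news/cycbuffs/buff01:2048000
      cycbuff:BINB00:/news/cycbuffs/binb00:2048000

      metacycbuff:ARTS:BUFF00,BUFF01
      metacycbuff:BINARIES:BINB00

  Here ARTS alternates writes between two cycbuffs of just under 2 GB
  each, while BINARIES uses a single buffer.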
Each + cycbuff should only appear in one metacycbuff line, and all metacycbuff + lines must occur after all cycbuff lines in the file. + + If you want INN to fill each cycbuff before moving on to the next one + rather than writing to them round-robin, add ":SEQUENTIAL" to the end of + the metacycbuff line. This may give noticeably better performance when + using multiple cycbuffs on the same spindle (such as partitions or + slices of a larger disk), but will probably give worse performance if + your cycbuffs are spread out across a lot of spindles. + + By default, CNFS data is flushed to disk every 25 articles. If you're + running a small server with a light article load, this could mean losing + quite a few articles in a crash. You can change this interval by adding + a cycbuffupdate line to your cycbuff.conf file; see cycbuff.conf(5) for + more details. + + Finally, you have to create the cycbuffs. See "Creating the Article + Spool" for more information on how to do that. + + storage.conf + + storage.conf determines where incoming articles will be stored (what + storage method, and in the case of CNFS, what metacycbuff). Each entry + in the file defines a storage class for articles. The first matching + storage class is used to store the article; if no storage class matches, + INN will reject that article. (This is almost never what you want, so + make sure this file ends in a catch-all entry that will match + everything.) + + A storage class definition looks like this: + + method { + newsgroups: + class: + size: [,] + expires: [,] + options: + } + + is the name of the storage method to use to store articles + in this class ("cnfs", "timehash", "timecaf", "tradspool", or the + special method "trash" that accepts the article and throws it away). + + The first parameter is a wildmat pattern in the same format used by the + newsfeeds(5) file, and determines what newsgroups are accepted by this + storage class. + + The second parameter is a unique number identifying this storage class + and should be between 0 and 255. It can be used to control article + expiration, and for timehash and timecaf will set the top-level + directory in which articles accepted by this storage class are stored. + The easiest way to deal with this parameter is to just number all + storage classes in storage.conf sequentially. The assignment of a + particular number to a storage class is arbitrary but *permanent* (since + it is used in storage tokens). + + The third parameter can be used to accept only articles in a certain + size range into this storage class. A of 0 (or a missing + ) means no upper limit (and of course a of 0 would + mean no lower limit, because all articles are more than zero bytes + long). If you don't want to limit the size of articles accepted by this + storage class, leave this parameter out entirely. + + The fourth parameter you probably don't want to use; it lets you assign + storage classes based on the Expires: header of incoming articles. The + exact details are in storage.conf(5). It's very easy to use this + parameter incorrectly; leave it out entirely unless you've read the man + page and know what you're doing. + + The fifth parameter is the options parameter. Currently only CNFS uses + this field; it should contain the name of the metacycbuff used to store + articles in this storage class. 
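  Putting the five fields together, a sketch for a server mixing CNFS and
  tradspool might look like this; the patterns and class numbers are
  illustrative, and the metacycbuff names BINARIES and ARTS must match
  whatever is defined in your cycbuff.conf:

      method cnfs {
          newsgroups: alt.binaries.*,*.jobs*,news.lists.filters
          class: 1
          options: BINARIES
      }

      method tradspool {
          newsgroups: local.*
          class: 2
      }

      method cnfs {
          newsgroups: *
          class: 3
          options: ARTS
      }

  Because the first matching class wins, the catch-all entry goes last so
  that no article is left without a storage class.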
+ + If you're using CNFS exclusively, just create one storage class for each + metacycbuff that you have defined in cycbuff.conf and set the newsgroups + pattern according to what newsgroups should be stored in that buffer. + + If you're using timehash or timecaf, the storage class IDs are used to + store articles in separate directory trees, which you can take advantage + of to put particular storage classes on different disks. Also, + currently storage class is the only way to specify expiration time, so + you will need to divide up your newsgroups based on how long you want to + retain articles in those groups and create a storage class for each such + collection of newsgroups. Make note of the storage class IDs you assign + as they will be needed when you edit expire.ctl a bit later. + + expire.ctl + + expire.ctl sets the expiration policy for articles stored on the server. + Be careful, since the default configuration will expire most articles + after 10 days; in most circumstances this deletion is *permanent*, so + read this whole section carefully if you want to keep local hierarchies + forever. (See archive(8) for a way to automate backups of important + articles.) + + Only one entry is required for all storage classes; it looks like: + + /remember/:10 + + This entry says how long to keep the Message-IDs for articles that have + already expired in the history file so that the server doesn't accept + them again. Occasionally, fairly old articles will get regurgitated + somewhere and offered to you again, so even after you've expired + articles from your spool, you want to keep them around in your history + file for a little while to ensure you don't get duplicates. + + INN will reject any articles more than a certain number of days old (the + *artcutoff* parameter in inn.conf, defaulting to 10); the number on the + "/remember/" line should match that. + + CNFS makes no further use of expire.ctl, since articles stored in CNFS + buffers expire automatically when the buffer runs out of free space (but + see the "-N" option in expireover(8) if you really want to expire them + earlier). For other storage methods, there are two different syntaxes + of this file, depending on *groupbaseexpiry* in inn.conf. If it is set + to false, expire.ctl takes entries of the form: + + ::: + + is the number assigned to a storage class in + storage.conf. is the number of days to keep normal articles + in that storage class (decimal values are allowed). For articles that + don't have an Expires: header, those are the only two values that + matter. For articles with an Expires: header, the other two values come + into play; the date given in the Expires: header of an article will be + honored, subject to the contraints set by and . All + articles in this storage class will be kept for at least days, + regardless of their Expires: headers, and all articles in this storage + class will be expired after days, even if their Expires: headers + specify a longer life. + + All three of these fields can also contain the special keyword "never". + If is "never", only articles with explicit Expires: headers + will ever be expired. If is "never", articles with explicit + Expires: headers will be kept forever. Setting to "never" says + to honor Expires: headers even if they specify dates far into the + future. (Note that if is set to "never", all articles with + Expires: headers are kept forever and the value of is not used.) 
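  As a sketch of that form of the file, suppose class 0 holds general
  groups in timehash and class 1 holds local groups in tradspool (the
  class numbers and times are purely illustrative; the field order is the
  one documented in expire.ctl(5), i.e. class number, then the minimum,
  default, and maximum retention in days):

      /remember/:10

      # Class 0: expire after 10 days when there is no Expires: header;
      # honor Expires: headers asking for between 1 and 30 days.
      0:1:10:30

      # Class 1: keep articles forever.
      1:never:never:never

  No entries are needed for classes stored in CNFS, as noted above.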
+ + If the value of "groupbaseexpiry" is true, expire.ctl takes entries of + the form: + + :::: + + is a wildmat expression ("!" and "@" not permitted, and only a + single expression, not a comma-separated set of them). Each expiration + line applies to groups matching the wildmat expression. is "M" + for moderated groups, "U" for unmoderated groups, and "A" for groups + with any moderation status; the line only matches groups with the + indicated expiration status. All of the other fields have the same + meaning as above. + + readers.conf + + Provided that *noreader* is set to false in inn.conf, any connection + from a host that doesn't match an entry in incoming.conf (as well as any + connection from a host that does match such an entry, but has issued a + MODE READER command) will be handed off to nnrpd(8), the part of INN + that supports newsreading clients. nnrpd uses readers.conf to determine + whether a given connection is allowed to read news, and if so what + newsgroups the client can read and post to. + + There are a variety of fairly complicated things that one can do with + readers.conf, things like run external authentication programs that can + query RADIUS servers. See readers.conf(5) and the example file for all + the gory details. Here's an example of probably the simplest reasonable + configuration, one that only allows clients in the example.com domain to + read from the server and allows any host in that domain to read and post + to all groups: + + auth "example.com" { + hosts: "example.com, *.example.com" + default: "" + default-domain: "example.com" + } + + access "all" { + users: "*@example.com" + newsgroups: "*" + } + + If you're running a server for one particular domain, want to allow all + hosts within that domain to read and post to any group on the server, + and want to deny access to anyone outside that domain, just use the + above and change "example.com" in the above to your domain and you're + all set. Lots of examples of more complicated things are in the sample + file. + +Creating the Article Spool (CNFS only) + + If you are using actual files as your CNFS buffers, you will need to + pre-create those files, ensuring they're the right size. The easiest + way to do this is with dd. For each cycbuff in cycbuff.conf, create the + buffer with the following commands (as the news user): + + dd if=/dev/zero of=/path/to/buffer bs=1k count=BUFFERSIZE + chmod 664 /path/to/buffer + + Substitute the correct path to the buffer and the size of the buffer as + specified in cycbuff.conf. This will create a zero-filled file of the + correct size; it may take a while, so be prepared to wait. + + Here's a command that will print out the dd(1) commands that you should + run: + + awk -F: \ + '/^cy/ { printf "dd if=/dev/zero of=%s bs=1k count=%s\n", $3, $4 }' \ + ~news/etc/cycbuff.conf + + If you are using block devices, you don't technically have to do + anything at all (since INN is capable of using the devices in /dev), but + you probably want to create special device files for those devices + somewhere for INN's private use. It s more convenient to keep all of + INN's stuff together, but more importantly, the device files used by INN + really should be owned by the news user and group, and you may not want + to do that with the files in /dev. + + To create the device files for INN, use mknod(8) with a type of "b", + getting the major and minor device numbers from the existing devices in + /dev. There's a small shell script in cycbuff.conf(5) that may help + with this. 
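  For instance, on a Linux-style /dev layout the process might look like
  this; the device name, the major and minor numbers, and the target path
  are purely illustrative and must be taken from your own system:

      # Note the major and minor numbers of the partition (shown in the
      # ls output where the file size normally appears).
      ls -lL /dev/sdb1

      # Create a block device node owned by the news user and group.
      mknod /usr/local/news/cycbuffs/BUFF00 b 8 17
      chown news:news /usr/local/news/cycbuffs/BUFF00
      chmod 664 /usr/local/news/cycbuffs/BUFF00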
Make sure to create the device files in the location INN + expects them (specified in cycbuff.conf). + + Solaris users please note: on Solaris, do not use block devices that + include the first cylinder of the disk. Solaris doesn't protect the + superblock from being overwritten by an application writing to block + devices and includes it in the first cylinder of the disk, so unless you + use a slice that starts with cylinder 1 instead of 0, INN will + invalidate the partition table when it tries to initialize the cycbuff + and all further accesses will fail until you repartition. + +Creating the Database Files + + At this point, you need to set up the news database directory + (~news/db). This directory will hold the active(5) file (the list of + newsgroups you carry), the active.times(5) file (the creator and + creation time of newsgroups created since the server was initialized), + the newsgroups(5) file (descriptions for all the newsgroups you carry), + and the history(5) file (a record of every article the server currently + has or has seen in the past few days, used to decide whether to accept + or refuse new incoming messages). + + Before starting to work on this, make sure you're logged on as the news + user, since all of these files need to be owned by that user. This is a + good policy to always follow; if you are doing any maintenance work on + your news server, log on as the news user. Don't do maintenance work as + root. Also make sure that ~news/bin is in the default path of the news + user (and while you're at it, make sure ~news/man is in the default + MANPATH) so that you can run INN maintenance commands without having to + type the full path. + + If you already have a server set up (if you're upgrading, or setting up + a new server based on an existing server), copy active and newsgroups + from that server into ~news/db. Otherwise, you'll need to figure out + what newsgroups you want to carry and create new active and newsgroups + files for them. If you plan to carry a full feed, or something close to + that, go to and download active + and newsgroups from there; that will start you off with reasonably + complete files. If you plan to only carry a small set of groups, the + default minimal active file installed by INN is a good place to start; + you can create additional groups after the server is running by using + "ctlinnd newgroup". (Another option is to use actsync(8) to synchronize + your newsgroup list to that of another server.) + + "control" and "junk" must exist as newsgroups in your active file for + INN to start, and creating pseudogroups for the major types of control + messages is strongly encouraged for all servers that aren't standalone. + If you don't want these groups to be visible to clients, do *not* delete + them; simply hide them in readers.conf. "to" must also exist as a + newsgroup if you have mergetogroups set in inn.conf. + + Next, you need to create an empty history database. To do this, type: + + cd ~news/db + touch history + makedbz -i + + When it finishes, rename the files it created to remove the ".n" in the + file names and then make sure the file permissions are correct on all + the files you've just created: + + chmod 644 * + + Your news database files are now ready to go. + +Configuring syslog + + While some logs are handled internally, INN also logs a wide variety of + information via syslog. INN's nightly report programs know how to roll + and summarize those syslog log files, but when you first install INN you + need to set them up. 
+ + If your system understands the "news" syslog facility, INN will use it; + otherwise, it will log to "local1". Nearly every modern system has a + "news" syslog facility so you can safely assume that yours does, but if + in doubt take a look at the output from running "configure". You should + see a line that looks like: + + checking log level for news... LOG_NEWS + + If that says LOG_LOCAL1 instead, change the below instructions to use + "local1" instead of "news". + + Edit /etc/syslog.conf on your system and add lines that look like the + following: + + news.crit /usr/local/news/log/news.crit + news.err /usr/local/news/log/news.err + news.notice /usr/local/news/log/news.notice + + (Change the path names as necessary if you installed INN in a different + location than /usr/local/news.) These lines *must* be tab-delimited, so + don't copy and paste from these instructions. Type it in by hand and + make sure you use a tab, or you'll get mysterious failures. You'll also + want to make sure that news log messages don't fill your other log files + (INN generates a lot of log traffic); so for every entry in + /etc/syslog.conf that starts with "*", add ";news.none" to the end of + the first column. For example, if you have a line like: + + *.err /dev/console + + change it to: + + *.err;news.none /dev/console + + (You can choose not to do this for the higher priority log messages, if + you want to make sure they go to your normal high-priority log files as + well as INN's. Don't bother with anything lower priority than "crit", + though. "news.err" isn't interesting enough to want to see all the + time.) Now, make sure that the news log files exist; syslog generally + won't create files automatically. Enter the following commands: + + touch /usr/local/news/log/news.crit + touch /usr/local/news/log/news.err + touch /usr/local/news/log/news.notice + chown news /usr/local/news/log/news.* + chgrp news /usr/local/news/log/news.* + + (again adjusting the paths if necessary for your installation). + Finally, send a HUP signal to syslogd to make it re-read its + configuration file. + +Setting Up the Cron Jobs + + INN requires a special cron job to be set up on your system to run + news.daily(8) which performs daily server maintenance tasks such as + article expiration and the processing and rotation of the server logs. + Since it will slow the server down while it is running, it should be run + during periods of low server usage, such as in the middle of the night. + To run it at 3am, for example, add the following entry to the news + user's crontab file: + + 0 3 * * * /usr/local/news/bin/news.daily expireover lowmark + + or, if your system does not have per-user crontabs, put the following + line into your system crontab instead: + + 0 3 * * * su -c "/usr/local/news/bin/news.daily expireover lowmark" news + + If you're using any non-CNFS storage methods, add "delayrm" to the above + option list for news.daily. + + The news user obviously must be able to run cron jobs. On Solaris, this + means that it must have a valid /etc/shadow entry and must not be locked + (although it may be a non-login account). There may be similar + restrictions with other operating systems. + + If you use the batching method to send news, also set up a cron job to + run nntpsend(8) every ten minutes. nntpsend will run innxmit for all + non-empty pending batch files to send pending news to your peers. 
That + cron entry should look something like: + + 0,10,20,30,40,50 * * * * /usr/local/news/bin/nntpsend + + The pathnames and user ID used above are the installation defaults; + change them to match your installation if you used something other than + the defaults. + + The parameters passed to news.daily in the above example are the most + common (and usually the most efficient) ones to use. More information + on what these parameters do can be found in the news.daily(8) man page. + +File Descriptor Limits + + INN likes to use a lot of file descriptors, particularly if you have a + lot of peers. Depending on what your system defaults are, you may need + to make sure the default limit is increased for INN (particularly for + innd and innfeed). This is vital on Solaris, which defaults (at least + as of 2.6) to an absurdly low limit of 64 file descriptors per process. + + One way to increase the number of file descriptors is to set + *rlimitnofile* in inn.conf to a higher value. This will cause both + startinnfeed and inndstart (the setuid root wrapper scripts that start + innfeed and innd, respectively) to increase the file descriptor limits + before they run the regular INN programs. Note, however, that INN won't + be able to increase the limits above the hard limits set by your + operating system; on some systems, that hard limit is normally 256 file + descriptors (Linux, for example). On others, like Solaris, it's 1024. + Increasing the limit beyond that value may require serious system + configuration work. (On some operating systems, it requires patching + and recompiling the kernel. On Solaris it can be changed in + /etc/system, but for 2.6 or earlier the limit cannot be increased beyond + 1024 without breaking select(2) and thereby breaking all of INN. For + current versions of Linux, you may be able to change the maximum by + writing to /proc/sys/fs/file-max.) + + 256 file descriptors will probably be enough for all but the largest + sites. There is no harm in setting the limits higher than you actually + need (provided they're set to something lower than or equal to your + system hard limit). 256 is therefore a reasonable value to try. + + If you're installing INN on a Solaris system, particularly if you're + installing it on a dedicated news server machine, it may be easier to + just increase the default file descriptor limit across the board for all + processes. You can do that by putting the line: + + set rlim_fd_cur = 256 + + in /etc/system and rebooting. You can increase it all the way to 1024 + (and may need to if you have a particularly large site), but that can + cause RPC and some stdio applications to break. It therefore probably + isn't a good idea on a machine that isn't dedicated to INN. + +Starting and Stopping the System + + INN is started via the shell script rc.news. This must be run as the + news user and not as root. To start INN on system boot, you therefore + want to put something like: + + su - news -c /usr/local/news/bin/rc.news + + in the system boot scripts. If innd is stopped or killed, you can + restart it by running rc.news by hand as the news user. + + The rc.news script may also be used to shut down INN, with the "stop" + option: + + su - news -c '/usr/local/news/bin/rc.news stop' + + In the contrib directory of this source tree is a sample init script for + people using System V-style init.d directories. + +Processing Newsgroup Control Messages + + Control messages are specially-formatted messages that tell news servers + to take various actions. 
    Cancels (commands to delete messages) are handled internally by INN,
    and all other control messages are processed by controlchan.
    controlchan should be run out of newsfeeds if you want your news
    server to process any control messages; see "Configuring INN" for
    specific instructions.

    The actions of controlchan are determined by control.ctl, which lists
    who can perform what actions. The primary control messages to be
    concerned with are "newgroup" (to create a newsgroup), "rmgroup" (to
    remove a newsgroup), and "checkgroups" (to compare the list of groups
    carried in a hierarchy to a canonical list). INN comes with a
    control.ctl file that processes control messages in most major public
    hierarchies; if you don't want to act on all those control messages,
    remove from that file the entries for hierarchies you don't want to
    carry.

    You can tell INN to authenticate control messages based only on the
    From header of the message, but this is perilous, since control
    messages are widely forged. Many hierarchies sign all of their
    control messages with PGP, allowing news servers to verify their
    authenticity, and checking those signatures for hierarchies that use
    them is highly recommended. controlchan knows how to do this (using
    pgpverify) without additional configuration, but you do have to
    provide it with a public key ring containing the public keys of all
    of the hierarchy administrators whose control messages you want to
    check.

    INN expects the public key ring to be either in the default location
    for a PGP public key ring for the news user (generally ~news/.gnupg
    for GnuPG and ~news/.pgp for old PGP implementations) or in
    pathetc/pgp (/usr/local/news/etc/pgp by default). The latter is the
    recommended path. To add a key to that key ring, use:

        gpg --import --homedir=/usr/local/news/etc/pgp <keyfile>

    where <keyfile> is a file containing the hierarchy key. Change the
    homedir setting to point to pathetc/pgp if you have INN installed in
    a non-default location. If you're using the old-style PGP program, an
    equivalent command is:

        env PGPPATH=/usr/local/news/etc/pgp pgp <keyfile>

    You can safely answer "no" to questions about whether you want to
    sign, trust, or certify keys.

    The URLs from which you can get hierarchy keys are noted in comments
    in control.ctl; the ISC's pgpcontrol archive at
    <ftp://ftp.isc.org/pub/pgpcontrol/> also tries to collect the major
    hierarchy keys.

    If you are using GnuPG, please note that the first user ID on the key
    will be the one that's used by INN for verification and must match
    the key listed in control.ctl. If a hierarchy key has multiple user
    IDs, you may have to remove all the user IDs except the one that
    matches the control.ctl entry using "gpg --edit-key" and the "deluid"
    command.
 diff --git a/LICENSE b/LICENSE new file mode 100644 index 0000000..4d0fec5 --- /dev/null +++ b/LICENSE @@ -0,0 +1,87 @@ +INN as a whole and all code contained in it not otherwise marked with +different licenses and/or copyrights is covered by the following copyright +and license: + + Copyright (c) 2004, 2005, 2006, 2007, 2008 + by Internet Systems Consortium, Inc. ("ISC") + Copyright (c) 1991, 1994, 1995, 1996, 1997, 1998, 1999, 2000, 2001, + 2002, 2003 by The Internet Software Consortium and Rich Salz + + This code is derived from software contributed to the Internet Software + Consortium by Rich Salz. 
+ + Permission to use, copy, modify, and distribute this software for any + purpose with or without fee is hereby granted, provided that the above + copyright notice and this permission notice appear in all copies. + + THE SOFTWARE IS PROVIDED "AS IS" AND ISC DISCLAIMS ALL WARRANTIES WITH + REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF + MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL ISC BE LIABLE FOR ANY + SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES + WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN + ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF + OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE. + +Some specific portions of INN are covered by different licenses. Those +licenses, if present, will be noted prominantly at the top of those source +files. Specifically (but possibly not comprehensively): + + authprogs/smbval/*, backends/send-uucp.in, and control/perl-nocem.in + are under the GNU General Public License. See doc/GPL for a copy of + this license. + + backends/shrinkfile.c, frontends/scanspool.in, lib/concat.c, + lib/hstrerror.c, lib/inet_aton.c, lib/inet_ntoa.c, lib/memcmp.c, + lib/parsedate.y, lib/pread.c, lib/pwrite.c, lib/setenv.c, lib/seteuid.c, + lib/strerror.c, lib/strlcat.c and lib/strlcpy.c are in the public + domain. + + lib/snprintf.c may be used for any purpose as long as the author's + notice remains intact in all source code distributions. + + control/gpgverify.in, control/pgpverify.in and control/signcontrol.in + are under a BSD-style license (with the advertising clause) with UUNET + Technologies, Inc. as the copyright holder. See the end of those files + for details. + + control/controlchan.in and control/modules/*.pl are covered by a + two-clause BSD-style license (no advertising clause). See the + beginning of those files for details. + + lib/strcasecmp.c, lib/strspn.c, and lib/strtok.c are taken from BSD + sources and are covered by the standard BSD license. See those files + for more details. + + lib/md5.c is covered under the standard free MD5 license from RSA Data + Security. See the file for more details. A clarification is also + provided here: . + + "Implementations of these message-digest algorithms, including + implementations derived from the reference C code in RFC-1319, + RFC-1320, and RFC-1321, may be made, used, and sold without + license from RSA for any purpose." + + history/his.c and history/hisv6/hisv6.c are under a license very + similar to the new BSD license (no advertising clause) but with Thus + plc as the copyright holder. See those files for details. + + lib/tst.c, include/inn/tst.h and doc/pod/tst.pod are derived from + and are under the new BSD + license (no advertising clause), but with Peter A. Friend as the + copyright holder. + + tests/runtests.c is covered under a license very similar to the MIT/X + Consortium license. See the beginning of the file for details. + +Note that all portions of INN that link with core INN code have to be +covered by licenses compatible with the license at the top of this file, +and since INN links with several external libraries if so configured (such +as OpenSSL), should also be compatible with the licenses of those external +libraries to be safe. Some portions of this distribution are covered by +more restrictive licenses, but all of that code is completely stand-alone, +either as a standalone script or as code that compiles into a separate +executable. 
+ +Please note that the files in the contrib directory are not properly part +of INN and may be under widely varying licenses. Please see each file +and/or its documentation for license information. diff --git a/MANIFEST b/MANIFEST new file mode 100644 index 0000000..d93dada --- /dev/null +++ b/MANIFEST @@ -0,0 +1,745 @@ +File Name Description +------------------------------------------------------------------------------- +CONTRIBUTORS List of contributors +HACKING Docs for INN coders and maintainers +INSTALL INN installation instructions +LICENSE Legal mumbo-jumbo +MANIFEST This shipping list +Makefile Top-level makefile +Makefile.global.in Make variables for all Makefiles +NEWS Changes since last version +README Introduction to the INN package +TODO The list of pending INN projects +aclocal.m4 M4 macro for libtool +authprogs The authentication programs (Directory) +authprogs/Makefile Makefile for auth programs +authprogs/auth_krb5.c Authenticator against Kerberos v5 +authprogs/auth_smb.c Authenticator against Samba servers +authprogs/ckpasswd.c Check password +authprogs/domain.c Get username from remote user's hostname +authprogs/ident.c Get username from ident +authprogs/libauth.c Library for talking to nnrpd +authprogs/libauth.h Interface for libauth +authprogs/radius.c Authenticator against RADIUS servers +authprogs/smbval The smb auth libraries (Directory) +authprogs/smbval/Makefile Libraries for smb auth +authprogs/smbval/byteorder.h Libraries for smb auth +authprogs/smbval/rfcnb-common.h Libraries for smb auth +authprogs/smbval/rfcnb-error.h Libraries for smb auth +authprogs/smbval/rfcnb-io.c Libraries for smb auth +authprogs/smbval/rfcnb-io.h Libraries for smb auth +authprogs/smbval/rfcnb-priv.h Libraries for smb auth +authprogs/smbval/rfcnb-util.c Libraries for smb auth +authprogs/smbval/rfcnb-util.h Libraries for smb auth +authprogs/smbval/rfcnb.h Libraries for smb auth +authprogs/smbval/session.c Libraries for smb auth +authprogs/smbval/smbdes.c Libraries for smb auth +authprogs/smbval/smbencrypt.c Libraries for smb auth +authprogs/smbval/smblib-common.h Libraries for smb auth +authprogs/smbval/smblib-priv.h Libraries for smb auth +authprogs/smbval/smblib-util.c Libraries for smb auth +authprogs/smbval/smblib.c Libraries for smb auth +authprogs/smbval/smblib.h Libraries for smb auth +authprogs/smbval/valid.c Libraries for smb auth +authprogs/smbval/valid.h Libraries for smb auth +backends Outgoing feed programs (Directory) +backends/Makefile Makefile for outgoing feed programs +backends/actmerge.in Merge two active files to stdout +backends/actsync.c Poll remote(s) for active file & merge +backends/actsyncd.in Daemon to run actsync periodically +backends/archive.c Simple article archiver +backends/batcher.c Make news batches +backends/buffchan.c Buffered funnel to file splitter +backends/crosspost.c Create links for crossposts (obselete) +backends/cvtbatch.c Add fields to simple batchfile +backends/filechan.c Split a funnel into separate files +backends/inndf.c df used for innwatch +backends/innxbatch.c Send batches using XBATCH to remote +backends/innxmit.c Send articles to remote site +backends/map.c Site name to filename mapping routines +backends/map.h Headers for backends/map.c +backends/mod-active.in Batch do active file modifications +backends/news2mail.in News to mail gateway +backends/ninpaths.c Path statistics accumulation program +backends/nntpget.c Get articles from remote site +backends/nntpsend.in Invoke all innxmit's at once +backends/overchan.c Update news 
overview database +backends/send-ihave.in Script to post ihave messages +backends/send-nntp.in Shell script to call innxmit +backends/send-uucp.in Script to call batcher +backends/sendinpaths.in Send accumulated Path statistics +backends/sendxbatches.in Shell wrapper around innxbatch +backends/shlock.c Program to make lockfiles in scripts +backends/shrinkfile.c Shrink file from beginning +configure Script to configure INN +configure.in Source file for configure +contrib External contributions (Directory) +contrib/Makefile Makefile for contrib programs +contrib/README Contents of the contrib directory +contrib/archivegz.in Compressing version of archive +contrib/auth_pass.README README corresponding to auth_pass.c +contrib/auth_pass.c Sample for use with AUTHINFO GENERIC +contrib/backlogstat.in Analyze innfeed's backlog status +contrib/backupfeed.in Suck down news via a reading connection +contrib/cleannewsgroups.in Script to clean newsgroups file +contrib/count_overview.pl Count overview entries +contrib/delayer.in Delay data in a pipe, for innfeed +contrib/expirectl.c Generate expire.ctl from template +contrib/findreadgroups.in Track which groups are being read +contrib/fixhist Script to clean history +contrib/innconfcheck Merge inn.conf with its man page +contrib/makeexpctl.in Create expire.ctl from read groups +contrib/makestorconf.in Create storage.conf from read groups +contrib/mkbuf Create cycbuff for HP-UX +contrib/mlockfile.c Lock files into memory using mlock +contrib/newsresp.c Measure responsiveness of remote server +contrib/pullart.c Recover articles from cyclic buffers +contrib/reset-cnfs.c Reset the state parts of a CNFS buffer +contrib/respool.c Respool articles in the storage manager +contrib/sample.init.script Example SysV-style init.d script +contrib/showtoken.in Decode storage API tokens +contrib/stathist.in Parse history statistics +contrib/thdexpire.in Dynamic expire for timehash and timecaf +contrib/tunefeed.in Tune a feed by comparing active files +control Control message handling (Directory) +control/Makefile Makefile for control programs +control/controlbatch.in Batch program for controlchan +control/controlchan.in Channel program for control messages +control/docheckgroups.in Script to execute checkgroups +control/gpgverify.in Verify control messages with GnuPG +control/modules Modules for controlchan (Directory) +control/modules/checkgroups.pl checkgroups controlchan handler +control/modules/ihave.pl ihave controlchan handler +control/modules/newgroup.pl newgroup controlchan handler +control/modules/rmgroup.pl rmgroup controlchan handler +control/modules/sendme.pl sendme controlchan handler +control/modules/sendsys.pl sendsys controlchan handler +control/modules/senduuname.pl senduuname controlchan handler +control/modules/version.pl version controlchan handler +control/perl-nocem.in NoCeM on spool implementation +control/pgpverify.in Verify control messages with PGP +control/signcontrol.in PGP control message signing program +doc Documentation (Directory) +doc/GPL The GNU General Public License 2.0 +doc/IPv6-info Nathan Lutchansky's IPv6 notes +doc/Makefile Makefile for documentation +doc/checklist Checklist for installing INN +doc/compliance-nntp INN compliance with the NNTP standard +doc/config-design Configuration parser design principles +doc/config-semantics Configuration file semantics +doc/config-syntax Configuration file syntax +doc/external-auth readers.conf external interface notes +doc/history Messages of historical significance +doc/hook-perl 
Christophe Wolfhugel's Perl hook notes +doc/hook-python Python hook notes +doc/hook-tcl Bob Halley's TCL hook notes +doc/man nroff documentation (Directory) +doc/man/Makefile Makefile for nroff documentation +doc/man/active.5 Manpage for active database +doc/man/active.times.5 Manpage for active.times file +doc/man/actsync.8 Manpage for active file synch program +doc/man/actsyncd.8 Manpage for active synch daemon +doc/man/archive.8 Manpage for archive backend +doc/man/auth_krb5.8 Manpage for auth_krb5 authenticator +doc/man/auth_smb.8 Manpage for auth_smb authenticator +doc/man/batcher.8 Manpage for batcher +doc/man/buffchan.8 Manpage for buffchan backend +doc/man/buffindexed.conf.5 Manpage for buffindexed.conf config file +doc/man/ckpasswd.8 Manpage for ckpasswd authenticator +doc/man/clientlib.3 Manpage for C News library interface +doc/man/cnfsheadconf.8 Manpage for cnfsheadconf +doc/man/cnfsstat.8 Manpage for cnfsstat +doc/man/control.ctl.5 Manpage for control.ctl config file +doc/man/controlchan.8 Manpage for controlchan backend +doc/man/convdate.1 Manpage for convdate utility +doc/man/ctlinnd.8 Manpage for ctlinnd frontend +doc/man/cvtbatch.8 Manpage for cvtbatch utility +doc/man/cycbuff.conf.5 Manpage for cycbuff.conf config file +doc/man/dbz.3 Manpage for DBZ database interface +doc/man/distrib.pats.5 Manpage for distrib.pats config file +doc/man/domain.8 Manpage for domain resolver +doc/man/expire.8 Manpage for expire +doc/man/expire.ctl.5 Manpage for expire.ctl config file +doc/man/expireover.8 Manpage for expireover +doc/man/expirerm.8 Manpage for expirerm +doc/man/fastrm.1 Manpage for fastrm utility +doc/man/filechan.8 Manpage for filechan backend +doc/man/getlist.1 Manpage for getlist frontend +doc/man/grephistory.1 Manpage for grephistory +doc/man/history.5 Manpage for history database +doc/man/ident.8 Manpage for ident resolver +doc/man/incoming.conf.5 Manpage for incoming.conf config file +doc/man/inews.1 Manpage for inews frontend +doc/man/inn.conf.5 Manpage for inn.conf config file +doc/man/inncheck.8 Manpage for inncheck utility +doc/man/innconfval.1 Manpage for innconfval +doc/man/innd.8 Manpage for innd server +doc/man/inndcomm.3 Manpage for part of INN library +doc/man/inndf.8 Manpage for inndf utility +doc/man/inndstart.8 Manpage for inndstart +doc/man/innfeed.1 Manpage for innfeed backend +doc/man/innfeed.conf.5 Manpage for innfeed.conf config file +doc/man/innmail.1 Manpage for innmail utility +doc/man/innreport.8 Manpage for innreport +doc/man/innstat.8 Manpage for innstat utility +doc/man/innupgrade.8 Manpage for innupgrade utility +doc/man/innwatch.8 Manpage for innwatch +doc/man/innwatch.ctl.5 Manpage for innwatch.ctl config file +doc/man/innxbatch.8 Manpage for innxbatch +doc/man/innxmit.8 Manpage for innxmit +doc/man/libauth.3 Manpage for authprogs utilty routines +doc/man/libinn.3 Manpage for INN library routines +doc/man/libinnhist.3 Manpage for history API library routines +doc/man/libstorage.3 Manpage for storage API library routines +doc/man/list.3 Manpage for list routines +doc/man/mailpost.8 Manpage for mailpost frontend +doc/man/makeactive.8 Manpage for makeactive +doc/man/makedbz.8 Manpage for makedbz +doc/man/makehistory.8 Manpage for makehistory +doc/man/mod-active.8 Manpage for mod-active +doc/man/moderators.5 Manpage for moderators config file +doc/man/motd.news.5 Manpage for motd.news config file +doc/man/news.daily.8 Manpage for news.daily +doc/man/news2mail.8 Manpage for news2mail +doc/man/newsfeeds.5 Manpage for newsfeeds config file 
+doc/man/newslog.5 Manpage for log files +doc/man/ninpaths.8 Manpage for ninpaths +doc/man/nnrpd.8 Manpage for nnrpd daemon +doc/man/nnrpd.track.5 Manpage for nnrpd.track config file +doc/man/nntpget.1 Manpage for nntpget frontend +doc/man/nntpsend.8 Manpage for nntpsend +doc/man/nntpsend.ctl.5 Manpage for nntpsend.ctl config file +doc/man/ovdb.5 Manpage for the ovdb storage module +doc/man/ovdb_init.8 Manpage for ovdb_init +doc/man/ovdb_monitor.8 Manpage for ovdb_monitor +doc/man/ovdb_server.8 Manpage for ovdb_server +doc/man/ovdb_stat.8 Manpage for ovdb_stat +doc/man/overchan.8 Manpage for overchan backend +doc/man/overview.fmt.5 Manpage for overview.fmt config file +doc/man/parsedate.3 Manpage for parsedate library routine +doc/man/passwd.nntp.5 Manpage for passwd.nntp config file +doc/man/perl-nocem.8 Manpage for perl-nocem +doc/man/pgpverify.1 Manpage for pgpverify +doc/man/prunehistory.8 Manpage for prunehistory +doc/man/pullnews.1 Manpage for pullnews +doc/man/putman.sh Install a manpage +doc/man/qio.3 Manpage for fast I/O file routines +doc/man/radius.8 Manpage for radius authenticator +doc/man/radius.conf.5 Manpage for radius.conf config file +doc/man/rc.news.8 Manpage for rc.news +doc/man/readers.conf.5 Manpage for readers.conf config file +doc/man/rnews.1 Manpage for rnews frontend +doc/man/sasl.conf.5 Manpage for sasl.conf config file +doc/man/scanlogs.8 Manpage for scanlogs +doc/man/send-nntp.8 Manpage for send-nntp and send-ihave +doc/man/send-uucp.8 Manpage for send-uucp +doc/man/sendinpaths.8 Manpage for sendinpaths +doc/man/shlock.1 Manpage for shlock backend utility +doc/man/shrinkfile.1 Manpage for shrinkfile utility +doc/man/simpleftp.1 Manpage for simpleftp utility +doc/man/sm.1 Manpage for sm +doc/man/startinnfeed.1 Manpage for startinnfeed +doc/man/storage.conf.5 Manpage for storage.conf config file +doc/man/subscriptions.5 Manpage for subscriptions list +doc/man/tally.control.8 Manpage for tally.control +doc/man/tdx-util.8 Manpage for tdx-util +doc/man/tst.3 Manpage for ternary search tree routines +doc/man/uwildmat.3 Manpage for uwildmat library routine +doc/man/writelog.8 Manpage for writelog +doc/pod POD documentation (Directory) +doc/pod/Makefile Maintainer rules for derived files +doc/pod/active.pod Master file for active.5 +doc/pod/active.times.pod Master file for active.times.5 +doc/pod/auth_krb5.pod Master file for auth_krb5.8 +doc/pod/auth_smb.pod Master file for auth_smb.8 +doc/pod/checklist.pod Master file for doc/checklist +doc/pod/ckpasswd.pod Master file for ckpasswd.8 +doc/pod/control.ctl.pod Master file for control.ctl.5 +doc/pod/convdate.pod Master file for convdate.1 +doc/pod/cycbuff.conf.pod Master file for cycbuff.conf.5 +doc/pod/distrib.pats.pod Master file for distrib.pats.5 +doc/pod/domain.pod Master file for domain.8 +doc/pod/expire.ctl.pod Master file for expire.ctl.5 +doc/pod/expireover.pod Master file for expireover.8 +doc/pod/external-auth.pod Master file for doc/external-auth +doc/pod/fastrm.pod Master file for fastrm.1 +doc/pod/grephistory.pod Master file for grephistory.1 +doc/pod/hacking.pod Master file for HACKING +doc/pod/hook-perl.pod Master file for doc/hook-perl +doc/pod/hook-python.pod Master file for doc/hook-python +doc/pod/ident.pod Master file for ident.8 +doc/pod/inews.pod Master file for inews.1 +doc/pod/inn.conf.pod Master file for inn.conf.5 +doc/pod/innconfval.pod Master file for innconfval.1 +doc/pod/innd.pod Master file for innd.8 +doc/pod/inndf.pod Master file for inndf.8 +doc/pod/inndstart.pod Master file for 
inndstart.8 +doc/pod/innmail.pod Master file for innmail.1 +doc/pod/innupgrade.pod Master file for innupgrade.8 +doc/pod/install.pod Master file for INSTALL +doc/pod/libauth.pod Master file for libauth.3 +doc/pod/libinnhist.pod Master file for libinnhist.3 +doc/pod/list.pod Master file for list.3 +doc/pod/mailpost.pod Master file for mailpost.8 +doc/pod/makehistory.pod Master file for makehistory.8 +doc/pod/motd.news.pod Master file for motd.news.5 +doc/pod/news.pod Master file for NEWS +doc/pod/newsfeeds.pod Master file for newsfeeds.5 +doc/pod/ninpaths.pod Master file for ninpaths.8 +doc/pod/nnrpd.pod Master file for nnrpd.8 +doc/pod/ovdb.pod Master file for ovdb.5 +doc/pod/ovdb_init.pod Master file for ovdb_init.8 +doc/pod/ovdb_monitor.pod Master file for ovdb_monitor.8 +doc/pod/ovdb_server.pod Master file for ovdb_server.8 +doc/pod/ovdb_stat.pod Master file for ovdb_stat.8 +doc/pod/passwd.nntp.pod Master file for passwd.nntp.5 +doc/pod/pullnews.pod Master file for pullnews.1 +doc/pod/qio.pod Master file for qio.3 +doc/pod/radius.conf.pod Master file for radius.conf.5 +doc/pod/radius.pod Master file for radius.8 +doc/pod/rc.news.pod Master file for rc.news.8 +doc/pod/readers.conf.pod Master file for readers.conf.5 +doc/pod/readme.pod Master file for README +doc/pod/sasl.conf.pod Master file for sasl.conf.5 +doc/pod/sendinpaths.pod Master file for sendinpaths.8 +doc/pod/simpleftp.pod Master file for simpleftp.1 +doc/pod/sm.pod Master file for sm.1 +doc/pod/subscriptions.pod Master file for subscriptions.5 +doc/pod/tdx-util.pod Master file for tdx-util.8 +doc/pod/tst.pod Master file for tst.3 +doc/pod/uwildmat.pod Master file for uwildmat.3 +doc/sample-control Sample PGP-signed control message +expire Expiration and recovery (Directory) +expire/Makefile Makefile for expiration +expire/convdate.c Date string conversions +expire/expire.c Expire old articles and history lines +expire/expireover.c Expire news overview data +expire/expirerm.in Remove articles from expire -z +expire/fastrm.c Remove list of files +expire/grephistory.c Find entries in history database +expire/makedbz.c Recover dbz +expire/makehistory.c Recover the history database +expire/prunehistory.c Prune file names from history file +frontends inews, rnews, ctlinnd (Directory) +frontends/Makefile Makefile for frontends +frontends/cnfsheadconf.in Setup cycbuff header +frontends/cnfsstat.in Show cycbuff status +frontends/ctlinnd.c Send control request to innd +frontends/decode.c Decode 7-bit data into binary file +frontends/encode.c Encode binary file into 7-bit data +frontends/feedone.c Test rig to feed a single NNTP article +frontends/getlist.c Get active or other list from server +frontends/inews.c Send article to local NNTP server +frontends/innconfval.c Get an INN configuration parameter +frontends/mailpost.in Mail to news gateway +frontends/ovdb_init.c Prepare ovdb database for use +frontends/ovdb_monitor.c Database maintainance for ovdb +frontends/ovdb_server.c Helper server for ovdb +frontends/ovdb_stat.c Display information from ovdb database +frontends/pullnews.in Sucking news feeder +frontends/rnews.c UUCP unbatcher +frontends/scanspool.in Scan spool directory for trash +frontends/sm.c Get article or overview data from token +frontends/sys2nf.c Sys file to newsfeeds conversion aid +history History library routines (Directory) +history/Make.methods Makefile for history methods +history/Makefile Makefile for history library +history/buildconfig.in Construct history interface +history/his.c History API glue 
implementation +history/hisinterface.h History API interface +history/hisv6 History v6 method (Directory) +history/hisv6/hismethod.config hisbuildconfig definition +history/hisv6/hisv6-private.h Private header file for hisv6 +history/hisv6/hisv6.c hisv6 history method +history/hisv6/hisv6.h Header for hisv6 history +include Header files (Directory) +include/Makefile Makefile for header files +include/acconfig.h Master file for config.h.in +include/clibrary.h C library portability +include/conffile.h Header file for reading *.conf files +include/config.h.in Template configuration data +include/dbz.h Header file for DBZ +include/inn Installed header files (Directory) +include/inn/buffer.h Header file for reusable counted buffers +include/inn/confparse.h Header file for configuration parser +include/inn/defines.h Portable defs for installed headers +include/inn/hashtab.h Header file for generic hash table +include/inn/history.h Header file for the history API +include/inn/innconf.h Header file for the innconf struct +include/inn/list.h Header file for list routines +include/inn/md5.h Header file for MD5 digests +include/inn/messages.h Header file for message functions +include/inn/mmap.h Header file for mmap() functions +include/inn/qio.h Header file for quick I/O package +include/inn/sequence.h Header file for sequence space arithmetic +include/inn/timer.h Header file for generic timers +include/inn/tst.h Header file for ternary search tries +include/inn/vector.h Header file for vectors of strings +include/inn/wire.h Header file for wire-format functions +include/inndcomm.h innd control channel commands +include/innperl.h Header file for embedded Perl +include/libinn.h INN library declarations +include/nntp.h NNTP command and reply codes +include/ov.h Header file for overview +include/paths.h.in Paths to common programs and files +include/portable Portability wrappers (Directory) +include/portable/mmap.h Wrapper for +include/portable/setproctitle.h Portable setup for setproctitle +include/portable/socket.h Wrapper for and friends +include/portable/time.h Wrapper for and +include/portable/wait.h Wrapper for +include/ppport.h Header file for Perl support +include/storage.h Header file for storage API +innd Server (Directory) +innd/Makefile Makefile for server +innd/art.c Process a received article +innd/cc.c Control channel routines +innd/chan.c I/O channel routines +innd/icd.c Read and write the active file +innd/innd.c Main and utility routines +innd/innd.h Header file for server +innd/inndstart.c Open the NNTP port, then exec innd +innd/keywords.c Generate article keywords +innd/lc.c Local NNTP channel routines +innd/nc.c NNTP channel routines +innd/newsfeeds.c Routines to parse the newsfeeds file +innd/ng.c Newsgroup routines +innd/perl.c Perl routines for innd +innd/proc.c Process routines +innd/python.c Python routines for innd +innd/rc.c Remote channel accepting routines +innd/site.c Site feeding routines +innd/status.c Status routines for innd +innd/tcl.c Bob Halley's Tcl hook +innd/util.c Utility functions for innd +innd/wip.c Work-in-progress routines for innd +innfeed innfeed (Directory) +innfeed/Makefile Makefile for innfeed +innfeed/README Assorted notes +innfeed/article.c Implementation of the Article class +innfeed/article.h Public interface to Articles +innfeed/buffer.c Implementation of the Buffer class +innfeed/buffer.h Public interface to the Buffer class +innfeed/config_l.c Lexer for the innfeed config file +innfeed/configfile.h Header file for configfile.y 
+innfeed/configfile.l Master file for config_l.c +innfeed/configfile.y Parser for innfeed config file +innfeed/connection.c Implementation of the Connection class +innfeed/connection.h Public interface to the Connection class +innfeed/endpoint.c Implementation of the EndPoint class +innfeed/endpoint.h Public interface to the EndPoint class +innfeed/host.c Implementation of the Host class +innfeed/host.h Public interface to the Host class +innfeed/imap_connection.c Implementation of IMAP Connection class +innfeed/innfeed-convcfg.in Script to convert old innfeed.conf +innfeed/innfeed.h Application configuration values +innfeed/innlistener.c Implementation of the InnListener class +innfeed/innlistener.h Public interface of InnListener class +innfeed/main.c Main routines for the innfeed program +innfeed/misc.c Miscelloneous routines for innfeed +innfeed/misc.h Header file for misc.c +innfeed/procbatch.in Script to process dropped articles +innfeed/startinnfeed.c Start innfeed +innfeed/tape.c Implementation of the Tape class +innfeed/tape.h Public interface to the Tape class +innfeed/testListener.pl Script to hand articles to innfeed +lib INN library routines (Directory) +lib/Makefile Makefile for library +lib/buffer.c Reusable counted buffer +lib/cleanfrom.c Clean out a From line +lib/clientactive.c Client access to the active file +lib/clientlib.c Replacement for C News library routine +lib/concat.c Concatenate strings with dynamic memory +lib/conffile.c Routines for reading *.conf files +lib/confparse.c Generic configuration file parser +lib/daemonize.c Code necessary to become a daemon +lib/date.c Date parsing and conversion routines +lib/dbz.c DBZ database library +lib/defdist.c Determine default Distribution header +lib/fdflags.c Set or clear file descriptor flags +lib/fdlimit.c File descriptor limits +lib/fseeko.c fseeko replacement +lib/ftello.c ftello replacement +lib/genid.c Generate a message ID +lib/getfqdn.c Get FQDN of local host +lib/getmodaddr.c Get a moderator's address +lib/getpagesize.c getpagesize replacement +lib/gettime.c Get time and timezone info +lib/hash.c Create hash from message ID +lib/hashtab.c Generic hash table +lib/hstrerror.c Error reporting for resolver +lib/inet_aton.c Extra source for inet_aton routine +lib/inet_ntoa.c Convert inaddr to string (BSD) +lib/innconf.c Parsing and manipulation of inn.conf +lib/inndcomm.c Library routines to talk to innd +lib/list.c List routines +lib/localopen.c Open a local NNTP connection +lib/lockfile.c Try to lock a file descriptor +lib/makedir.c Make directory recursively +lib/md5.c MD5 checksum calculation +lib/memcmp.c memcmp replacement +lib/messages.c Error reporting and debug output +lib/mkstemp.c mkstemp replacement +lib/mmap.c mmap manipulation routines +lib/parsedate.y Date parsing +lib/perl.c Perl hook support for nnrpd and innd +lib/pread.c pread replacement +lib/pwrite.c pwrite replacement +lib/qio.c Quick I/O package +lib/radix32.c Encode a number as a radix-32 string +lib/readin.c Read file into memory +lib/remopen.c Open a remote NNTP connection +lib/reservedfd.c File descriptor reservation +lib/resource.c Get process CPU usage +lib/sendarticle.c Send an article, NNTP style +lib/sendpass.c Send NNTP authentication +lib/sequence.c Sequence space arithmetic routines +lib/setenv.c setenv replacement +lib/seteuid.c seteuid replacement +lib/setproctitle.c setproctitle replacement +lib/snprintf.c snprintf and vsnprintf replacement +lib/sockaddr.c Manipulation of sockaddr structs +lib/strcasecmp.c Case-insenstive 
string comparison (BSD) +lib/strerror.c String representation of errno +lib/strlcat.c strlcat replacement +lib/strlcpy.c strlcpy replacement +lib/strspn.c Skip bytes in a string (BSD) +lib/strtok.c Split a string into tokens (BSD) +lib/timer.c Generic profile timer +lib/tst.c Ternary search trie implementation +lib/uwildmat.c Pattern match routine +lib/vector.c Manipulate vectors of strings +lib/version.c INN version constants +lib/wire.c Manipulate wire-format articles +lib/xfopena.c Open a FILE in append mode +lib/xmalloc.c Failsafe memory allocation wrapper +lib/xsignal.c signal() wrapper using sigaction +lib/xwrite.c write that handles partial transfers +nnrpd Reader server (Directory) +nnrpd/Makefile Makefile for nnrpd +nnrpd/article.c Article-related routines +nnrpd/cache.c MessageID cache routines +nnrpd/cache.h MessageID cache interfaces +nnrpd/commands.c Assorted server commands +nnrpd/group.c Group-related routines +nnrpd/line.c Long line-by-line reading routines +nnrpd/list.c The LIST commands +nnrpd/misc.c Miscellaneous support routines +nnrpd/newnews.c The NEWNEWS command +nnrpd/nnrpd.c Main and some utility routines +nnrpd/nnrpd.h Header file for nnrpd +nnrpd/perl.c Perl routines for nnrpd +nnrpd/perm.c Reading readers.conf +nnrpd/post.c Article processing and posting +nnrpd/post.h Article data types +nnrpd/python.c Python routines for nnrpd +nnrpd/sasl_config.c Configuration for SASL +nnrpd/sasl_config.h SASL data types +nnrpd/tls.c Transport layer security +nnrpd/tls.h Transport layer security data types +nnrpd/track.c Track client behavior +samples Prototype INN config files (Directory) +samples/INN.py Stub Python functions +samples/Makefile Makefile for samples +samples/active.minimal Minimal starting active file +samples/actsync.cfg Config file for actsync +samples/actsync.ign Ignore file for actsync +samples/buffindexed.conf Buffindexed overview config file +samples/control.ctl Access control for control messages +samples/cycbuff.conf Sample cycbuff.conf file +samples/distrib.pats Default values for Distribution header +samples/expire.ctl Expiration config file +samples/filter.tcl Sample Tcl filter for innd +samples/filter_innd.pl Sample Perl filter for innd +samples/filter_innd.py Sample Python filter for innd +samples/filter_nnrpd.pl Sample Perl filter for nnrpd +samples/incoming.conf Permissions for incoming feeds +samples/inn.conf.in General INN configuration +samples/innfeed.conf Outgoing feed configuration +samples/innreport.conf.in Log summary configuration +samples/innwatch.ctl INN monitoring configuration +samples/moderators Moderation submission addresses +samples/motd.news Sample MOTD file +samples/news2mail.cf news2mail config file +samples/newsfeeds.in innd feed configuration +samples/newsgroups.minimal Minimal starting newsgroups file +samples/nnrpd.py Python hooks for nnrpd +samples/nnrpd.track Reader tracking configuration +samples/nnrpd_access.pl.in Sample nnrpd Perl access hooks +samples/nnrpd_access.py Sample nnrpd Python access hooks +samples/nnrpd_access_wrapper.pl.in Wrapper around old Perl access hooks +samples/nnrpd_access_wrapper.py Wrapper around old Python access hooks +samples/nnrpd_auth.pl.in Sample nnrpd Perl authorization hooks +samples/nnrpd_auth.py Sample nnrpd Python authorization hooks +samples/nnrpd_auth_wrapper.pl.in Wrapper around old Perl auth hooks +samples/nnrpd_auth_wrapper.py Wrapper around old Python auth hooks +samples/nnrpd_dynamic.py Sample nnrpd Python dynamic access hooks +samples/nnrpd_dynamic_wrapper.py Wrapper around 
old Python dynamic hooks +samples/nntpsend.ctl Outgoing nntpsend feed configuration +samples/ovdb.conf Berkeley DB overview configuration +samples/overview.fmt Format of news overview database +samples/passwd.nntp Passwords for remote connections +samples/radius.conf Sample config for RADIUS authentication +samples/readers.conf Reader connection configuration +samples/sasl.conf.in SASL configuration +samples/startup.tcl Tcl startup code for innd +samples/startup_innd.pl Perl startup code for innd +samples/storage.conf Sample storage configuration +samples/subscriptions Sample default subscriptions list +scripts Various utilities (Directory) +scripts/Makefile Makefile for script files +scripts/inncheck.in Syntax-check INN config files +scripts/innmail.in Perl front-end to sendmail +scripts/innreport.in Script to analyze INN logs +scripts/innreport_inn.pm Config file for innreport +scripts/innshellvars.in Config parameters for shell scripts +scripts/innshellvars.pl.in Config parameters for Perl scripts +scripts/innshellvars.tcl.in Config parameters for Tcl scripts +scripts/innstat.in Display INN status snapshot +scripts/innupgrade.in Upgrade INN configuration files +scripts/innwatch.in Throttle innd based on load and space +scripts/news.daily.in Front-end script to run expire, etc. +scripts/rc.news.in News boot script +scripts/scanlogs.in Summarize log files +scripts/simpleftp.in Rudimentary ftp client +scripts/tally.control.in Count newgroup/rmgroup messages +scripts/writelog.in Write a log entry or mail it +site Site-local files (Directory) +site/Makefile Makefile for site-local files +site/getsafe.sh Safely get config file from samples +storage Storage library (Directory) +storage/Make.methods Makefile for storage methods +storage/Makefile Makefile for storage library +storage/buffindexed buffindexed overview method (Directory) +storage/buffindexed/buffindexed.c buffindexed overview routines +storage/buffindexed/buffindexed.h Header file for buffindexed overview +storage/buffindexed/ovmethod.config buildconfig definition +storage/buffindexed/ovmethod.mk Make rules for buffindexed overview +storage/buildconfig.in Construct method interface +storage/cnfs CNFS storage method (Directory) +storage/cnfs/cnfs-private.h Private header file for CNFS +storage/cnfs/cnfs.c CNFS storage routines +storage/cnfs/cnfs.h Header file for CNFS +storage/cnfs/method.config buildconfig definition +storage/expire.c Overview-drive expire implementation +storage/interface.c Storage API glue implementation +storage/interface.h Storage API interface +storage/ov.c Overview API glue implementation +storage/ovdb ovdb overview method (Directory) +storage/ovdb/ovdb-private.h Private header file for ovdb +storage/ovdb/ovdb.c ovdb (Berkeley DB) overview method +storage/ovdb/ovdb.h Header for ovdb (Berkeley DB) overview +storage/ovdb/ovmethod.config buildconfig definition +storage/overdata.c Overview data manipulation +storage/ovinterface.h Overview API interface +storage/timecaf timecaf storage method (Directory) +storage/timecaf/README.CAF README the CAF file format +storage/timecaf/caf.c CAF file implementation +storage/timecaf/caf.h Header for CAF files +storage/timecaf/method.config buildconfig definition +storage/timecaf/timecaf.c timecaf storage routines +storage/timecaf/timecaf.h Header file for timecaf +storage/timehash timehash storage method (Directory) +storage/timehash/method.config buildconfig definition +storage/timehash/timehash.c timehash storage routines +storage/timehash/timehash.h Header for timehash 
+storage/tradindexed tradindexed overview method (Directory) +storage/tradindexed/ovmethod.config buildconfig definition +storage/tradindexed/ovmethod.mk Make rules for tradindexed overview +storage/tradindexed/tdx-cache.c Data file cache handling for tradindexed +storage/tradindexed/tdx-data.c Data file handling for tradindexed +storage/tradindexed/tdx-group.c Group index handling for tradindexed +storage/tradindexed/tdx-private.h Private header file for tradindexed +storage/tradindexed/tdx-structure.h On disk layout of tradindexed files +storage/tradindexed/tdx-util.c Utility program for tradindexed +storage/tradindexed/tradindexed.c Interface code for the overview API +storage/tradindexed/tradindexed.h Interface for tradindexed +storage/tradspool tradspool storage method (Directory) +storage/tradspool/README.tradspool Docs for tradspool storage method +storage/tradspool/method.config buildconfig definition +storage/tradspool/tradspool.c tradspool storage routines +storage/tradspool/tradspool.h Header for tradspool +storage/trash Trash storage method (Directory) +storage/trash/method.config buildconfig definition +storage/trash/trash.c Trash storage routines +storage/trash/trash.h Header file for trash storage +support Tools for building INN (Directory) +support/config.guess Determine system type for libtool +support/config.sub Canonicalize system type for libtool +support/fixscript.in Interpreter path fixup script +support/indent A mostly working wrapper around indent +support/install-sh Installation utility +support/ltmain.sh Source for libtool utility +support/makedepend Generate dependencies for C files +support/mkchangelog Generate ChangeLog from CVS +support/mkmanifest Generate a list of files for the manifest +support/mksnapshot Generate a snapshot of the tree +support/mksystem Generate from config.h +support/mkversion Generate with INN version +tests Test suite for INN (Directory) +tests/Makefile Makefile for test suite +tests/TESTS List of tests to run +tests/authprogs Test suite for auth programs (Directory) +tests/authprogs/ckpasswd.t Tests for authprogs/ckpasswd +tests/authprogs/domain.t Tests for authprogs/domain +tests/authprogs/passwd Password data for ckpasswd tests +tests/lib Test suite for libinn (Directory) +tests/lib/articles Testing news articles (Directory) +tests/lib/articles/no-body An article without a body +tests/lib/articles/strange An article with CR and LF in headers +tests/lib/articles/truncated An article truncated in the headers +tests/lib/buffer-t.c Tests for lib/buffer.c +tests/lib/concat-t.c Tests for lib/concat.c +tests/lib/config Testing config files (Directory) +tests/lib/config/errors Various config files with errors +tests/lib/config/line-endings A config file with varied line endings +tests/lib/config/no-newline A config file without an ending newline +tests/lib/config/null A config file containing a nul character +tests/lib/config/simple A simple config file +tests/lib/config/valid Various valid config parameters +tests/lib/config/warn-bool Invalid boolean parameters +tests/lib/config/warn-int Invalid integer parameters +tests/lib/config/warnings Various config files with warnings +tests/lib/confparse-t.c Tests for lib/confparse.c +tests/lib/date-t.c Tests for lib/date.c +tests/lib/fakewrite.c Helper functions for xwrite tests +tests/lib/hash-t.c Tests for lib/hash.c +tests/lib/hashtab-t.c Tests for lib/hashtab.c +tests/lib/hstrerror-t.c Tests for lib/hstrerror.c +tests/lib/inet_aton-t.c Tests for lib/inet_aton.c +tests/lib/inet_ntoa-t.c Tests 
for lib/inet_ntoa.c +tests/lib/innconf-t.c Tests for lib/innconf.c +tests/lib/list-t.c Tests for lib/list.c +tests/lib/md5-t.c Tests for lib/md5.c +tests/lib/memcmp-t.c Tests for lib/memcmp.c +tests/lib/messages-t.c Tests for lib/messages.c +tests/lib/mkstemp-t.c Tests for lib/mkstemp.c +tests/lib/pread-t.c Tests for lib/pread.c +tests/lib/pwrite-t.c Tests for lib/pwrite.c +tests/lib/qio-t.c Tests for lib/qio.c +tests/lib/setenv-t.c Tests for lib/setenv.c +tests/lib/setenv.t Wrapper for setenv tests +tests/lib/snprintf-t.c Tests for lib/snprintf.c +tests/lib/strerror-t.c Tests for lib/strerror.c +tests/lib/strlcat-t.c Tests for lib/strlcat.c +tests/lib/strlcpy-t.c Tests for lib/strlcpy.c +tests/lib/tst-t.c Tests for lib/tst.c +tests/lib/uwildmat-t.c Tests for lib/uwildmat.c +tests/lib/vector-t.c Tests for lib/vector.c +tests/lib/wire-t.c Tests for lib/wire.c +tests/lib/xmalloc.c Helper program for xmalloc tests +tests/lib/xmalloc.t Tests for lib/xmalloc.c +tests/lib/xwrite-t.c Tests for lib/xwrite.c +tests/libtest.c Helper library for writing tests +tests/libtest.h Interface to libtest +tests/overview Test suite for overview (Directory) +tests/overview/data Test overview data (Directory) +tests/overview/data/basic Basic set of overview test data +tests/overview/data/bogus Bad newsgroup name test data +tests/overview/data/high-numbered High-numbered article test data +tests/overview/data/reversed Same as basic, but in reverse order +tests/overview/munge-data Support script to generate test data +tests/overview/tradindexed-t.c Tests for storage/tradindexed/* +tests/runtests.c The test suite driver program +tests/util Test suite for utilities (Directory) +tests/util/convdate.t Tests for expire/convdate diff --git a/Makefile b/Makefile new file mode 100644 index 0000000..13e27b5 --- /dev/null +++ b/Makefile @@ -0,0 +1,218 @@ +## $Id: Makefile 7488 2005-12-25 00:26:08Z eagle $ + +include Makefile.global + +## All installation directories except for $(PATHRUN), which has a +## different mode than the rest. +INSTDIRS = $(PATHNEWS) $(PATHBIN) $(PATHAUTH) $(PATHAUTHRESOLV) \ + $(PATHAUTHPASSWD) $(PATHCONTROL) $(PATHFILTER) \ + $(PATHRNEWS) $(PATHDB) $(PATHDOC) $(PATHETC) $(PATHLIB) \ + $(PATHMAN) $(MAN1) $(MAN3) $(MAN5) $(MAN8) $(PATHSPOOL) \ + $(PATHTMP) $(PATHARCHIVE) $(PATHARTICLES) $(PATHINCOMING) \ + $(PATHINBAD) $(PATHTAPE) $(PATHOVERVIEW) $(PATHOUTGOING) \ + $(PATHLOG) $(PATHLOG)/OLD $(PATHINCLUDE) + +## LIBDIRS are built before PROGDIRS, make update runs in all UPDATEDIRS, +## and make install runs in all ALLDIRS. Nothing runs in test except the +## test target itself and the clean targets. Currently, include is built +## before anything else but nothing else runs in it except clean targets. +LIBDIRS = include lib storage history +PROGDIRS = innd nnrpd innfeed control expire frontends backends authprogs \ + scripts +UPDATEDIRS = $(LIBDIRS) $(PROGDIRS) doc +ALLDIRS = $(UPDATEDIRS) samples site +CLEANDIRS = $(ALLDIRS) include tests + +## The directory name and tar file to use when building a release. +TARDIR = inn-$(VERSION) +TARFILE = $(TARDIR).tar + +## The directory to use when building a snapshot. +SNAPDIR = inn-$(SNAPSHOT)-$(SNAPDATE) + +## DISTDIRS gets all directories from the MANIFEST, and DISTFILES gets all +## regular files. Anything not listed in the MANIFEST will not be included +## in a distribution. These are arguments to sed. 
+DISTDIRS = -e 1,2d -e '/(Directory)/!d' -e 's/ .*//' -e 's;^;$(TARDIR)/;' +SNAPDIRS = -e 1,2d -e '/(Directory)/!d' -e 's/ .*//' -e 's;^;$(SNAPDIR)/;' +DISTFILES = -e 1,2d -e '/(Directory)/d' -e 's/ .*//' + + +## Major target -- build everything. Rather than just looping through +## all the directories, use a set of parallel rules so that make -j can +## work on more than one directory at a time. +all: all-include all-libraries all-programs + cd doc && $(MAKE) all + cd samples && $(MAKE) all + cd site && $(MAKE) all + +all-libraries: all-lib all-storage all-history + +all-include: ; cd include && $(MAKE) all +all-lib: all-include ; cd lib && $(MAKE) all +all-storage: all-lib ; cd storage && $(MAKE) library +all-history: all-storage ; cd history && $(MAKE) all + +all-programs: all-innd all-nnrpd all-innfeed all-control all-expire \ + all-frontends all-backends all-authprogs all-scripts \ + all-store-util + +all-authprogs: all-lib ; cd authprogs && $(MAKE) all +all-backends: all-libraries ; cd backends && $(MAKE) all +all-control: ; cd control && $(MAKE) all +all-expire: all-libraries ; cd expire && $(MAKE) all +all-frontends: all-libraries ; cd frontends && $(MAKE) all +all-innd: all-libraries ; cd innd && $(MAKE) all +all-innfeed: all-libraries ; cd innfeed && $(MAKE) all +all-nnrpd: all-libraries ; cd nnrpd && $(MAKE) all +all-scripts: ; cd scripts && $(MAKE) all +all-store-util: all-libraries ; cd storage && $(MAKE) programs + + +## If someone tries to run make before running configure, tell them to run +## configure first. +Makefile.global: + @echo 'Run ./configure before running make. See INSTALL for details.' + @exit 1 + + +## Installation rules. make install installs everything; make update only +## updates the binaries, scripts, and documentation and leaves config +## files alone. +install: directories + @for D in $(ALLDIRS) ; do \ + echo '' ; \ + cd $$D && $(MAKE) install || exit 1 ; cd .. ; \ + done + @echo '' + @echo 'If this is a first-time installation, a minimal active file and' + @echo 'history database have been installed. Do not forget to update' + @echo 'your cron entries and configure INN. See INSTALL for more' + @echo 'information.' + @echo '' + +directories: + @chmod +x support/install-sh + for D in $(INSTDIRS) ; do \ + support/install-sh $(OWNER) -m 0755 -d $(D)$$D ; \ + done + support/install-sh $(OWNER) -m 0750 -d $(D)$(PATHRUN) + +update: + @chmod +x support/install-sh + @for D in $(UPDATEDIRS) ; do \ + echo '' ; \ + cd $$D && $(MAKE) install || exit 1 ; cd .. ; \ + done + $(PATHBIN)/innupgrade $(PATHETC) + +## Install a certificate for TLS/SSL support. +cert: + $(SSLBIN)/openssl req -new -x509 -nodes \ + -out $(PATHLIB)/cert.pem -days 366 \ + -keyout $(PATHLIB)/key.pem + chown $(NEWSUSER) $(PATHLIB)/cert.pem + chgrp $(NEWSGROUP) $(PATHLIB)/cert.pem + chmod 640 $(PATHLIB)/cert.pem + chown $(NEWSUSER) $(PATHLIB)/key.pem + chgrp $(NEWSGROUP) $(PATHLIB)/key.pem + chmod 600 $(PATHLIB)/key.pem + + +## Cleanup targets. clean deletes all compilation results but leaves the +## configure results. distclean or clobber removes everything not part of +## the distribution tarball. maintclean removes some additional files +## created as part of the release process. +clean: + @for D in $(CLEANDIRS) ; do \ + echo '' ; \ + cd $$D && $(MAKE) clean || exit 1 ; cd .. ; \ + done + +clobber realclean distclean: + @for D in $(CLEANDIRS) ; do \ + echo '' ; \ + cd $$D && $(MAKE) $(FLAGS) clobber && cd .. 
; \ + done + @echo '' + rm -f LIST.* Makefile.global TAGS tags config.cache config.log + rm -f config.status libtool support/fixscript + +maintclean: distclean + rm -rf $(TARDIR) + rm -f CHANGES ChangeLog inn*.tar.gz + + +## Other generic targets. +depend tags ctags profiled: + @for D in $(ALLDIRS) ; do \ + echo '' ; \ + cd $$D && $(MAKE) $@ || exit 1 ; cd .. ; \ + done + +TAGS etags: + etags */*.c */*.h */*/*.c */*/*.h + + +## Run the test suite. +check test tests: + cd tests && $(MAKE) test + + +## For maintainers, build the entire source base with warnings enabled. +warnings: + $(MAKE) COPT="$(WARNINGS) $(COPT)" all + + +## Make a release. We create a release by recreating the directory +## structure and then copying over all files listed in the MANIFEST. If it +## isn't in the MANIFEST, it doesn't go into the release. We also update +## the version information in Makefile.global.in to remove the prerelease +## designation and update all timestamps to the date the release is made. +release: ChangeLog + rm -rf $(TARDIR) + rm -f inn*.tar.gz + mkdir $(TARDIR) + for d in `sed $(DISTDIRS) MANIFEST` ; do mkdir -p $$d ; done + for f in `sed $(DISTFILES) MANIFEST` ; do \ + cp $$f $(TARDIR)/$$f || exit 1 ; \ + done + sed 's/= prerelease/=/' < Makefile.global.in \ + > $(TARDIR)/Makefile.global.in + cp ChangeLog $(TARDIR) + find $(TARDIR) -type f -print | xargs touch -t `date +%m%d%H%M.%S` + tar cf $(TARFILE) $(TARDIR) + $(GZIP) -9 $(TARFILE) + +## Generate the ChangeLog using support/mkchangelog. This should only be +## run by a maintainer since it depends on cvs log working and also +## requires cvs2cl be available somewhere. +ChangeLog: + $(PERL) support/mkchangelog + + +## Check the MANIFEST against the files present in the current tree, +## building a list with find and running diff between the lists. +check-manifest: + sed -e 1,2d -e 's/ .*//' MANIFEST > LIST.manifest + $(PERL) support/mkmanifest > LIST.real + diff -u LIST.manifest LIST.real + + +## Make a snapshot. This is like making a release, except that we don't do +## the ChangeLog thing and we don't change the version number. We also +## assume that SNAPSHOT has been set to the appropriate current branch. +snapshot: + rm -rf $(SNAPDIR) + rm -f inn*.tar.gz + mkdir $(SNAPDIR) + set -e ; for d in `sed $(SNAPDIRS) MANIFEST` ; do mkdir -p $$d ; done + set -e ; for f in `sed $(DISTFILES) MANIFEST` ; do \ + cp $$f $(SNAPDIR)/$$f ; \ + done + cp README.snapshot $(SNAPDIR)/ + sed 's/= prerelease/= $(SNAPDATE) snapshot/' \ + Makefile.global.in > $(SNAPDIR)/Makefile.global.in + find $(SNAPDIR) -type f -print | xargs touch -t `date +%m%d%H%M.%S` + tar cf $(SNAPDIR).tar $(SNAPDIR) + $(GZIP) -9 $(SNAPDIR).tar diff --git a/Makefile.global.in b/Makefile.global.in new file mode 100644 index 0000000..3f9cf58 --- /dev/null +++ b/Makefile.global.in @@ -0,0 +1,286 @@ +## $Id: Makefile.global.in 7830 2008-05-14 18:57:39Z iulius $ +## +## This file is meant to be the central Makefile that configure works with +## and that all other Makefiles include. No Makefile other than this one +## should have to be a configure substitution target. +## +## For installation paths, see the bottom of this file. + +## This version information is used to generate lib/version.c and is used +## by INN for banner and local version identification. The version +## identification string will be "$VERSION ($VERSION_EXTRA)", with the +## parentheses omitted if $VERSION_EXTRA is empty (as it is for major +## releases). 
If you make extensive local modifications to INN, you can +## put your own version information in $VERSION_EXTRA. If it's set to +## "CVS prerelease", the build time will be automatically included. + +VERSION = 2.4.5 +VERSION_EXTRA = + +## If you want to install INN relative to a root directory other than /, +## set DESTDIR to the path to the root directory of the file system. This +## won't affect any of the paths compiled into INN; it's used primarily +## when building a software distribution, where software has to be +## installed into some file system that will later be mounted as / on the +## final system. DESTDIR should have a trailing slash, as the trailing +## slash is not added automatically (in case someone wants to add a prefix +## that isn't just a parent directory). + +DESTDIR = +D = $(DESTDIR) + +## The absolute path to the top of the build directory, used to find the +## libraries built as part of INN. Using relative paths confuses libtool +## when linking the test suite. + +builddir = @builddir@ + +## Basic compiler settings. COPT is the variable to override on the make +## command line to change the optimization or add warning flags (such as +## -Wall). LFS_* is for large file support. All of INN is built with the +## large file support flags if provided. + +CC = @CC@ +COPT = @CFLAGS@ +GCFLAGS = $(COPT) -I$(top)/include @CPPFLAGS@ $(LFS_CFLAGS) + +BERKELEY_DB_CFLAGS = @BERKELEY_DB_CFLAGS@ + +LDFLAGS = @LDFLAGS@ $(LFS_LDFLAGS) @BERKELEY_DB_LDFLAGS@ +LIBS = @LIBS@ $(LFS_LIBS) + +LFS_CFLAGS = @LFS_CFLAGS@ +LFS_LDFLAGS = @LFS_LDFLAGS@ +LFS_LIBS = @LFS_LIBS@ + +PROF = -pg +PROFSUFFIX = _p +MAKEPROFILING = $(MAKE) COPT="$(COPT) $(PROF)" \ + LDFLAGS="$(LDFLAGS) $(PROF)" \ + LIBSUFFIX=$(PROFSUFFIX) + +## Used to support non-recursive make. This variable is set to the necessary +## options to the compiler to create an object file in a subdirectory. It +## should be used instead of -c -o $@ $< and may be replaced with code that +## calls mv, if the compiler doesn't support -c with -o. + +CCOUTPUT = @CCOUTPUT@ + +## Warnings to use with gcc. Default to including all of the generally +## useful warnings unless there's something that makes them unsuitable. In +## particular, the following warnings are *not* included: +## +## -ansi Requires messing with feature test macros. +## -pedantic Too much noise from embedded Perl. +## -Wtraditional We assume ANSI C, so these aren't interesting. +## -Wshadow Names like log or index are too convenient. +## -Wcast-qual Used for a while, but some casts are unavoidable. +## -Wconversion Too much unsigned to signed noise. +## -Wredundant-decls Too much noise from system headers. +## +## Some may be worth looking at again once a released version of gcc doesn't +## warn on system headers. The warnings below are in the same order as +## they're listed in the gcc manual. +## +## Add -g because when building with warnings one generally also wants the +## debugging information, and add -O because gcc won't find some warnings +## without optimization turned on. Add -DDEBUG=1 so that we'll also +## compile all debugging code and check it as well. + +WARNINGS = -g -O -DDEBUG=1 -Wall -W -Wendif-labels -Wpointer-arith \ + -Wbad-function-cast -Wcast-align -Wwrite-strings \ + -Wstrict-prototypes -Wmissing-prototypes -Wnested-externs + +## libtool support. Note that INN does not use Automake (and that +## retrofitting Automake is likely more work than it's worth), so +## libtool-aware rules have to be written by hand. 
+ +LIBTOOL = @LIBTOOL@ +LIBTOOLCC = @LIBTOOLCC@ +LIBTOOLLD = @LIBTOOLLD@ +EXTOBJ = @EXTOBJ@ +EXTLIB = @EXTLIB@ + +LIBCC = $(LIBTOOLCC) $(CC) +LIBLD = $(LIBTOOLLD) $(CC) + +## INN libraries. Nearly all INN programs are linked with libinn, and any +## INN program that reads from or writes to article storage or overview is +## linked against libstorage. EXTSTORAGELIBS is for external libraries +## needed by libstorage. + +LIBINN = $(builddir)/lib/libinn$(LIBSUFFIX).$(EXTLIB) +LIBHIST = $(builddir)/history/libinnhist$(LIBSUFFIX).$(EXTLIB) +LIBSTORAGE = $(builddir)/storage/libstorage$(LIBSUFFIX).$(EXTLIB) +EXTSTORAGELIBS = @BERKELEY_DB_LIB@ + +DBMINC = @DBM_INC@ +DBMLIB = @DBM_LIB@ + +CRYPTLIB = @CRYPT_LIB@ +PAMLIB = @PAM_LIB@ +REGEXLIB = @REGEX_LIB@ +SHADOWLIB = @SHADOW_LIB@ + +## Embedding support. Additional flags and libraries used when compiling +## or linking portions of INN that support embedded interpretors, set by +## configure based on what interpretor embeddings are selected. + +PERLLIB = $(builddir)/lib/perl$(LIBSUFFIX).o @PERL_LIB@ +PERLINC = @PERL_INC@ + +PYTHONLIB = @PYTHON_LIB@ +PYTHONINC = @PYTHON_INC@ + +## OpenSSL support. Additional flags and libraries used when compiling or +## linking code that contains OpenSSL support, and the path to the OpenSSL +## binaries. + +SSLLIB = @SSL_LIB@ +SSLINC = @SSL_INC@ +SSLBIN = @SSL_BIN@ + +## SASL support. Additional flags and libraries used when compiling or +## linking code that contains SASL support. + +SASLLIB = @SASL_LIB@ +SASLINC = @SASL_INC@ + +## Kerberos support. Additional flags and libraries used when compiling or +## linking code that contains Kerberos support. If Kerberos libraries were +## compiled, KRB5_AUTH is also set to the name of the Kerberos v5 +## authenticator that should be compiled and installed. +KRB5LIB = @KRB5_LIB@ +KRB5INC = @KRB5_INC@ +KRB5_AUTH = @KRB5_AUTH@ + +## Missing functions. If non-empty, configure detected that your system +## was missing some standard functions, and INN will be providing its own +## replacements from the lib directory. + +LIBOBJS = @LIBOBJS@ + +## Paths to various standard programs used during the build process. +## Changes to this file will *not* be reflected in the paths compiled into +## programs; these paths are only used during the build process and for +## some autogenerated scripts. To change the compiled paths, see +## include/paths.h. You may also need to modify scripts/innshellvars*. + +AWK = @_PATH_AWK@ +COMPRESS = @COMPRESS@ +CTAGS = @CTAGS@ +GZIP = @GZIP@ +LEX = @LEX@ +PERL = @_PATH_PERL@ +RANLIB = @RANLIB@ +YACC = @YACC@ +UNCOMPRESS = @UNCOMPRESS@ + +FIXSCRIPT = $(top)/support/fixscript + +PERLWHOAMI = $(PERL) -e 'print scalar getpwuid($$>), "\n"' +WHOAMI = (whoami || /usr/ucb/whoami || $(PERLWHOAMI)) 2> /dev/null + +## Paths and command lines for programs used only by the maintainers to +## regenerate dependencies, documentation, and the like. + +MAKEDEPEND = $(top)/support/makedepend + +POD2MAN = pod2man -c 'InterNetNews Documentation' -r 'INN $(VERSION)' +POD2TEXT = pod2text -s -l + +## Installation directories. If any of the below are incorrect, don't just +## edit this file; these directories are substituted in all over the source +## tree by configure. Instead, re-run configure with the correct +## command-line flags to set the directories. Run configure --help for a +## list of supported flags. 
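+##
+## As a minimal illustrative sketch (the prefix below is only an example;
+## run "./configure --help" for the authoritative list of directory flags),
+## a typical reconfiguration and rebuild looks like:
+##
+##     ./configure --prefix=/usr/local/news
+##     make
+##     make install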
+ +prefix = @prefix@ + +PATHNEWS = $(prefix) +PATHBIN = $(PATHNEWS)/bin +PATHDOC = @DOCDIR@ +PATHETC = @ETCDIR@ +PATHMAN = @mandir@ +PATHINCLUDE = @includedir@ +PATHLIB = @LIBDIR@ +PATHCONTROL = @CONTROLDIR@ +PATHFILTER = @FILTERDIR@ +PATHRUN = @RUNDIR@ +PATHLOG = @LOGDIR@ +PATHLOGOLD = $(PATHLOG)/OLD +PATHDB = @DBDIR@ +PATHSPOOL = @SPOOLDIR@ +PATHTMP = @tmpdir@ +PATHAUTH = $(PATHBIN)/auth +PATHAUTHRESOLV = $(PATHAUTH)/resolv +PATHAUTHPASSWD = $(PATHAUTH)/passwd +PATHRNEWS = $(PATHBIN)/rnews.libexec +PATHARCHIVE = $(PATHSPOOL)/archive +PATHARTICLES = $(PATHSPOOL)/articles +PATHINCOMING = $(PATHSPOOL)/incoming +PATHTAPE = $(PATHSPOOL)/innfeed +PATHINBAD = $(PATHINCOMING)/bad +PATHOVERVIEW = $(PATHSPOOL)/overview +PATHOUTGOING = $(PATHSPOOL)/outgoing + +MAN1 = @mandir@/man1 +MAN3 = @mandir@/man3 +MAN5 = @mandir@/man5 +MAN8 = @mandir@/man8 + +## Installation settings. The file installation modes are determined by +## configure; inews and rnews are special and have configure flags to +## control how they're installed. See INSTALL for more information. + +NEWSUSER = @NEWSUSER@ +NEWSGROUP = @NEWSGRP@ + +INEWSMODE = @INEWSMODE@ +RNEWSMODE = @RNEWSMODE@ +FILEMODE = @FILEMODE@ + +OWNER = -o $(NEWSUSER) -g $(NEWSGROUP) +ROWNER = -o $(NEWSUSER) -g @RNEWSGRP@ + +INSTALL = $(top)/support/install-sh -c + +## Installation commands. These commands are used by the installation rules +## of each separate subdirectory. The naming scheme is as follows: the first +## two characters are CP (indicating a plain copy) or LI (indicating an +## installation that goes through libtool). After an underscore is a +## one-character indicator of the file type (R for a regular file, X for an +## executable, S for a setuid root executable) and then PUB for a +## world-readable/world-executable file or PRI for a group-readable/ +## group-executable file (only the news group). +## +## inews and rnews have their own special installation rules, as do database +## files like active and newsgroups that should have the same permissions as +## article files. + +LI_SPRI = $(LIBTOOL) $(INSTALL) -o root -g $(NEWSGROUP) -m 4550 -B .OLD +LI_XPRI = $(LIBTOOL) $(INSTALL) $(OWNER) -m 0550 -B .OLD +LI_XPUB = $(LIBTOOL) $(INSTALL) $(OWNER) -m 0555 -B .OLD + +LI_INEWS = $(LIBTOOL) $(INSTALL) $(OWNER) -m $(INEWSMODE) -B .OLD +LI_RNEWS = $(LIBTOOL) $(INSTALL) $(ROWNER) -m $(RNEWSMODE) -B .OLD + +CP_RPRI = $(INSTALL) $(OWNER) -m 0640 -B .OLD +CP_RPUB = $(INSTALL) $(OWNER) -m 0644 -B .OLD +CP_XPRI = $(INSTALL) $(OWNER) -m 0550 -B .OLD +CP_XPUB = $(INSTALL) $(OWNER) -m 0555 -B .OLD + +CP_DATA = $(INSTALL) $(OWNER) -m $(FILEMODE) -B .OLD + +## How to install man pages. Pick one of SOURCE, BSD4.4, NROFF-PACK, or +## NROFF-PACK-SCO. Used by doc/man/putman.sh; read that script for more +## information on what it does. + +MANPAGESTYLE = SOURCE + +## Some additional definitions needed by some versions of make, to ensure a +## consistant set of variables are available. + +SHELL = /bin/sh + +@SET_MAKE@ diff --git a/NEWS b/NEWS new file mode 100644 index 0000000..2ad3e94 --- /dev/null +++ b/NEWS @@ -0,0 +1,860 @@ +Changes in 2.4.5 + + * Fixed the "alarm signal" around "SSL_read" in nnrpd: it allows a + proper disconnection of news clients which were previously hanging + when posting an article through a SSL connection. Moreover, the + *clienttimeout* parameter now works on SSL connections. Thanks to + Matija Nalis for the patch. 
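+
+    For instance, a simple way to exercise the SSL reader path and the
+    *clienttimeout* handling by hand (the host name below is only a
+    placeholder; 563 is the standard NNTPS port) is:
+
+        # open a TLS reader connection to nnrpd
+        openssl s_client -quiet -connect news.example.com:563
+
+    Issue a command or two, then leave the session idle and check that
+    nnrpd closes the connection once *clienttimeout* expires.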
+
+  * SO_KEEPALIVE is now implemented for SSL TCP connections on systems
+    which support it, allowing the system to detect dead TCP SSL
+    connections and close them automatically after a system-specified
+    time. Thanks to Matija Nalis for the patch.
+
+  * Fixed a segmentation fault when an article larger than the remaining
+    stack space is retrieved via SSL. Thanks to Chris Caputo for this
+    patch.
+
+  * Fixed a few segfaults and bugs which affected both Python innd and
+    nnrpd hooks. They no longer check for the existence of methods not
+    used by the hooked script. An issue with Python exception handling
+    was also fixed, as well as a segfault, fixed by Russ Allbery, which
+    happened whenever Python was closed and then reopened in the same
+    process. Julien Elie also fixed a bug when reloading Python filters
+    (they were not always correctly reloaded) and a segfault when
+    generating access groups with embedded Python filters for nnrpd.
+    Many thanks to David Hlacik for his bug reports.
+
+  * The nnrpd.py stub file used to test Python nnrpd hooks, as mentioned
+    in their documentation, is now installed; only INN.py was previously
+    installed in *pathfilter*. Also fixed a bug in INN.py and added
+    missing methods to it.
+
+  * Fixed a long-standing bug in innreport which prevented it from
+    correctly reporting nnrpd and innfeed log messages.
+
+  * Fixed a hang in Perl hooks on (at least) HP/PA since Perl 5.10.
+
+  * Fixed a compilation problem on some platforms caused by a use of
+    AF_INET6 in innfeed that was not inside a HAVE_INET6 block.
+
+  * Fixed a bug in innfeed which stored the same IPs three times for each
+    peer; it unnecessarily slowed innfeed's peer IP rotation. Thanks to
+    D. Stussy for spotting it. Miquel van Smoorenburg provided the
+    patch.
+
+  * A new, *heavily* improved version of pullnews is shipped with this
+    INN release. This new version is provided by Geraint Edwards. He
+    added no fewer than 16 flags, fixed some bugs and integrated the
+    backupfeed contrib script by Kai Henningsen, adding another 6 flags.
+    In particular, a long-standing but very minor bug in the -g option
+    was fixed and items from the to-do list were implemented. Many
+    thanks again to Geraint Edwards.
+
+  * New headers are accessible through the Perl and Python innd filtering
+    hooks. You will find the exact list in the INN Python Filtering and
+    Authentication Hooks documentation (doc/hook-python) and in the
+    Python samples. Thanks to Matija Nalis for this addition of new
+    useful headers.
+
+  * New samples for Python nnrpd hooks are shipped with INN:
+    nnrpd_access.py for access control and nnrpd_dynamic.py for dynamic
+    access control. The nnrpd_auth.py script is now only used for
+    authorization control. See the readers.conf man page for more
+    information (especially the *python_auth*, *python_access* and
+    *python_dynamic* parameters). The documentation about INN Python
+    Filtering and Authentication Hooks has also been improved by Julien
+    Elie.
+
+Changes in 2.4.4
+
+  * Fixed incomplete checking of packet sizes in the ctlinnd interface in
+    the no-Unix-domain-sockets case. This is a potential buffer overflow
+    in dead code, since basically all systems INN builds on support Unix
+    domain sockets these days. Also tracked the buffer size more
+    correctly on the client side of this interface for the Unix domain
+    socket case.
+
+  * Group blocks in incoming.conf are now correctly parsed and no longer
+    cause segfaults when loading this file.
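+
+    As a usage note, edits to the group blocks in incoming.conf can be
+    picked up without restarting the server; the reason string below is
+    arbitrary:
+
+        # tell innd to re-read incoming.conf
+        ctlinnd reload incoming.conf 'edited group blocks'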
+
+  * Fixed a problem with innfeed continuously segfaulting on amd64
+    hardware (and possibly on lots of 64-bit platforms). Many thanks to
+    Ollivier Robert for his patch and also to Kai Gallasch for having
+    reported the problem and provided the FreeBSD server to debug it.
+
+  * scanlogs now rotates innfeed's log file, which prevents innfeed from
+    silently dying when its log file reaches 2 GB.
+
+  * Perl 5.10 support has been added to INN thanks to Jakub Bogusz.
+
+  * Some news clients hang when posting an article through an SSL
+    connection: it seems that nnrpd's SSL routines wrongly make it wait
+    for data completion. In order to fix the problem, the select() wait
+    is now just bypassed. However, the IDLE timer stat is currently not
+    collected for such connections. Thanks to Kachun Lee for this
+    workaround.
+
+  * Fixed a bug in the display of the compressor used ("cunbatch" was
+    used if arguments were passed to gzip or bzip2).
+
+  * Fixed a bug in mailpost and pullnews which prevented useful error
+    messages from being seen. Also added the -x flag to pullnews in
+    order to insert Xref: headers in articles which lack one.
+
+  * If compiling with Berkeley DB, use its ndbm compatibility layer for
+    ckpasswd in preference to searching for a traditional dbm library.
+    INN also supports Berkeley DB 4.4, 4.5 and 4.6 thanks to Marco d'Itri.
+
+  * ovdb_init now properly closes stdin/out/err when it becomes a daemon.
+    The issue was reported by Viktor Pilpenok and fixed by Marco d'Itri.
+
+  * Added support for Diablo quickhash and hashfeed algorithms. This
+    allows messages to be distributed among several peers (new Q flag for
+    newsfeeds). Thanks to Miquel van Smoorenburg for this implementation
+    in INN.
+
+  * innd now listens on separate sockets for IPv4 and IPv6 connections if
+    the IPV6_V6ONLY socket option is available. There might also be
+    operating systems that still have separate IPv4 and IPv6 TCP
+    implementations, and advanced features like TCP SACK might not be
+    available on v6 sockets. Thanks to Miquel van Smoorenburg for this
+    patch.
+
+  * The two configuration options *bindaddress* and *bindaddress6* can now
+    be set on a per-peer basis for innfeed. Setting *bindaddress6* to
+    "none" tells innfeed never to attempt an IPv6 connection to that host.
+    Thanks to Miquel van Smoorenburg for this patch.
+
+  * Added a *nnrpdflags* parameter to inn.conf (modeled on the concept of
+    *innflags*) to permit passing of command line arguments to instances
+    of nnrpd spawned from innd.
+
+  * A new inn.conf parameter called *pathcluster* has been added: it
+    allows a common name to be appended to the Path: header on all
+    incoming articles. *pathhost* and *pathalias* (if set) are still
+    appended to the path as usual, but *pathcluster* is always appended
+    as the last element (i.e. on the leftmost side of the Path: header).
+    Thanks to Miquel van Smoorenburg for this feature.
+
+  * simpleftp has been rewritten to use "Net::FTP". Indeed, ftp.pl is no
+    longer shipped with Perl 5 and the script did not work.
+
+  * perl-nocem will now check for a timeout and re-open the socket if
+    required. Additionally, perl-nocem will switch to cancel_ctlinnd in
+    case cancel_nntp fails after sending the Message-ID. Thanks to
+    Christoph Biedl for the patch. More detailed documentation has also
+    been written for perl-nocem(8).
+
+  * The RADIUS configuration is now wrapped in a "server {}" block in
+    radius.conf.
+ + * Checkgroups when there is nothing to change no longer result in + sending a blank mail to administrators. Besides, no mail is sent by + controlchan for the creation of a newsgroup when the action is "no + change". + + * Checkgroups are now properly propagated even though the news server + does not carry the groups they are posted to. + + * controlchan and docheckgroups now handle wire format messages so that + articles from the spool can be directly fed to them. + + * Newgroup control messages for existing groups now change their + description. If a mail is sent to administrators, it reminds them to + update their newsgroups file. It also warns when there are missing or + obsolete descriptions. Furthermore, the newsgroups file is now + written prettier (from one to three tabulations between the name of + the group and its short description) and to.* groups cannot be + created. + + * The sample control.ctl file has been extensively updated. + + * Fixed empty LISTGROUP replies which were not terminated. Thanks to + David Canzi for the patch. + + * In response to a LIST [file] command, if the file does not exist, we + assume it is not maintained and return 503 instead of 215 and an empty + file. Moreover, capability to LIST ACTIVE.TIMES for a wildmat pattern + as its third argument has been added in order to select wanted + newsgroups. + + * inews now tries to authenticate if it does not receive a 200 return + code after MODE READER. Indeed, it might be able to post even with a + 201 return code and also with another codes like 440 or 480. + + * If creating a new history file, set the ownership and mode + appropriately. inncheck also expects fewer things to be private to + the news user. Most of the configuration files will never contain + private information like passwords. + + * Other minor bug fixes and documentation improvements. + +Changes in 2.4.3 + + * Previous versions of INN had an optimization for handling XHDR + Newsgroups that used the Xref: header from overview. While this does + make the command much faster, it doesn't produce accurate results and + breaks the NNTP protocol, so this optimization has been removed. + + * Fixed a bug in innd that allowed it to accept articles with duplicated + headers if the header occurred an odd number of times. Modified the + programs for rebuilding overview to use the last Xref: header if there + are multiple ones to avoid problems with spools that contain such + invalid articles. + + * Fixed yet another problem with verifying that a user has permissions + to approve posts to a moderated group. Thanks, Jens Schlegel. + + * Increase the send and receive buffer on the Unix domain socket used by + ctlinnd. This should allow longer replies (particularly for innstat) + on platforms with very low default Unix domain socket buffer sizes. + + * rnews's handling of articles with nul characters, NNTP errors, header + problems, and deferrals has been significantly improved. + + * Thomas Parmelan added support to send-uucp for specifying the funnel + or exploder site to flush for feeds managed through one and fixed a + problem with picking up old stranded work files. + + * Many other more minor bug fixes, optimization improvements, and + documentation fixes. + +Changes in 2.4.2 + + * INN is now licensed under a less restrictive license (about as + minimally restrictive as possible shy of public domain), and the + clause similar to the old BSD advertising clause has been dropped. 
+ + * "make install" and "make update" now always install the newly built + binaries, rather than only installing them if the modification times + are newer. This is the behavior that people expect. "make install" + now also automatically builds a new (empty) history database if one + doesn't already exist. + + * The embedded Tcl filter code has been disabled (and will be removed + entirely in the next major release of INN). It hasn't worked for some + time and causes innd crashes if compiled in (even if not used). If + someone wants to step forward and maintain it, I recommend starting + from scratch and emulating the Perl and Python filters. + + * ctlinnd should now successfully handle messages from INN up to the + maximum allowable packet size in the protocol, fixing problems sites + with many active peers were having with innstat output. + + * Overview generation has been fixed in both makehistory and innd to + follow the rules in the latest NNTP draft rather than just replacing + special characters with spaces. This means that the unfolding of + folded header lines will not introduce additional, incorrect + whitespace in the overview data. + + * nnrpd now uniformly responds with a 480 or 502 status code to attempts + to read a newsgroup to which the user does not have access, depending + on whether the user has authenticated. Previously, it returned a 411 + status code, claiming the group didn't exist, which confuses the + reactive authentication capability of news readers. + + * If a user is not authorized to approve articles (using the "A" + *access* control in readers.conf), articles that include Approved: + headers will be rejected even if posted to unmoderated groups. Some + other site may consider that group to be moderated. + + * The configuration parser used for readers.conf and others now + correctly handles "#" inside quoted strings and is more robust against + unmatched double quotes. + + * Messages mailed to moderators had two spaces after the colons in the + headers, rather than one. This bug has been fixed. + + * A bug that could cause heap corruption and random crashes in innd if + INN were compiled with Python support has been fixed. + + * Some problems with innd's tracking of article size and enforcement of + the configured maximum article size have been fixed. + + * pgpverify will now correctly verify signatures generated by GnuPG and + better supports GnuPG as the PGP implementation. + + * INN's code should now be more 64-bit clean in its handling of size_t, + pointer differences, and casting of pointers, correcting problems that + showed up on 64-bit platforms like AMD64. + + * Improved the error reporting in the history database code, in inews, + in controlchan, and in expire. + + * Many other, more minor bugs have also been fixed. + +Changes in 2.4.1 + + * SECURITY: Handle the special filing of control messages into per-type + newsgroups more robustly. This closes a potentially exploitable + buffer overflow. Thanks to Dan Riley for his excellent bug report. + + * Fixed article handling in innd so that articles without a Path: header + (arising from peers sending malformatted articles or injecting + malformatted articles through rnews) would not cause innd to crash. + (This was not exploitable.) + + * Fixed a serious bug in XPAT handling, thanks to Tommy van Leeuwen. + + * "configure" now looks for sendmail only in /usr/sbin and /usr/lib, not + on the user's path. 
This should reduce the need for --with-sendmail + if your preferred sendmail is in a standard location. + + * The robustness of the tradindexed overview method has been further + increased, handling more edge cases arising from corrupted databases + and oddly-named newsgroups. + + * innd now never decreases the high water mark of a newsgroup when + renumbering, which should help ameliorate overview and active file + synchronization problems. + + * Do not close and reopen the history file on ctlinnd reload when the + server is paused or throttled. This was breaking ctlinnd reload all + during a server pause. + + * Various minor portability and compilation issues fixed. Substantial + numbers of compiler warnings have been cleaned up, thanks largely to + work by Ilya Kovalenko. + + * Multiple other more minor bugs have been fixed. + + * Documentation and man pages have been clarified and updated. + +Upgrading from 2.3 to 2.4 + + The inn.conf parser has changed between INN 2.3 and 2.4. Due to that + change, options in inn.conf that contain whitespace or a few other + special characters must be quoted with double quotes, and empty + parameters (parameters with no value) are not allowed. INN 2.4 comes + with a script, innupgrade, run automatically during "make update", that + will attempt to fix any problems that it finds with your inn.conf file, + saving the original as inn.conf.OLD. + + This change is the beginning of standardization of parsing and syntax + across all of INN's configuration files. + + The history subsystem now has a standard API that allows other backends + to be used. Because of this, you now need to specify the history method + in inn.conf. Adding: + + hismethod: hisv6 + + will tell INN to use the same history backend as was used in previous + versions. innupgrade should take care of this for you. + + ovdb is known to have some locking and timing issues related to how + nnrpd shuts down (or fails to shut down) the overview databases. If you + have stability problems with ovdb, try setting *readserver* to "true" in + ovdb.conf. This will funnel all ovdb reads through a single process + with a cleaner interface to the underlying Berkeley DB database. + + If you use Perl authentication for nnrpd (if *nnrpdperlauth* in inn.conf + is "true"), there have been major changes. See "Changes to Perl + Authentication Support for nnrpd" in doc/hook-perl for details. + + Similarly, if you use Python authentication for nnrpd (if + *nnrpdpythonauth* in inn.conf is "true"), there have been major changes. + See "Changes to Python Authentication and Access Control Support for + nnrpd" in doc/hook-python for details. + + If you use send-uucp, it has been completely rewritten and now takes a + configuration file to specify its behavior. See its man page for more + information. If you use sendbatch, it is no longer included in INN + since the new send-uucp can handle all of the same functionality. + + The wildmat API has been renamed (to uwildmat and friends; see + uwildmat(3) for the interfaces) to distinguish it from Rich $alz's + original version, since it now supports UTF-8. This may require changes + in other software packages that link against INN's libraries. + + If you are upgrading from a version prior to INN 2.3, see "Upgrading + from 2.2 to 2.3". + +Changes in 2.4.0 + + * IPv6 support has been added, disabled by default. If you have IPv6 + connectivity, build with --enable-ipv6 to try it. 
There are no known + bugs, but please report any problems you find (or even successes, if + you use an unusual platform). There are a few changes of interest; + further information is available in doc/IPv6-info. + + * The tradindexed overview method has been completely rewritten and + should be considerably more robust in the face of system crashes. A + new utility, tdx-util, is provided to examine the contents of the + overview database, repair inconsistencies, and rebuild the overview + for particular groups from a tradspool news spool. See tdx-util(8) + for more details. + + * The Perl and Python authentication hooks for readers have been + extensively overhauled and integrated better with readers.conf. See + the Changes sections in doc/hook-perl and doc/hook-python for more + details. + + * nnrpd now optionally supports article injection via IHAVE, see + readers.conf(5). Any articles injected this way must have Date, From, + Message-ID, Newsgroups, Path, and Subject headers. X-Trace and + X-Complaints-To headers will be added if the appropriate options are + set in readers.conf, but other headers will not be modified/inserted + (e.g. NNTP-Posting-Host, NNTP-Posting-Date, Organization, Lines, Cc, + Bcc, and To headers). + + * nnrpd now handles arbitrarily long lines in POST and IHAVE; + administrators who want to limit the length of lines in locally posted + articles will need to add this to their local filters instead. + + * nnrpd no longer handles the poorly-specified RFC 977 optional fourth + argument to the NEWGROUPS command specifying the "distributions" that + the command was supposed to apply to. + + Clients that use that argument will break. There are not believed to + be any such clients, and it's easy enough to just filter the returned + list of newsgroups (which is generally fairly short) to achieve the + same results. + + * nnrpd no longer accepts UTC as a synonym for GMT for NEWGROUPS or + NEWNEWS. This usage was never portable, and was rejected by the NNTP + working group. It is being removed now in the hope that it will be + caught before anyone starts to rely on it. + + * innfeed supports a new peer parameter, *backlog-feed-first*, that if + set to "true" feeds any backlog to a peer before new articles, see + innfeed.conf(5). When used in combination with *max-connections* set + to 1, this can be used to enforce in-order delivery of messages to a + peer that is doing Xref slaving, avoiding cases where a + higher-numbered message is received before a lower-numbered message in + the same group. + + * Several other, more minor protocol issues have been fixed: + connections rejected due to the connection rate limiting in innd + receive 400 replies instead of 504 or 505, and ARTICLE without an + argument will always either retrieve the current article or return a + 423 error, never advance the current article number to the next valid + article. + + See doc/compliance-nntp for all of the known issues with INN's + compliance with the current NNTP draft. + + * All accesses to the history file for all parts of INN now go through a + generic API like the storage and overview subsystems do. This will + eventually allow new history implementations to be dropped in without + affecting the rest of INN, and will significantly improve the + encapsulation of the history subsystem. See the libinnhist(3) man + page for the details of the interface. + + * INN now uses a new parser for the inn.conf file. 
This means that + parameters containing whitespace or other special characters must now + be quoted; see inn.conf(5). It fixes the long-standing bug that + certain values must be included in inn.conf even if using the defaults + for the use of shell or Perl scripts, and it will serve as the basis + for standardizing and cleaning up the configuration file parsing in + other parts of INN. innupgrade is run during "make update" and should + convert an existing inn.conf file for you. + + * send-uucp has been replaced by a completely rewritten version from + Marco d'Itri, Edvard Tuinder, and Miquel van Smoorenburg, which uses a + configuration file that specifies batch sizes, compression methods, + and hours during which batches should be generated. The old sendbatch + script has been retired, since send-uucp can now handle everything + that it did. + + * Two "configure" options have changed names: --with-tmp-path is now + --with-tmp-dir, and --with-largefiles is now --enable-largefiles, to + improve consistency and better match the "autoconf" option guidelines. + + * Variables can now be used in the newsfeeds file to make it easier to + specify many similar feeds or feed patterns. See the newsfeeds(5) man + page for details. + + * Local connections to INN support a new special mode, MODE CANCEL, that + allows efficient batch cancellation of messages. This is intended to + be the preferred interface for external spam and abuse filters like + NoCeM. See "CANCEL FEEDS" in innd(8) for details. + + * Two new options, *nfsreader* and *nfswriter*, have been added to + inn.conf to aid in building NFS based shared reader/writer platforms. + On the writer server configure *nfswriter* to "true" and on all of the + readers configure *nfsreader* to "true"; these options add calls to + force data out to the NFS server and force it to be read directly from + the NFS server at the appropriate moments. Note that it has only been + tested on Solaris 8, using CNFS as the storage mechanism and + tradindexed as the overview method. + + * A new option, *tradindexedmmap*, has been added to inn.conf. If set + to "true" (the default), then the tradindexed overview method will use + mmap() to access its overview data (in 2.3 you couldn't control this; + it always used mmap). + + * Thanks to code contributed by CMU, innfeed can now feed an IMAP server + as well as other NNTP servers. See the man page for innfeed(8) for + more information. + + * An authenticator, auth_smb, that checks a username and password + against a remote Samba server is now included. See auth_smb(8) for + details. + + * The wildmat functions in INN now support UTF-8, in a way that should + allow them to still work with most simple 8-bit character sets in + widespread use. As part of this change, some additional wildmat + interfaces are now available and the names have changed (to uwildmat, + where "u" is for Unicode). See uwildmat(3) for the details. + + * The interface between external authenticators and nnrpd is now + properly documented, in doc/external-auth. A library implementing + this interface in C is provided, which should make it easier to write + additional authenticators resolvers. See libauth(3) for details, and + any of the existing programs in authprogs/ for examples. + + * Most (if not all) of the temporary file creation in INN now uses + functions that create temporary files properly and safely. + +Changes in 2.3.5 + + * Clients using POST are no longer permitted to provide an + Injector-Info: header. 
+ + * Fixed a bug causing posts with Followup-To: set to a moderated group + to be rejected if the posting user didn't have permission to approve + postings. + + * Fixed bugs in inncheck with setuid rnews or setgid inews, in + *innconfval* with inn.conf parameters containing shell metacharacters + but no spaces, and in parsedate.y with some versions of yacc. Fixed a + variety of size-related printf format warnings (e.g., %d vs. %ld) + thanks to the work of Winfried Szukalski. + +Changes in 2.3.4 + + * LIST ACTIVE no longer returns data when given a single group argument + if the client is not authorized to read that group. + + * XHDR and XPAT weren't correctly parsing article headers, resulting in + searches for the header "newsgroup" matching the header "newsgroups". + + * Made CNFS more robust against crashes by actually syncing the cycbuff + headers to disk as was originally intended. Fixed a memory leak in + the tradspool code. + + * Two bugs in pgpverify when using GnuPG were fixed: it now correctly + checks for gpgv (rather than pgp) when told to use GnuPG and expects + the keyring to be pubring.gpg (not pubring.pgp). + + * Substantial updates to the sample provided control.ctl file. + + * Compilation fixes with Perl 5.8.0, Berkeley DB 4.x, current versions + of Linux (including with large file support), and Tru64. inndf fixes + for ReiserFS. + + * Various bugs in the header handling in nnrpd have been fixed, + including hangs when using virtual domains and improper processing of + folded headers under certain circumstances. + + * Other minor bug fixes and documentation improvements. + +Changes in 2.3.3 + + * pgpverify now supports using GnuPG to check signatures (rather than + PGP) without the pgpgpg wrapper. GnuPG can check both old-style RSA + signatures and new OpenPGP signatures and is recommended over PGP 2.6. + If you have GnuPG installed, pgpverify will use it rather than PGP, + which means that you may have to create a new key ring for GnuPG to + use to verify signatures if you were previously using PGP. + + * Users can no longer post articles containing Approved: headers to + moderated groups by default; they must be specifically given that + permission with the *access* parameter in readers.conf. See the man + page for more details. + + * Two bugs in repacking overview index files and a reliability bug with + writing overview data were all fixed in the tradindexed overview + method, hopefully making it somewhat more reliable, particularly for + makehistory. + + * If rc.news.local exists in the INN binary directory, it will be run + with the start or stop argument whenever rc.news is run. This is + available as a hook for local startup and shutdown code. + + * The default history table hash sizes were increased because a + too-small value can cause serious performance problems (whereas a + too-large hash just wastes a bit of disk space). + + * The sample control.ctl file has been extensively updated. + + * Wildmat exclusions ("@" and "!") should now work properly in + storage.conf newsgroup patterns. + + * The implementation of the -w flag for expireover was fixed; + previously, the value given to -w to change expireover's notion of the + current time was scaled by too much. + + * Various other more minor bug fixes, standards compliance fixes, and + documentation improvements. + +Changes in 2.3.2 + + * innxmit can again handle regular filenames as input as well as storage + API tokens (allowing it to be used to import an old traditional + spool). 
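+
+    As a rough sketch of that use (the spool path and peer name below are
+    only placeholders, and innxmit(8) should be checked for whether file
+    names must be absolute or spool-relative):
+
+        # build a batch file listing traditional-spool article files
+        cd /usr/local/news/spool/articles
+        find . -type f -name '[0-9]*' -print > /tmp/batchfile
+        # offer them to the new server
+        innxmit peer.example.com /tmp/batchfile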
+ + * Several problems with tagged-hash history files have been fixed thanks + to the debugging efforts of Andrew Gierth and Sang-yong Suh. + + * A very long-standing (since INN 1.0!) NNTP protocol bug in nnrpd was + fixed. The response to an ARTICLE command retrieving a message by + Message-ID should have the Message-ID as the third word of the + response, not the fourth. Fixing this is reported to *possibly* cause + problems with some Netscape browsers, but other news servers correctly + follow the protocol. + + * Some serious performance problems with expiration of tradspool should + now be at least somewhat alleviated. tradspool and timehash now know + how to output file names for removal rather than tokens, and fastrm's + ability to remove regular files has been restored. This should bring + expiration times for tradspool back to within a factor of two of + pre-storage-API expiration times. + + * Added a sample subscriptions file and documentation for it and + innmail. + +Changes in 2.3.1 + + * inews no longer downloads the active file, no longer tries to send + postings to moderated groups to the moderator directly, and in general + duplicates less of the functionality of nnrpd, instead letting nnrpd + handle it. This fixes the problem of inews not working properly for + users other than news without being setgid. + + * Added a man page for ckpasswd. + + * A serious bug in the embedded Perl authentication hooks was fixed, + thanks to Jan Rychter. + + * The annoying compilation problem with embedded Perl filtering on Linux + systems without libgdbm installed should be fixed. + + * INN now complains loudly at "configure" time if the configured path + for temporary files is world-writeable, since this configuration can + be a security hole. + + * Many other varied bug fixes and documentation fixes of all sorts. + +Upgrading from 2.2 to 2.3 + + There may be additional things to watch out for not listed here; if you + run across any, please let know about them. + + Simply doing a "make update" is not sufficient to upgrade; the history + and overview information will also have to be regenerated, since the + formats of both files have changed between 2.2 and 2.3. Regardless of + whether you were using the storage API or traditional spool under 2.2, + you'll need to rebuild your overview and history files. You will also + need to add a storage.conf file, if you weren't using the storage API + under INN 2.2. A good default storage.conf file for 2.2 users would be: + + method tradspool { + newsgroups: * + class: 0 + } + + Create this storage.conf file before rebuilding history or overview. + + If you want to allow readers, or if you want to expire based on + newsgroup name, you need to tell INN to generate overview data and pick + an overview method by setting *ovmethod* in inn.conf. See INSTALL and + inn.conf(5) for more details. + + The code that generates the dbz index files has been split into a + separate program, makedbz. makehistory still generates the base history + file and the overview information, but some of its options have been + changed. To rebuild the history and overview files, use something like: + + makehistory -b -f history.n -O -T /usr/local/news/tmp -l 600000 + + (change the /usr/local/news/tmp path to some directory that has plenty + of temporary space, and leave off -O if you're running a transit-only + server and don't intend to expire based on group name, and therefore + don't need overview.) 
Or if your overview is buffindexed, use: + + makehistory -b -f history.n -O -F + + Both will generate a new history file as history.n and rebuild overview + at the same time. If you want to preseve a record of expired + Message-IDs in the history file, run: + + awk 'NF==2 { print; }' < history >> history.n + + to append them to the new history file you created above. Look over the + new history file and make sure it looks right, then generate the new + index files and move them into place: + + makedbz -s `wc -l < history.n` -f history.n + mv history.n history + mv history.n.dir history.dir + mv history.n.hash history.hash + mv history.n.index history.index + + (Rather than .hash and .index files, you may have a .pag file if you're + using tagged hash.) + + For reader machines, nnrp.access has been replaced by readers.conf. + There currently isn't a program to convert between the old format and + the new format (if you'd like to contribute one, it would be welcomed + gratefully). The new file is unfortunately considerably more complex as + a result of its new capabilities; please carefully read the example + readers.conf provided and the man page when setting up your initial + configuration. The provided commented-out examples cover the most + common installation (IP-based authentication for all machines on the + local network). + + INN makes extensive use of mmap(2) for the new overview mechanisms, so + at the present time NFS-mounting the spool and overview on multiple + reader machines from one central server probably isn't feasible in this + version. mmap tends to interact poorly with NFS (at the least, NFS + clients won't see updates to the mapped files in situations where they + should). (The preferred way to fix this would, rather than backing out + the use of mmap or making it optional, to add support for Diablo-style + header feeds and pull-on-demand of articles from a master server.) + + The flags for overchan have changed, plus you probably don't want to run + overchan at all any more. Letting innd write overview data itself + results in somewhat slower performance, but is more reliable and has a + better failure mode under high loads. Writing overview data directly is + the default, so in a normal upgrade from 2.2 to 2.3 you'll want to + comment out or remove your overchan entry in newsfeeds and set + *useoverchan* to "false" in inn.conf. + + crosspost is no longer installed, and no longer works (even with + traditional spool). If you have an entry for crosspost in newsfeeds, + remove it. + + If you're importing a traditional spool from a pre-storage API INN + server, it's strongly recommended that you use NNTP to feed the articles + to your new server rather than trying to build overview and history + directly from the old spool. It's more reliable and ensures that + everything gets put into the right place. The easiest way to do this is + to generate, on your old server, a list of all of your existing article + files and then feed that list to innxmit. Further details can be found + in the FAQ at . + + If you are using a version of Cleanfeed that still has a line in it + like: + + $lines = $hdr{'__BODY__'} =~ tr/\n/\n/; + + you will need to change this line to: + + $lines = $hdr{'__LINES__'}; + + to work with INN 2.3 or later. This is due to an internal optimization + of the interface to embedded filters that's new in INN 2.3. + +Changes in 2.3.0 + + * New readers.conf file (replaces nnrp.access) which allows more + flexible specification of access restrictions. 
Included in the sample + implementations is a RADIUS-based authenticator. + + * Unified overview has been replaced with an overview API, and there are + now three separate overview implementations to choose from. One + (tradindexed) is very like traditional overview but uses an additional + index file. The second (buffindexed) uses large buffers rather than + separate files for each group and can handle a higher incoming article + rate while still being fast for readers. The third (ovdb) uses + Berkeley DB to store overview information (so you need to have + Berkeley DB installed to use it). The *ovmethod* key in inn.conf + chooses the overview method to use. + + Note that ovdb has not been as widely tested as the other overview + mechanisms and should be considered experimental. + + * All article storage and retrieval is now done via the storage API. + Traditional spool is now available as a storage type under the storage + API. (Note that the current traditional spool implementation causes + nightly expire to be extremely slow for a large number of articles, so + it's not recommended that you use the tradspool storage method for the + majority of a large spool.) + + * The timecaf storage method has been added, similar to timehash but + storing multiple articles in a single file. See INSTALL for details + on it. + + * INN now supports embedded Python filters as well as Perl and Tcl + filters, and supports Python authentication hooks. + + * There is preliminary support for news reading over SSL, using OpenSSL. + + * To simplify anti-abuse filtering, and to be more compliant with news + standards and proposed standards, INN now treats as control messages + only articles containing a Control: header. A Subject: line beginning + with "cmsg " is no longer sufficient for a message to be considered a + control message, and the Also-Control: header is no longer supported. + + * The INN build system no longer uses subst. (This will be transparent + to most users; it's an improvement and modernization of how INN is + configured.) + + * The build and installation system has been substantially overhauled. + "make update" now updates scripts as well as binaries and + documentation, there is better support for parallel builds ("make + -j"), there is less "make" recursion, and far more of the + system-dependent configuration is handled directly by "autoconf". + libtool build support (including shared library support) should be + better than previous releases. + +Changes in 2.2.3 + + * inews is not installed setgid news and rnews is not installed setuid + root by default any more. If you need the old permissions, you have + to give a flag to configure. See INSTALL for more details. + + * Fixed a security hole when *verifycancels* was enabled in inn.conf + (not the default). + + * Message-IDs are now limited to 250 octets to prevent interoperability + problems with other servers. + + * Embedded Perl filters now work with Perl 5.6.0. + + * Lots of bug fixes and changes for security paranoia. + +Changes in 2.2.2 + + * Various minor bug fixes and a Y2K bug fix. The Y2K bug is in version + version 2.2.1 only and will show up after Jan 1st, 2000 when a news + reader issues a NEWNEWS command for a date prior to the year 2000. + +Changes in 2.2.1 + + * Various bug fixes, mostly notably fixes for potential buffer overflow + security vulnerabilities. + +Changes in 2.2.0 + + * New storage.conf file (replaces storage.ctl). 
+ + * New (optional) way of handling non-cancel control messages + (controlchan) that serializes them and prevents server overload from + control message storms. + + * Support for actsyncd to fetch active file with ftp; configured by + default to use if you + run actsyncd. Be sure to read the manual page for actsync to + configure an actsync.ign file for your site, and test simpleftp if you + do not "configure" with wget or ncftp. Also see + . + + * Some options to "configure" are now moved to inn.conf + (*merge-to-groups* and *pgp-verify*, without the hyphen). + + * inndf, a portable version of df(1), is supplied. + + * New cnfsstat program to show stats of CNFS buffers. + + * news2mail and mailpost programs for gatewaying news to mail and mail + to news are supplied. + + * pullnews program for doing a sucking feed is provided (not meant for + large feeds). + + * The innshellvars.csh.in script is obsolete (and lives in the obsolete + directory, for now). + diff --git a/README b/README new file mode 100644 index 0000000..722abc5 --- /dev/null +++ b/README @@ -0,0 +1,288 @@ +Welcome to INN 2.4! + + This work is sponsored by Internet Systems Consortium. + + Please see INSTALL for installation instructions, NEWS for what's + changed from the previous release, and LICENSE for the copyright, + license, and distribution terms. + +What is INN? + + INN (InterNetNews), originally written by Rich Salz, is an extremely + flexible and configurable Usenet / netnews news server. For a complete + description of the protocols behind Usenet and netnews, see RFC 1036 and + RFC 977 (or their replacements). In brief, netnews is a set of + protocols for exchanging messages between a decentralized network of + news servers. News articles are organized into newsgroups, which are + themselves organized into hierarchies. Each individual news server + stores locally all articles it has received for a given newsgroup, + making access to stored articles extremely fast. Netnews does not + require any central server; instead, each news server passes along + articles it receives to all of the news servers it peers with, those + servers pass the articles along to their peers, and so on, resulting in + "flood fill" propagation of news articles. + + A news server performs three basic functions: it accepts articles from + other servers and stores them on disk, sends articles it has received + out to other servers, and offers stored news articles to readers on + demand. It additionally has to perform some periodic maintenance tasks, + such as deleting older articles to make room for new ones. + + Originally, a news server would just store all of the news articles it + had received in a file system. Users could then read news by reading + the article files on disk (or more commonly using news reading software + that did this efficiently). These days, news servers are almost always + stand-alone systems and news reading is supported via network + connections. A user who wants to read a newsgroup opens that newsgroup + in their newsreader software, which opens a network connection to the + news server and sends requests for articles and related information. + The protocol that a newsreader uses to talk to a news server and that a + news server uses to talk to another news server over TCP/IP is called + NNTP (Network News Transport Protocol). + + INN supports accepting articles via either NNTP connections or via UUCP. 
+ innd, the heart of INN, handles NNTP feeding connections directly; UUCP + newsfeeds use rnews (included in INN) to hand articles off to innd. + Other parts of INN handle feeding articles out to other news servers, + most commonly innfeed (for real-time outgoing feeds) or nntpsend and + innxmit (used to send batches of news created by innd to a remote site + via TCP/IP). INN can also handle outgoing UUCP feeds. + + The part of INN that handles connections from newsreaders is nnrpd. + + Also included in INN are a wide variety of supporting programs to handle + periodic maintenance and recovery from crashes, process special control + messages, maintain the list of active newsgroups, and generate and + record a staggering variety of statistics and summary information on the + usage and performance of the server. + + INN also supports an extremely powerful filtering system that allows the + server administrator to reject unwanted articles (such as spam and other + abuses of Usenet). + + INN is free software, supported by Internet Systems Consortium and + volunteers around the world. See "Supporting the INN Effort" below. + +Prerequisites + + Compiling INN requires an ANSI C compiler (gcc is recommended). INN was + originally written in K&R C, but supporting pre-ANSI compilers has + become enough of a headache that a lot of the newer parts of INN will no + longer compile with a non-ANSI compiler. gcc itself will compile with + most vendor non-ANSI compilers, however, so if you're stuck with one, + installing gcc is highly recommended. Not only will it let you build + INN, it will make installing lots of other software much easier. You + may also need GNU make (particularly if your system make is + BSD-derived), although most SysV make programs should work fine. + Compiling INN also currently requires a yacc implementation (bison will + do fine). + + INN uses GNU autoconf to probe the capabilities of your system, and + therefore should compile on nearly any Unix system. It does, however, + make extensive use of mmap(), which can cause problems on some older + operating systems. See INSTALL for a list of systems it is known to + work on. If you encounter problems compiling or running INN, or if you + successfully run INN on a platform that isn't listed in INSTALL, please + let us know (see "Reporting Bugs" below). + + Perl 5.003 or later is required to build INN. Perl 5.004 is required if + you want the embedded Perl filter support (which is highly recommended; + some excellent spam filters have been written for INN). Since all + versions of Perl previous to 5.004 are buggy (including security + problems) and have fewer features, installing Perl 5.004 or later is + recommended. + + If you want to enable PGP verification of control messages (highly + recommended), you will need to have a PGP implementation installed. See + INSTALL for more details. + +Getting Started + + A news server can be a fairly complicated piece of software to set up + just because of the wide variety of pieces that have to be configured + (who is authorized to read from the server, what newsgroups it carries, + and how the articles are stored on disk at a bare minimum, and if the + server isn't completely stand-alone -- and very few servers are -- both + incoming and outgoing feeds have to be set up and tested). Be prepared + to take some time to understand what's going on and how all the pieces + fit together. 
If you have any specific suggestions for documentation, + or comments about things that are unclear, please send them to the INN + maintainers (see "Reporting Bugs" below). + + See INSTALL for step-by-step instructions for setting up and configuring + a news server. + + INN also comes with a very complete set of man pages; there is a man + page for every configuration file and program that comes with INN. (If + you find one that doesn't have a man page, that's a bug. Please do + report it.) When trying to figure out some specific problem, reading + the man pages for all of the configuration files involved is a very good + start. + +Reporting Bugs + + We're interested in all bug reports. Not just on the programs, but on + the documentation too. Please send *all* such reports to + + inn-bugs@isc.org + + (patches are certainly welcome, see below). Even if you post to Usenet, + please CC the above address. All other INN mail should go to + + inn@isc.org + + (please do *not* send bug reports to this address). + + If you have general "how do I do this" questions or problems configuring + your server that you don't believe are due to a bug in INN, you should + post them to news.software.nntp. A lot of experienced INN users, + including several of the INN maintainers, read that newsgroup regularly. + Please don't send general questions to the above addresses; those + addresses are specifically for INN, and the INN maintainers usually + won't have time to answer general questions. + +Contributing Code + + If you have a patch or a utility that you'd like to be considered for + inclusion into INN, please mail it to + + inn-patches@isc.org + + in the body of the message (not as an attachment), or put it on a + webpage and send a link. Patches included with a bug report as + described above should follow the same procedure, but need not be sent + to both addresses (either will do). + + Have fun! + +Mailing Lists + + There are various INN-related mailing lists you can join or send + messages to if you like. Some of them you must be a member of before + you can send mail to them (thank the spammers for that policy), and one + of them is read-only (no postings allowed). + + inn-announce@isc.org Where announcements about INN are set (only + maintainers may post). + + inn-workers@isc.org Discussion of INN development (postings by + members only). + + inn-patches@isc.org Where to send patches for consideration for + inclusion into INN (open posting). + + inn-committers@isc.org CVS commit messages for INN are sent to this + list (only the automated messages are sent here, + no regular posting). + + inn-bugs@isc.org Where to send bug reports (open posting). If + you're an INN expert and have the time to help + out other users, we encourage you to join this + mailing list to answer questions. (You may also + want to read the newsgroup news.software.nntp, + which gets a lot of INN-related questions.) + + To join these lists, send a subscription request to the "-request" + address. The addresses for the above lists are: + + inn-announce-request@isc.org + inn-workers-request@isc.org + inn-patches-request@isc.org + inn-committers-request@isc.org + inn-bugs-request@isc.org + +Who's Responsible / Who to Thank + + See CONTRIBUTORS for a long list of past contributors as well as people + from the inn-workers mailing list who have dedicated a lot of time and + effort to getting this new version together. They deserve a big round + of applause. They've certainly got our thanks. 
+ + This product includes software developed by UUNET Technologies, Inc. and + by the University of California, Berkeley and its contributors. + + Last, but certainly not least, Rich Salz, the original author of INN + deserves a lion's share of the credit for writing INN in the first place + and making it the most popular news server software on the planet (no + NNTP yet to the moon, but we plan to be there first). + +Related Packages + + INN users may also be interested in the following software packages that + work with INN or are based on it. Please note that none of this + software is developed or maintained by ISC; we don't support it and + generally can't answer questions about it. + + CleanFeed + URL: + + CleanFeed is an extremely powerful spam filter, probably the most + widely used spam filter on Usenet currently. It catches excessive + multiposting and a host of other things, and is highly configurable. + Note that it requires that INN be built with Perl support (the + --with-perl option to configure). + + GUP (Group Update Program) + URL: + + GUP provides a way for your peers to update their newsfeeds entries + as they want without having to ask you to edit the configuration + file all the time. It's useful when feeding peers who take limited + and very specific feeds that change periodically. + + inflow + URL: + + inflow generates graphs of news flow statistics in real time from + INN's logs (things like articles accepted per peer, volume accepted + per peer, and the like). + + News-Portal + URL: + + A PHP-based web news reader that works as a front-end to a regular + news server such as INN and lets people read and post without + learning a news reader. + + PersonalINN + URL: + + PersonalINN is a version of INN modified for personal use and with a + friendly GUI built on top of it. It is available for NeXTSTEP or + OPENSTEP only, unfortunately. + + suck + URL: + + suck is a separate package for downloading a news feed via a reading + connection (rather than via a direct NNTP or UUCP feed) and sending + outgoing local posts via POST. It's intended primarily for personal + or small-organization news servers who get their news via an ISP and + are too small to warrant setting up a regular news feed. + + newsx + URL: + + Serving the same purpose as suck, newsx is a separate package for + downloading a news feed via a reading connectino and sending + outgoing local posts via POST. Some people find suck easier to + configure and use, and some people find newsx easier. If you have + problems with one, try the other. + +Supporting the INN Effort + + Note that INN is supported by Internet Systems Consortium, and although + it is free for use and redistribution and incorporation into vendor + products and export and anything else you can think of, it costs money + to produce. That money comes from ISPs, hardware and software vendors, + companies who make extensive use of the software, and generally + kind-hearted folk such as yourself. + + Internet Systems Consortium has also commissioned a DHCP server + implementation and handles the official support/release of BIND. You + can learn more about the ISC's goals and accomplishments from the web + page at . + + Russ Allbery + Katsuhiro Kondou + diff --git a/TODO b/TODO new file mode 100644 index 0000000..28b655b --- /dev/null +++ b/TODO @@ -0,0 +1,847 @@ +This is a rough and informal list of suggested improvements to INN, parts +of INN that need work, and other tasks yet undone. 
Some of these may be +in progress, in which case the person working on them will be noted in +square brackets and should be contacted if you want to help. Otherwise, +let inn-workers@isc.org know if you'd like to work on any item listed +below. + +The list is divided into changes already tentatively scheduled for a +particular release, higher priority changes that will hopefully be done in +the near future, small or medium-scale projects for the future, and +long-term, large-scale problems. Note that just because a particular +feature is scheduled for a later release doesn't mean it can't be +completed earlier if someone decides to take it on. The association of +features with releases is intended to be a rough guide for prioritization +and a set of milestones to use to judge when a new major release is +justified. + +Also, one major thing that is *always* welcome is additions to the test +suite, which is currently very minimal. Any work done on the test suite +to allow more portions of INN to be automatically tested will make all +changes easier and will be *greatly* appreciated. + +Last modified $Id: TODO 7575 2006-09-11 22:59:38Z eagle $. + + +Scheduled for INN 2.5 + +* Rewrite configure, breaking all of the tests out into separate files + using the new capabilities in autoconf 2.5x. Replace our local macros + with the more general features provided by autoconf. At the same time, + configure.in and Makefile.global.in should be fixed to use the same + names as each other for various parameters. [Russ plans to work on + this.] + +* Add support for groups, nesting, and vectors to the new configuration + parsing code. [Russ plans on doing this.] + +* Convert readers.conf and storage.conf (and related configuration files) + to use the new parsing system and break out program-specific sections + of inn.conf into their own groups. + +* The current WIP cache and history cache should be integrated into the + history API, things like message ID hashing should become a selectable + property of the history file, and the history API should support + multiple backend storage formats and automatically select the right one + for an existing history file based on stored metainformation. + +* The interface to embedded filters needs to be reworked. The information + about which filters are enabled should be isolated in the filtering API, + and there should be standard API calls for filtering message IDs, remote + posts, and local posts. As part of this revision, all of the Perl + callbacks should be defined before any of the user code is loaded, and + the Perl loading code needs considerable cleanup. At the same time as + this is done, the implementation should really be documented; we do some + interesting things with embedded filters and it would be nice to have a + general document describing how we do it. [Russ is planning on working + on this at some point, but won't get upset if someone starts first.] + +* All of INN's documentation should be written in POD, with text and man + pages generated from the POD source. Anyone is encouraged to work on + this by just taking any existing documentation in man format and convert + it to POD while checking that it's still accurate and adding any + additional useful information that was missed. 
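As a purely illustrative sketch of the item above on reworking the interface
to embedded filters (standard API calls for filtering message IDs, remote
posts, and local posts), the C side of such an interface might look roughly
like this. Every name here is hypothetical and only meant to show the
intended shape, with the interpreter state hidden behind an opaque handle:

    /* Hypothetical unified interface to the embedded filters. */
    #include <stdbool.h>
    #include <stddef.h>

    struct filter;                      /* opaque; hides Perl/Python state */

    /* Load whichever filters are configured; returns NULL on failure. */
    extern struct filter *filter_open(void);

    /* The three standard entry points mentioned above.  Each returns true
       if the message ID or article should be accepted. */
    extern bool filter_message_id(struct filter *, const char *msgid);
    extern bool filter_remote_post(struct filter *, const char *article,
                                   size_t length, const char **reason);
    extern bool filter_local_post(struct filter *, const char *article,
                                  size_t length, const char **reason);

    extern void filter_close(struct filter *);

Keeping the knowledge of which filters are enabled behind a small set of
calls like these is what would let the Perl loading code be cleaned up
without touching the callers.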
+ +* Replace the current innshellvars.pl file with a real INN Perl module for + Perl programs, and include the necessary glue so that other Perl modules + can be added to INN's build tree and installed with INN, allowing their + capabilities to be available to the portions of INN written in Perl. + +* Switch nnrpd over to using the new wildmat routines rather than breaking + apart strings on commas and matching each expression separately. This + involves a lot of surgery, since PERMmatch is used all over the place, + and may change the interpretation of ! and @ in group permission + wildmats. + +* Rework and clean up the storage API. The major change is that the + initialization function should return a pointer to an opaque struct + which stores all of the state of the storage subsystem, rather than all + of that being stored in static variables, and then all other functions + should take that pointer. More of the structures should also be opaque, + all-caps structure names should be avoided in favor of named structures, + SMsetup and SMinit should be combined into one function that takes + flags, SMerrno and SMerrorstr should be replaced with functions that + return that information, and the wire format utilities should be moved + into libinn. + +* Rework and clean up the overview API. The major change is that the + initialization function should return a pointer to an opaque struct + which stores all of the state of the overview subsystem, rather than all + of that being stored in static variables, and then all other functions + should take that pointer. OVctl possibly should instead take and return + a struct rather than using an ioctl-style interface. Currently, the + overview functions do a lot of breaking apart of Xref headers and + parsing them, which is very ugly; consider having the overview interface + always key off a newsgroup name and article number, even for storing. + OVadd should probably take a structure and OVsearch should probably + return a structure. + + +Scheduled for INN 2.6 + +* Add a generic, modular anti-spam and anti-abuse filter, off by default, + but coming with INN and prominently mentioned in the INSTALL + documentation. [Andrew Gierth has work in progress that may be usable + for this.] + +* A unified configuration file combining the facilities of newsfeeds, + incoming.conf, and innfeed.conf, but hopefully more readable and easier + for new INN users to edit. This should have all of the capabilities of + the existing configuration files, but specifying common things (such as + file feeds or innfeed feeds) should be very simple and straightforward. + This configuration file should use the new parsing infrastructure. + +* Convert all remaining INN configuration files to the new parsing + infrastructure. + +* INN really should be capable of both sending and receiving a + headers-only feed (or even an overview-only feed) similar to Diablo and + using it for the same things that Diablo does, namely clustering, + pull-on-demand for articles, and the like. This should be implementable + as a new backend, although the API may need a few more hooks. Both a + straight headers-only feed that only pulls articles down via NNTP from a + remote server and a caching feed where some articles are pre-fed, some + articles are pulled down at first read, and some articles are never + stored locally should be possible. [Patches for a header-only feed have + already been written and submitted to inn-workers.] 
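To make the storage API item in the "Scheduled for INN 2.5" list above more
concrete, the kind of opaque-handle interface being described could look
something like the sketch below. These declarations are only an assumption
about the direction, not existing INN code; the point is simply that SMsetup
and SMinit collapse into one call taking flags, and that SMerrno and
SMerrorstr become accessors on the handle rather than globals:

    /* Hypothetical reworked storage interface with no static state. */
    struct storage;                          /* opaque subsystem state */

    enum storage_flags {
        STORAGE_READ  = 0x01,
        STORAGE_WRITE = 0x02,
        STORAGE_RDWR  = 0x03
    };

    /* One initialization call instead of SMsetup() followed by SMinit(). */
    extern struct storage *storage_open(enum storage_flags flags);

    /* Accessors replacing the SMerrno and SMerrorstr globals. */
    extern int         storage_errno(const struct storage *);
    extern const char *storage_strerror(const struct storage *);

    extern void storage_close(struct storage *);

The overview API item that follows it argues for exactly the same shape, with
OVctl taking and returning a structure instead of an ioctl-style interface.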
+ +* The libinn, libstorage, and other library interfaces should be treated + as stable libraries and properly versioned using libtool's + recommendation for library versioning when changes are made so that they + can be installed as shared libraries and work properly through releases + of INN. This is currently waiting on a systematic review of the + interface and removal of things that we don't want to support long-term. + +* The include files necessary to use libinn, libstorage, and other + libraries should be installed in a suitable directory so that other + programs can link against them. All such include files should be under + include/inn and included with . All such include files + should only depend on other inn/* header files and not on, e.g., + config.h. All such include files should be careful about namespace to + avoid conflicts with other include files used by applications. + + +High Priority Projects + +* Modulo warnings from system headers and warnings where the compiler is + simply wrong and there's no equally readable way to rewrite the code, + INN should compile cleanly under "make warnings". It should be possible + for maintainers to routinely compile INN with make warnings to catch + problems. Note that -Wcast-qual warnings cannot be avoided entirely + because we don't want to write redundant functions for regular and const + strings and because of such things as struct iovec; -Wcast-qual will be + removed from make warnings when this task is reasonably complete. + +* INN shouldn't flush all feeds (particularly all program feeds) on + newgroup or rmgroup. Currently it reloads newsfeeds to reparse all of + the wildmat patterns and rebuild the peer lists associated with the + active file on group changes, and this forces a flush of all feeds. + The best fix is probably to stash the wildmat pattern (and flags) for + each peer when newsfeeds is read and then just using the stashed copy on + newgroup or rmgroup, since otherwise the newsfeeds loading code would + need significant modification. But in general, innd is too + reload-happy; it should be better at making incremental changes without + reloading everything. + +* Add authenticated Path support, based on the current USEFOR draft or the + behavior of some other servers (such as Diablo). [Andrew Gierth wrote a + patch for part of this a while back, which Russ has. Marco d'Itri + expressed some interest in working on this.] + +* Various parts of INN are using write or writev; they should all use + xwrite or xwritev instead. Even for writes that are unlikely to ever be + partial, on some systems system calls aren't restartable and xwrite and + xwritev properly handle EINTR returns. + +* Apparently on Solaris open can also be interrupted by a signal; we may + need to have an xopen wrapper that checks for EINTR and retries. + +* tradspool has a few annoying problems. Deleted newsgroups never have + their last articles expired, and there is no way of forcibly + resynchronizing the articles stored on disk with what overview knows + about unless tradindexed is used. Some sort of utility program to take + care of these and to do things like analyze the tradspool.map file + should be provided. + +* Rewrite inndstart as a helper program that only binds the relevant + sockets and then returns them to innd. Since file descriptors are + shared by child processes, this can be done with a program spawned by + innd. This may have gotten more complicated with IPv6. 
Drop + startinnfeed entirely in favor of recommending people use ulimit in the + news init script. + +* contrib/mkbuf and contrib/reset-cnfs.c should be combined into a utility + for creating and clearing cycbuffs, perhaps combined with cnfsheadconf, + and the whole thing moved into storage/cnfs rather than frontends (along + with cnfsstat). pullart.c may also stand to be merged into the same + utility (cnfs-util might not be a bad name). + + +Documentation Projects + +* Add man pages for all libinn interfaces. There should be a subdirectory + of doc/pod for this since there will be a lot of them; installing them + as libinn_
.3 seems to make the most sense (so, for example,
+ error handling routines would be documented in libinn_error.3).
+
+* Better documentation of and support for UUCP feeds. send-uucp is now
+ easier to use, but there's still a paucity of documentation covering the
+ whole theory and mechanisms of UUCP feeding.
+
+* Everything installed by INN should have a man page. Currently, there
+ are several binaries and configuration files that don't have man pages.
+ (In some cases, the best thing to do with the configuration file may be
+ to merge it into another one or find a way to eliminate it.)
+
+* Document the internal formats of the various overview methods, CNFS,
+ timehash, and timecaf. A lot of this documentation already exists in
+ various forms, but it needs to be cleaned up and collected in one place
+ for each format, preferably as a man page.
+
+* Add documentation for slave servers. [Russ has articles from
+ inn-workers that can be used as a beginning.]
+
+* Write complete documentation for all of our extensions to RFC 977 or RFC
+ 1036, preferably in a format that could be suitable for future
+ inclusion into new revisions of the RFCs.
+
+* Audit readers.conf.5 against perm.c for missing options ("include" at
+ least is missing from the documentation).
+
+* The distributions file is undocumented.
+
+
+Code Cleanup Projects
+
+* Eliminate everything in the LEGACY section of config.h.
+
+* Move all compile-time configuration in config.h either into a separate
+ header (such as inn/options.h) or turn it into a configuration file
+ directive or a command-line option. In particular, the rnews
+ configuration should probably be an rnews-specific section of inn.conf.
+
+* Move include/paths.h to include/inn/paths.h and change _PATH as a prefix
+ to INN_PATH to move the identifiers out of the C reserved namespace.
+ Check to be sure we still need all of the #defines and look at adding
+ anything needed by innfeed (and eliminating the separate innfeed header
+ serving the same purpose).
+
+* Move include/nntp.h to include/inn/nntp.h and at the same time look at
+ standardizing the names of all of the #defines it provides, including
+ the message class. [Russ has a start on this.]
+
+* Get rid of GetTimeInfo and TIMEINFO. All the struct is is a struct
+ timeval plus time zone information. All of the parts of INN that deal
+ with time zone information are isolated in lib/date.c. The rest of INN
+ uses GetTimeInfo where a plain call to time would often work fine, or
+ at most gettimeofday, and there's no reason to compute the time zone
+ everywhere. Plus, it makes the code more readable to use standard
+ functions and data types.
+
+* putman.sh should be merged into support/install-sh (which would mean
+ giving up any pretext of using the standard install-sh script, but that
+ should be fine).
+
+* Use vectors or cvectors everywhere that argify and friends are currently
+ used and eliminate the separate implementation in nnrpd/misc.c.
+
+* Break up the remainder of libinn.h into multiple inn/* include files for
+ specific functions (such as memory management, wildmat, date handling,
+ NNTP commands, etc.), with an inn/util.h header to collect the remaining
+ random utilities. Consider adding some sort of prefix, like inn_, to all
+ functions that aren't part of some other logical set with its own prefix.
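The GetTimeInfo item above boils down to using the standard interfaces
directly; only lib/date.c would still need the time zone logic. A minimal,
self-contained example of the two calls that cover almost every caller:

    #include <stdio.h>
    #include <sys/time.h>
    #include <time.h>

    int main(void)
    {
        time_t now = time(NULL);          /* enough for most callers */
        struct timeval tv;

        gettimeofday(&tv, NULL);          /* when sub-second resolution matters */
        printf("time()       = %ld\n", (long) now);
        printf("gettimeofday = %ld.%06ld\n", (long) tv.tv_sec,
               (long) tv.tv_usec);
        return 0;
    }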
+
+* Break the CNFS and tradspool code into multiple source files to make it
+ easier to understand the logical divisions of the code and consider
+ doing the same with the other overview and storage methods.
+
+* Examine the (mostly socket) code that currently should probably be
+ compiled with -fno-strict-aliasing on gcc and move the relevant casts
+ to within function calls. [Russ knows about this.]
+
+* Clean up the use of #ifdef for sockets and IPv6, perhaps involving
+ addition of more to include/portable/socket.h.
+
+
+Needed Bug Fixes
+
+* tradspool currently uses stdio to write out tradspool.map, which can
+ cause problems if more than 256 file descriptors are in use for other
+ things (such as incoming connections or tradindexed overview cache).
+ It should use write() instead.
+
+* LIST NEWSGROUPS should probably only list newsgroups that are marked in
+ the active file as valid groups.
+
+* INN's startup script should be sure to clean out old lock files and PID
+ files for innfeed. Be careful, though, since innfeed may still be
+ running, spawned from a previous innd.
+
+* makedbz should be more robust in the presence of malformed history
+ lines, discarding them or otherwise dealing with them.
+
+* CNFS, if the cycbuff is larger than 2GB and it doesn't have large file
+ support, reports a mysterious file not found error because it assumes
+ all errors from stat are the result of the cycbuff not being found.
+
+* Some servers reject some IHAVE, TAKETHIS, or CHECK commands with 500
+ syntax errors (particularly for long message IDs), and innfeed doesn't
+ handle this particularly well at the moment. It really should have an
+ error handler for this case. [Sven Paulus has a preliminary patch that
+ needs testing.]
+
+* Editing the active file by hand can currently munge it fairly badly even
+ if the server is throttled unless you reload active before restarting
+ the server. This could be avoided for at least that particular case
+ by checking the mtime of active before and after the server was
+ throttled.
+
+* innreport silently discards news.notice entries about most of the errors
+ innfeed generates. It should ideally generate some summary, or at least
+ note that some error has occurred and the logs should be examined.
+
+* INN's message ID parser should be more forgiving about surrounding
+ whitespace. Right now, it will reject messages with a trailing space in
+ the Message-ID header.
+
+* nnrpd doesn't check the message ID of a posted article for syntactic
+ validity before remailing it to the moderator, since normally it relies
+ on innd to check the message ID. The message ID checking code from
+ innd/art.c should be moved into lib so that nnrpd can use it as well.
+
+* Currently, if the list of newsgroups on an Xref slave is out of sync
+ with the newsgroups on the master, receiving an article crossposted to
+ one of the groups that doesn't exist on the slave will cause the slave
+ to throttle. This isn't the best behavior; the server should either
+ optionally create the missing newsgroup or just ignore that crossposted
+ group (and modify Xref accordingly?).
+
+* Handling of compressed batches needs to be thoroughly reviewed by
+ someone who understands how they're supposed to work. It's not clear
+ that _PATH_GZIP is being used correctly at the moment and that
+ compressed batch handling will work right now on systems that don't have
+ gzip installed (but that do have uncompress).
+
+* innfeed's statistics don't add up properly all the time.
All of the + article dispositions don't add up to the offered count like they should. + Some article handling must not be recorded properly. + +* innd's counting of article size doesn't always work properly, and it can + accept articles that are larger than its configured limit. It's not + clear exactly where this is happening. + +* If a channel feed exits immediately, innd respawns it immediately, + causing thrashing of the system and a huge spew of errors in syslog. It + should mark the channel as dormant for some period of time before + respawning it, perhaps only if it's already died multiple times in a + short interval. + +* ctlinnd begin was causing innd to core dump. + +* Handling of innfeed's dropped batches needs looking at. There are three + places where articles can fall between the cracks: an innfeed.togo file + written by innd when the feed can't be spawned, a batch file named after + the feed name which can be created under similar circumstances, and the + dropped files written by innfeed itself. procbatch can clean these up, + but has to be run by hand. + +* When using tradspool, groups are not immediately added to tradspool.map + when created, making innfeed unable to find the articles until after + some period of time. Part of the problem here is that tradspool only + updates tradspool.map on a lazy basis, when it sees an article in that + group, since there is no storage hook for creation of a new group. + +* nntpget doesn't handle long lines in messages. + +* WP feeds break if there are spaces in the Path header, and the inn.conf + parser doesn't check for this case and will allow people to configure + their server that way. (It's not clear that the latter is actually a + bug, given the new USEFOR attempt to allow folding of Path headers, but + the space needs to be removed for WP feeds.) + +* Error handling in the history backend needs to be reviewed, since it + currently is always printing out errno regardless of whether it's + meaningful. The error handling needs to record errno if it's useful and + the reporting function should only print it out if it's useful for that + error. + +* innd returns 437 for articles that were accepted but filed in the junk + group. It should probably return the appropriate 2xx status code in + that case instead. + +* Someone should go through the BUGS sections of all of the manpages and + fix those for which the current behavior is unacceptable. + + +Requested New Features + +* Consider implementing the HEADERS command as discussed rather + extensively in news.software.nntp. [Greg Andruk has a preliminary + patch.] + +* There have been a few requests for the ability to programmatically set + the subject of the report generated by news.daily, with escapes that are + filled in by the various pieces of information that might be useful. + +* A bulk cancel command using the MODE CANCEL interface. Possibly through + ctlinnd, although it may be a bit afield of what ctlinnd is currently + for. + +* Sven Paulus's patch for nnrpd volume reports should be integrated. See + . + +* Lots of people encrypt X-Trace in various ways. Should that be offered + as a standard option? The first data element should probably remain + unencrypted so that the O flag in newsfeeds doesn't break. + + Should there also be an option not to generate X-Trace? And this whole + area may change if USEFOR ever standardizes poster trace information; + it's been proposed to put it in the path tail instead. 
The current + USEFOR trend as of January, 2001 appears to be towards an Injector-Info + header with this information, allowing a token or an injecting hostname. + For a token, one really wants it to be hierarchically structured for + spam filtering even if it's encrypted (in other words, to get a "group" + of clients, one could just match the first n bytes of the token instead + of the whole thing). + + Olaf Titz suggests: + + This can be done by formatting the (rest of) the header in a way + that fields are always a multiple of 8 bytes and applying a 64 bit + block cipher in ECB mode on it. But then we would be better off + using binary fields, as the timestamp is 9 bytes and an IP address + 10-12 bytes. + + Combining the timestamp and PID into one block, adding an + authenticated user field and omitting the redundant formatted time + would give the following format: + + X-Trace: g212.hadiko.de [395109AA000016FF] [AC14302A00000000] [...] + time | pid ip |reserved user + +* ctlinnd flushlogs currently renames all of the log files. It would be + nice to support the method of log rotation that most other daemons + support, namely to move the logs aside and then tell innd to reopen its + log files. Ideally, that behavior would be triggered with a SIGHUP. + scanlogs would have to be modified to handle this. + + The best way to support this seems to be to leave scanlogs as is by + default, but also add two additional modes. One would flush all the + logs and prepare for the syslog logs to be rotated, and the other would + do all the work needed after the logs have been rotated. That way, if + someone wanted to plug in a separate log rotation handler, they could do + so and just call scanlogs on either side of it. The reporting portions + of scanlogs should be in a separate program. + +* Several people have Perl interfaces to pieces of INN that should ideally + be part of the INN source tree in some fashion. Greg Andruk has a bunch + of stuff that Russ has copies of, for example. + +* Investigate using the new, stricter date parsing code in libinn for + nnrpd rather than the extremely lenient parsedate routine. + +* There are various available patches for Cancel-Lock and an Internet + draft; support should be added to INN for both generation and + verification (definitely optional and not on by default at this point). + +* It would be nice to be able to reload inn.conf (although difficult, due + to the amount of data that's generated from it and stashed in various + places). This will need to wait for the new configuration parsing + library and an inn.conf parser that uses it. + +* remembertrash currently rejects and remembers articles with syntax + errors as well as things like unwanted newsgroups and unwanted + distributions, which means that if a peer sends you a bunch of mangled + articles, you'll then also reject the correct versions of the articles + from other peers. This should probably be rethought. + +* Additional limits for readers.conf: Limit on concurrent parallel reader + streams, limit on KB/second download (preliminary support for this is + already in), and a limit on maximum posted articles per day (tied in + with the backoff stuff?). These should be per-IP or per-user, but + possibly also per-access group. (Consider pulling the -H, -T, -X, and + -i code out from innd and using it here.) + +* timecaf should have more configurable parameters (at the least, how + frequently to switch to a new CAF file should be an option). 
+ storage.conf should really be extended to allow method-specific + configuration for things like this (and to allow the cycbuff.conf file + to be merged into storage.conf). + +* Allow generation of arbitrary additional information that could go in + overview by using embedded Perl or Python code. This might be a cleaner + way to do the keywords code, which really wants Perl's regex engine + ideally. It would also let one do something like doing MD5 hashes of + each article and putting that in the overview if you care a lot about + making sure that articles aren't corrupted. + +* Allow some way of accepting articles regardless of the Date header, even + if it's far into the future. Some people are running into articles that + are dated years into the future for some reason that they still want to + store on the server. + +* There was a request to make --program-suffix and the other name + transformation options to autoconf work. The standard GNU package does + this with really ugly sed commands in the Makefile rules; we could + probably do better, perhaps by substituting the autoconf results into + support/install-sh. + +* INN currently uses hash tables to store the active file internally. It + would be worth trying ternary search trees to see if they're faster; the + data structure is simpler, performance may be comparable for hits and + significantly better for misses, sizing and resizing becomes a non-issue, + and the space penalty isn't too bad. A generic implementation is already + available in libinn. (An even better place to use ternary search trees + may be the configuration parser.) + +* Provide an innshellvars equivalent for Python. + +* inncheck should check the syntax of all the various files that are + returned by LIST commands, since having those files present with the + wrong syntax could result in non-compliant responses from the server. + Possibly the server should also refuse to send malformatted lines to + the client. + +* ctlinnd reload incoming.conf could return a count of the hosts that + failed, or even better a list of them. This would make pruning old + stuff out of incoming.conf much easier. + +* nnrpd could use sendfile(2), if available, to send articles directly + to the socket (for those storage methods where to-wire conversion is + not needed). This would need to be added to the storage API. + +* Somebody should look at keeping the "newsgroups" file more accurate + (e.g. newgroups for existing groups should change description, better + checkgroups handling, checking for duplicates) + +* The by-domain statistics innreport generates for nnrpd count all local + connections (those with no "." in the hostname) in with the errors as + just "?". The host2dom function could be updated to group these as + something like "Local". + +* news.daily could detect if expire segfaults and unpause the server. + +* When using SSL, track the amount of data that's been transferred to the + client and periodically renegotiate the session key. + +* When using SSL, use SSL_get_peer to get a verified client certificate, + if available, and use it to create an additional header line when + posting articles (X-Auth-Poster?). This header could use: + + X509_NAME_oneline(X509_get_subject_name(peer),...) + + for the full distinguished name, or + + X509_name_get_text_by_NID(X509_get_subject_name(peer), + NID_commonName, ...) + + for the client's "common name" alone. 
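A rough sketch of how the client-certificate idea above might be wired up,
assuming OpenSSL; the header name X-Auth-Poster is just the possibility
floated in that item, and none of this is existing nnrpd code:

    #include <stdio.h>
    #include <openssl/ssl.h>
    #include <openssl/x509.h>

    /* Append a poster-identification header if the client presented a
       verified certificate on this SSL connection. */
    static void add_auth_poster(SSL *ssl, FILE *article)
    {
        X509 *peer = SSL_get_peer_certificate(ssl);
        char  cn[256];

        if (peer == NULL)
            return;                        /* no client certificate */
        if (X509_NAME_get_text_by_NID(X509_get_subject_name(peer),
                                      NID_commonName, cn, sizeof(cn)) > 0)
            fprintf(article, "X-Auth-Poster: %s\r\n", cn);
        X509_free(peer);
    }

X509_NAME_oneline() on the same subject name gives the full distinguished
name instead of the common name alone, as the item notes.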
+
+* When using SSL, use the server's key to generate an HMAC of the body
+ of the message (and most headers?), then include that digest in the
+ headers. This allows a news administrator to determine if a complaint
+ about the content of a message is fraudulent since the message was
+ changed after transmission.
+
+
+General Projects
+
+* All the old packages in unoff-contrib should be reviewed for integration
+ into INN.
+
+* It may be better for INN on SysV-derived systems to use poll rather than
+ select. The semantics are better, and on some systems (such as Solaris)
+ select is limited to 1024 file descriptors whereas poll can handle any
+ number. Unfortunately, the API is drastically different between the
+ two and poll isn't portable, so supporting both cleanly would require a
+ bit of thought.
+
+* Currently only innd and innfeed increase their file descriptor limits.
+ Other parts of INN, notably makehistory, may benefit from doing the same
+ thing if they can without root privileges.
+
+* The Tcl filtering support code has undergone serious bitrot and needs
+ some work to fix it and make it work with modern versions of Tcl and the
+ current version of INN. It also lacks a lot of the functionality of the
+ Perl and Python filters, if anyone cares.
+
+* Revisit support for aliased groups and what nnrpd does with them.
+ Should posts to the alias automatically be redirected to the real group?
+ Regardless, the error return should provide useful information about
+ where to post instead. Also, the new overview API, for at least some of
+ the overview methods, truncated the group status at one character and
+ lost the name of the group to which a group is aliased; that needs to be
+ fixed.
+
+* More details as to why a message ID is bad would be useful to return to
+ the user, particularly for rnews, inews, etc. innd also rejects message
+ IDs with trailing spaces, which can be hard to check.
+
+* Support putting the active file and history file in different
+ directories without hand-editing a bunch of files.
+
+* nnrpd's NNTP command parsing interacts poorly with AUTHINFO and
+ passwords containing spaces. The correct solution isn't clear; check
+ with the current NNTP RFC draft and how existing clients handle it?
+
+* frontends/pullnews and contrib/backupfeed solve the same problem; the
+ best ideas of both should be unified into one script.
+
+* actsyncd could stand a rewrite and cleaner handling of both
+ configuration and syncing against multiple sources which are canonical
+ for different sets of groups.
+
+* send-nntp and nntpsend basically do the same thing; send-nntp could
+ probably be removed (possibly with some extra support in nntpsend for
+ doing simpler things).
+
+
+Long-Term Projects
+
+* Look at turning header parsing into a library of some sort. Lots of INN
+ does this, but different parts of INN need subtly different things, so
+ the best API is unclear.
+
+* INN's header handling needs to be checked against the current USEFOR
+ draft. This may want to wait until after we have a header parsing library.
+
+* The innd filter should be able to specify additional or replacement
+ groups into which an article should be filed, or even spool the article
+ to a local disk file rather than storing it. (See the stuff that the
+ nnrpd filter can already do.)
+
+* Add authentication via SASL to nnrpd.
This is a boatload of additional + issues, particularly if we want to add authentication methods like + Kerberos that require their own separate libraries (although we should + use Cyrus's SASL libraries, which will simplify a lot of that). + [Jeffrey Vinocur is working on a standard for this.] + +* When articles expire out of a storage method with self-expire + functionality, the overview and history entries for those articles + should also be expired immediately. Otherwise, things like the GROUP + command don't give the correct results. This will likely require a + callback that can be passed to CNFS that is called to do the overview + and history cleanup for each article overwritten. It will also require + the new history API. + +* Feed control, namely allowing your peers to set policy on what articles + you feed them (not just newsgroups but max article size and perhaps even + filter properties like "non-binary"). Every site does this a bit + differently. Some people have web interfaces, some people use GUP, some + people roll their own alternate things. It would really be nice to have + some good way of doing this as part of INN. It's worth considering an + NNTP extension for this purpose, although the first step is to build a + generic interface that an NNTP extension, a web page, etc. could all + use. (An alternate way of doing this would be to extend IHAVE to pass + the list of newsgroups as part of the command, although this doesn't + seem as generally useful.) + +* Traffic classification as an extension of filtering. The filter should + be able to label traffic as binary (e.g.) without rejecting it, and + newsfeeds should be extended to allow feeding only non-binary articles + (e.g.) to a peer. + +* External authenticators should also be able to do things like return a + list of groups that a person is allowed to read or post to. Currently, + maintaining a set of users and a set of groups, each of which some + subset of the users is allowed to access, is far too difficult. For a + good starting list of additional functionality that should be made + available, look at everything the Perl authentication hooks can do. + This should probably wait for the configuration file parsing rewrite. + +* Allow nnrpd to spawn long-running helper processes. Not only would this + be useful for handling authentication (so that the auth hooks could work + without execing a program on every connection), but it may allow for + other architectures for handling requests (such as a pool of helpers + that deal only with overview requests). More than that, nnrpd should + *be* a long-running helper process that innd can feed open file + descriptors to. [Aidan Culley has ideas along these lines.] + +* The tradspool storage method requires assigning a number to every + newsgroup (for use in a token). Currently this is maintained in a + separate tradspool.map file, but it would be much better to keep that + information in the active file where it can't drop out of sync. A code + assigned to each newsgroup would be useful for other things as well, + such as hashing the directories for the tradindexed overview. For use + for that purpose, though, the active file would have to be extended to + include removed groups, since they'd need to be kept in the active file + to reserve their numbers until the last articles expired. 
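The suggestion a few items above that nnrpd become a long-running helper to
which innd can feed open file descriptors would almost certainly rest on
Unix-domain descriptor passing. A minimal sketch of the sending side, purely
to show the mechanism (none of these names are INN's):

    #include <string.h>
    #include <sys/socket.h>
    #include <sys/uio.h>

    /* Pass the open connection fd across the Unix-domain socket chan. */
    static int pass_connection(int chan, int fd)
    {
        char byte = 0;
        struct iovec iov = { &byte, 1 };
        union {
            struct cmsghdr hdr;
            char buf[CMSG_SPACE(sizeof(int))];
        } control;
        struct msghdr msg;
        struct cmsghdr *cmsg;

        memset(&msg, 0, sizeof(msg));
        memset(&control, 0, sizeof(control));
        msg.msg_iov = &iov;
        msg.msg_iovlen = 1;
        msg.msg_control = control.buf;
        msg.msg_controllen = sizeof(control.buf);

        cmsg = CMSG_FIRSTHDR(&msg);
        cmsg->cmsg_level = SOL_SOCKET;
        cmsg->cmsg_type = SCM_RIGHTS;          /* carries file descriptors */
        cmsg->cmsg_len = CMSG_LEN(sizeof(int));
        memcpy(CMSG_DATA(cmsg), &fd, sizeof(int));

        return sendmsg(chan, &msg, 0) < 0 ? -1 : 0;
    }

The receiving helper does the mirror image with recvmsg() and then serves the
connection as if it had accepted it itself.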
+
+* The locking of the active file leaves something to be desired; in
+ general, the locking in INN (for the active file, the history file,
+ spool updates, overview updates, and the like) needs a thorough
+ inspection and some cleanup. A good place to start would be tracing
+ through the pause and throttle code and writing up a clear description of
+ what gets locked where and what is safely restarted and what isn't.
+ Long term, there needs to be a library locking routine used by
+ *everything* that needs to write to the history file, active file, etc.
+ and that keeps track of the PID of the process locking things and is
+ accessible via ctlinnd.
+
+* There is a fundamental problem with the current design of the
+ control.ctl file. It combines two things: A database of hierarchies,
+ their maintainers, and related information, and a list of which
+ hierarchies the local server should honor. These should be separated
+ out into the database (which could mostly be updated from a remote
+ source like ftp.isc.org and then combined with local additions) and a
+ configured list of hierarchies (or sub-hierarchies within hierarchies)
+ that control messages should be honored for. This should be reasonably
+ simple although correct handling of checkgroups could get a mite tricky.
+
+* Possible NNTP extension: Compression of the protocol, using gzip,
+ bzip2, or some other technique. Particularly useful for long lists like
+ the active file information or the overview information, but possibly
+ useful in general for other things.
+
+* Install wizards. Configuring INN is currently very complex even for an
+ experienced news admin, and there are several fairly standard
+ configurations that shouldn't be nearly that complicated to get running
+ out of the box. A little interactive Perl script asking some simple
+ questions could probably get a lot of cases easily right.
+
+* One ideally wants to be able to easily convert between different
+ overview formats or storage methods, refiling articles in place. This
+ should be possible once we have a history API that allows changing the
+ storage location of an article in-place.
+
+* Set up the infrastructure required so that INN can use alloca. This
+ would significantly decrease the number of calls to malloc needed and
+ would be a lot more convenient.
+
+* A serious investigation into whether INN could use a garbage collector
+ is probably a good idea. The network buffers probably need to be
+ handled with dedicated code, but there are a lot of other incidental
+ allocations and deallocations that may be much more efficient and safer
+ using a garbage collector.
+
+* Look at integrating asprintf and vasprintf. Russ already tried this
+ once and couldn't see a good way of doing it (particularly vasprintf)
+ without hooking deep into an sprintf implementation, because the simple
+ hack of calling vsnprintf first, allocating that much memory, and then
+ calling it again on the new buffer doesn't work for vasprintf (you can't
+ reprocess the arguments).
+
+* Support building in a separate directory from the source tree. It may
+ be best to just support this via lndir rather than try to do it in
+ configure, but it would be ideal to add support for this to the autoconf
+ system. Unfortunately, the standard method requires letting configure
+ generate all of the makefiles, which would make running configure and
+ config.status take much longer than it does currently.
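One note on the asprintf/vasprintf item above: on systems that provide C99's
va_copy, the two-pass vsnprintf approach does become workable, because the
copied argument list can be consumed a second time. A sketch of that
approach, not the code INN ships:

    #include <stdarg.h>
    #include <stdio.h>
    #include <stdlib.h>

    /* Format into a freshly allocated string; returns length or -1. */
    static int my_vasprintf(char **out, const char *fmt, va_list args)
    {
        va_list args2;
        int needed;

        va_copy(args2, args);                 /* reusable copy of the args */
        needed = vsnprintf(NULL, 0, fmt, args);
        if (needed < 0 || (*out = malloc((size_t) needed + 1)) == NULL) {
            va_end(args2);
            *out = NULL;
            return -1;
        }
        vsnprintf(*out, (size_t) needed + 1, fmt, args2);
        va_end(args2);
        return needed;
    }

Where va_copy is missing, the objection in the item stands, so any such
replacement would presumably still sit behind a configure check.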
+ +* Look at adding some kind of support for MODE CANCEL via network sockets + and fixing up the protocol so that it could possibly be standardized + (the easiest thing to do would probably be to change it into a CANCEL + command). If we want to get to the point where INN can accept and even + propagate such feeds from dedicated spam filters or the like, there must + also be some mechanism of negotiating policy in order to decide what + cancels the server wants to be fed. + +* The "possibly signed" char data type is one of the inherent flaws of C. + Some other projects have successfully gotten completely away from this + by declaring all of their strings to be unsigned char, defining a macro + like U that casts strings to unsigned char for use with literal strings, + and always using unsigned char everywhere. Unfortunately, this also + requires wrappering all of the standard libc string functions, since + they're prototyped as taking char rather than unsigned char. The + benefits include cleaner and consistent handling of characters over 127, + better warnings from the compiler, consistent behavior across platforms + with different notions about the signedness of char, and the elimination + of warnings from the macros on platforms like Solaris where + those macros can't handle signed characters. We should look at doing + this for INN. + +* It would clean up a lot of code considerably if we could just use mmap + semantics regardless of whether the system has mmap. It may be possible + to emulate mmap on systems that don't have it by reading the entirety of + the file into memory and setting the flags that require things to call + mmap_flush and mmap_invalidate on a regular basis, but it's not clear + where to stash the file descriptor that corresponds to the mapped file. + +* Figure out some Samba library that we can link against for the Samba + authenticator so that we can get all the Samba code back out of INN's + source tree; we don't want to maintain it. + +* Consider replacing the awkward access: parameter in readers.conf with + separate commands (e.g. "allow_newnews: true") or otherwise cleaning up + the interaction between access: and read:/post:. Note that at least + allownewnews: can be treated as a setting for overriding inn.conf and + should be very easy to add. + +* Add a localport: parameter (similar to localaddress:) to readers.conf + auth groups. With those two parameters (and ssl_required:) we + essentially eliminate the need to run multiple instances of nnrpd just to + use different configurations. + +* Various things may break when trying to use data written while compiled + with large file support using a server that wasn't so compiled (and vice + versa). The main one is the history file, but tradindexed is also + affected and buffindexed has been reported to have problems with this + as well. Ideally, all of INN's data files should be as portable as + possible. + + +Complete Code Reorganization + +At some point, we should probably abandon and archive the current CVS +repository, reimport all of the current source files, and start with a +fresh repository with a better revision control system such as Subversion. +A better revision control system would let us rename and move things +around arbitrarily, something CVS doesn't handle at all well. Should this +ever be done, we should consider doing all of the following at the same +time: + +* Don't include any generated files in the CVS tree. 
Maintainers should + have autoconf and friends, pod2text and pod2man, and bison around anyway. + This would save a bunch of extra check-ins, remove the danger of the + generated files getting out of sync, and drastically reduce the + repository size in the case of configure. + +* Don't include any of the generated man pages in the CVS tree, as an + additional case of the above. All of the documentation should be in POD + and we can generate the man pages as part of the snapshot process. + +* storage should be reserved just for article storage; the overview + methods should be in a separate overview tree. + +* The split between frontends and backends is highly non-intuitive. Some + better organization scheme should be arrived at. Perhaps something + related to incoming and outgoing, with programs like cnfsstat moved into + the storage directory with the other storage-related code? + +* Add a separate utils directory for things like convdate, shlock, + shrinkfile, and the like. Some of the scripts may possibly want to go + into that directory too. + +* The lib directory possibly should be split so that it contains only code + always compiled and part of INN, and the various replacements for + possibly missing system routines are in a separate directory (such as + replace). These should possibly be separate libraries; there are things + that currently link against libinn that only need the portability + pieces. + +* The doc directory really should be broken down further by type of + documentation or section or something; it's getting a bit unwieldy. + +* Untabify and reformat all of the code according to a consistent coding + style which would then be enforced for all future check-ins. diff --git a/aclocal.m4 b/aclocal.m4 new file mode 100644 index 0000000..066bf6a --- /dev/null +++ b/aclocal.m4 @@ -0,0 +1,3573 @@ +# libtool.m4 - Configure libtool for the host system. -*-Shell-script-*- +## Copyright 1996, 1997, 1998, 1999, 2000, 2001 +## Free Software Foundation, Inc. +## Originally by Gordon Matzigkeit , 1996 +## +## This program is free software; you can redistribute it and/or modify +## it under the terms of the GNU General Public License as published by +## the Free Software Foundation; either version 2 of the License, or +## (at your option) any later version. +## +## This program is distributed in the hope that it will be useful, but +## WITHOUT ANY WARRANTY; without even the implied warranty of +## MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU +## General Public License for more details. +## +## You should have received a copy of the GNU General Public License +## along with this program; if not, write to the Free Software +## Foundation, Inc., 59 Temple Place - Suite 330, Boston, MA 02111-1307, USA. +## +## As a special exception to the GNU General Public License, if you +## distribute this file as part of a program that contains a +## configuration script generated by Autoconf, you may include it under +## the same distribution terms that you use for the rest of that program. + +# serial 46 AC_PROG_LIBTOOL + +AC_DEFUN([AC_PROG_LIBTOOL], +[AC_REQUIRE([AC_LIBTOOL_SETUP])dnl + +# This can be used to rebuild libtool when needed +LIBTOOL_DEPS="$ac_aux_dir/ltmain.sh" + +# Always use our own libtool. 
+LIBTOOL='$(SHELL) $(top_builddir)/libtool' +AC_SUBST(LIBTOOL)dnl + +# Prevent multiple expansion +define([AC_PROG_LIBTOOL], []) +]) + +AC_DEFUN([AC_LIBTOOL_SETUP], +[AC_PREREQ(2.13)dnl +AC_REQUIRE([AC_ENABLE_SHARED])dnl +AC_REQUIRE([AC_ENABLE_STATIC])dnl +AC_REQUIRE([AC_ENABLE_FAST_INSTALL])dnl +AC_REQUIRE([AC_CANONICAL_HOST])dnl +AC_REQUIRE([AC_CANONICAL_BUILD])dnl +AC_REQUIRE([AC_PROG_CC])dnl +AC_REQUIRE([AC_PROG_LD])dnl +AC_REQUIRE([AC_PROG_LD_RELOAD_FLAG])dnl +AC_REQUIRE([AC_PROG_NM])dnl +AC_REQUIRE([AC_PROG_LN_S])dnl +AC_REQUIRE([AC_DEPLIBS_CHECK_METHOD])dnl +AC_REQUIRE([AC_OBJEXT])dnl +AC_REQUIRE([AC_EXEEXT])dnl +dnl + +_LT_AC_PROG_ECHO_BACKSLASH +# Only perform the check for file, if the check method requires it +case $deplibs_check_method in +file_magic*) + if test "$file_magic_cmd" = '$MAGIC_CMD'; then + AC_PATH_MAGIC + fi + ;; +esac + +AC_CHECK_TOOL(RANLIB, ranlib, :) +AC_CHECK_TOOL(STRIP, strip, :) + +ifdef([AC_PROVIDE_AC_LIBTOOL_DLOPEN], enable_dlopen=yes, enable_dlopen=no) +ifdef([AC_PROVIDE_AC_LIBTOOL_WIN32_DLL], +enable_win32_dll=yes, enable_win32_dll=no) + +AC_ARG_ENABLE(libtool-lock, + [ --disable-libtool-lock avoid locking (might break parallel builds)]) +test "x$enable_libtool_lock" != xno && enable_libtool_lock=yes + +# Some flags need to be propagated to the compiler or linker for good +# libtool support. +case $host in +*-*-irix6*) + # Find out which ABI we are using. + echo '[#]line __oline__ "configure"' > conftest.$ac_ext + if AC_TRY_EVAL(ac_compile); then + case `/usr/bin/file conftest.$ac_objext` in + *32-bit*) + LD="${LD-ld} -32" + ;; + *N32*) + LD="${LD-ld} -n32" + ;; + *64-bit*) + LD="${LD-ld} -64" + ;; + esac + fi + rm -rf conftest* + ;; + +*-*-sco3.2v5*) + # On SCO OpenServer 5, we need -belf to get full-featured binaries. + SAVE_CFLAGS="$CFLAGS" + CFLAGS="$CFLAGS -belf" + AC_CACHE_CHECK([whether the C compiler needs -belf], lt_cv_cc_needs_belf, + [AC_LANG_SAVE + AC_LANG_C + AC_TRY_LINK([],[],[lt_cv_cc_needs_belf=yes],[lt_cv_cc_needs_belf=no]) + AC_LANG_RESTORE]) + if test x"$lt_cv_cc_needs_belf" != x"yes"; then + # this is probably gcc 2.8.0, egcs 1.0 or newer; no need for -belf + CFLAGS="$SAVE_CFLAGS" + fi + ;; + +ifdef([AC_PROVIDE_AC_LIBTOOL_WIN32_DLL], +[*-*-cygwin* | *-*-mingw* | *-*-pw32*) + AC_CHECK_TOOL(DLLTOOL, dlltool, false) + AC_CHECK_TOOL(AS, as, false) + AC_CHECK_TOOL(OBJDUMP, objdump, false) + + # recent cygwin and mingw systems supply a stub DllMain which the user + # can override, but on older systems we have to supply one + AC_CACHE_CHECK([if libtool should supply DllMain function], lt_cv_need_dllmain, + [AC_TRY_LINK([], + [extern int __attribute__((__stdcall__)) DllMain(void*, int, void*); + DllMain (0, 0, 0);], + [lt_cv_need_dllmain=no],[lt_cv_need_dllmain=yes])]) + + case $host/$CC in + *-*-cygwin*/gcc*-mno-cygwin*|*-*-mingw*) + # old mingw systems require "-dll" to link a DLL, while more recent ones + # require "-mdll" + SAVE_CFLAGS="$CFLAGS" + CFLAGS="$CFLAGS -mdll" + AC_CACHE_CHECK([how to link DLLs], lt_cv_cc_dll_switch, + [AC_TRY_LINK([], [], [lt_cv_cc_dll_switch=-mdll],[lt_cv_cc_dll_switch=-dll])]) + CFLAGS="$SAVE_CFLAGS" ;; + *-*-cygwin* | *-*-pw32*) + # cygwin systems need to pass --dll to the linker, and not link + # crt.o which will require a WinMain@16 definition. 
+ lt_cv_cc_dll_switch="-Wl,--dll -nostartfiles" ;; + esac + ;; + ]) +esac + +_LT_AC_LTCONFIG_HACK + +]) + +# AC_LIBTOOL_HEADER_ASSERT +# ------------------------ +AC_DEFUN([AC_LIBTOOL_HEADER_ASSERT], +[AC_CACHE_CHECK([whether $CC supports assert without backlinking], + [lt_cv_func_assert_works], + [case $host in + *-*-solaris*) + if test "$GCC" = yes && test "$with_gnu_ld" != yes; then + case `$CC --version 2>/dev/null` in + [[12]].*) lt_cv_func_assert_works=no ;; + *) lt_cv_func_assert_works=yes ;; + esac + fi + ;; + esac]) + +if test "x$lt_cv_func_assert_works" = xyes; then + AC_CHECK_HEADERS(assert.h) +fi +])# AC_LIBTOOL_HEADER_ASSERT + +# _LT_AC_CHECK_DLFCN +# -------------------- +AC_DEFUN([_LT_AC_CHECK_DLFCN], +[AC_CHECK_HEADERS(dlfcn.h) +])# _LT_AC_CHECK_DLFCN + +# AC_LIBTOOL_SYS_GLOBAL_SYMBOL_PIPE +# --------------------------------- +AC_DEFUN([AC_LIBTOOL_SYS_GLOBAL_SYMBOL_PIPE], +[AC_REQUIRE([AC_CANONICAL_HOST]) +AC_REQUIRE([AC_PROG_NM]) +AC_REQUIRE([AC_OBJEXT]) +# Check for command to grab the raw symbol name followed by C symbol from nm. +AC_MSG_CHECKING([command to parse $NM output]) +AC_CACHE_VAL([lt_cv_sys_global_symbol_pipe], [dnl + +# These are sane defaults that work on at least a few old systems. +# [They come from Ultrix. What could be older than Ultrix?!! ;)] + +# Character class describing NM global symbol codes. +symcode='[[BCDEGRST]]' + +# Regexp to match symbols that can be accessed directly from C. +sympat='\([[_A-Za-z]][[_A-Za-z0-9]]*\)' + +# Transform the above into a raw symbol and a C symbol. +symxfrm='\1 \2\3 \3' + +# Transform an extracted symbol line into a proper C declaration +lt_cv_global_symbol_to_cdecl="sed -n -e 's/^. .* \(.*\)$/extern char \1;/p'" + +# Transform an extracted symbol line into symbol name and symbol address +lt_cv_global_symbol_to_c_name_address="sed -n -e 's/^: \([[^ ]]*\) $/ {\\\"\1\\\", (lt_ptr) 0},/p' -e 's/^$symcode \([[^ ]]*\) \([[^ ]]*\)$/ {\"\2\", (lt_ptr) \&\2},/p'" + +# Define system-specific variables. +case $host_os in +aix*) + symcode='[[BCDT]]' + ;; +cygwin* | mingw* | pw32*) + symcode='[[ABCDGISTW]]' + ;; +hpux*) # Its linker distinguishes data from code symbols + lt_cv_global_symbol_to_cdecl="sed -n -e 's/^T .* \(.*\)$/extern char \1();/p' -e 's/^$symcode* .* \(.*\)$/extern char \1;/p'" + lt_cv_global_symbol_to_c_name_address="sed -n -e 's/^: \([[^ ]]*\) $/ {\\\"\1\\\", (lt_ptr) 0},/p' -e 's/^$symcode* \([[^ ]]*\) \([[^ ]]*\)$/ {\"\2\", (lt_ptr) \&\2},/p'" + ;; +irix*) + symcode='[[BCDEGRST]]' + ;; +solaris* | sysv5*) + symcode='[[BDT]]' + ;; +sysv4) + symcode='[[DFNSTU]]' + ;; +esac + +# Handle CRLF in mingw tool chain +opt_cr= +case $host_os in +mingw*) + opt_cr=`echo 'x\{0,1\}' | tr x '\015'` # option cr in regexp + ;; +esac + +# If we're using GNU nm, then use its standard symbol codes. +if $NM -V 2>&1 | egrep '(GNU|with BFD)' > /dev/null; then + symcode='[[ABCDGISTW]]' +fi + +# Try without a prefix undercore, then with it. +for ac_symprfx in "" "_"; do + + # Write the raw and C identifiers. +lt_cv_sys_global_symbol_pipe="sed -n -e 's/^.*[[ ]]\($symcode$symcode*\)[[ ]][[ ]]*\($ac_symprfx\)$sympat$opt_cr$/$symxfrm/p'" + + # Check to see that the pipe works correctly. + pipe_works=no + rm -f conftest* + cat > conftest.$ac_ext < $nlist) && test -s "$nlist"; then + # Try sorting and uniquifying the output. + if sort "$nlist" | uniq > "$nlist"T; then + mv -f "$nlist"T "$nlist" + else + rm -f "$nlist"T + fi + + # Make sure that we snagged all the symbols we need. 
+ if egrep ' nm_test_var$' "$nlist" >/dev/null; then + if egrep ' nm_test_func$' "$nlist" >/dev/null; then + cat < conftest.$ac_ext +#ifdef __cplusplus +extern "C" { +#endif + +EOF + # Now generate the symbol file. + eval "$lt_cv_global_symbol_to_cdecl"' < "$nlist" >> conftest.$ac_ext' + + cat <> conftest.$ac_ext +#if defined (__STDC__) && __STDC__ +# define lt_ptr void * +#else +# define lt_ptr char * +# define const +#endif + +/* The mapping between symbol names and symbols. */ +const struct { + const char *name; + lt_ptr address; +} +lt_preloaded_symbols[[]] = +{ +EOF + sed "s/^$symcode$symcode* \(.*\) \(.*\)$/ {\"\2\", (lt_ptr) \&\2},/" < "$nlist" >> conftest.$ac_ext + cat <<\EOF >> conftest.$ac_ext + {0, (lt_ptr) 0} +}; + +#ifdef __cplusplus +} +#endif +EOF + # Now try linking the two files. + mv conftest.$ac_objext conftstm.$ac_objext + save_LIBS="$LIBS" + save_CFLAGS="$CFLAGS" + LIBS="conftstm.$ac_objext" + CFLAGS="$CFLAGS$no_builtin_flag" + if AC_TRY_EVAL(ac_link) && test -s conftest; then + pipe_works=yes + fi + LIBS="$save_LIBS" + CFLAGS="$save_CFLAGS" + else + echo "cannot find nm_test_func in $nlist" >&AC_FD_CC + fi + else + echo "cannot find nm_test_var in $nlist" >&AC_FD_CC + fi + else + echo "cannot run $lt_cv_sys_global_symbol_pipe" >&AC_FD_CC + fi + else + echo "$progname: failed program was:" >&AC_FD_CC + cat conftest.$ac_ext >&5 + fi + rm -f conftest* conftst* + + # Do not use the global_symbol_pipe unless it works. + if test "$pipe_works" = yes; then + break + else + lt_cv_sys_global_symbol_pipe= + fi +done +]) +global_symbol_pipe="$lt_cv_sys_global_symbol_pipe" +if test -z "$lt_cv_sys_global_symbol_pipe"; then + global_symbol_to_cdecl= + global_symbol_to_c_name_address= +else + global_symbol_to_cdecl="$lt_cv_global_symbol_to_cdecl" + global_symbol_to_c_name_address="$lt_cv_global_symbol_to_c_name_address" +fi +if test -z "$global_symbol_pipe$global_symbol_to_cdec$global_symbol_to_c_name_address"; +then + AC_MSG_RESULT(failed) +else + AC_MSG_RESULT(ok) +fi +]) # AC_LIBTOOL_SYS_GLOBAL_SYMBOL_PIPE + +# _LT_AC_LIBTOOL_SYS_PATH_SEPARATOR +# --------------------------------- +AC_DEFUN([_LT_AC_LIBTOOL_SYS_PATH_SEPARATOR], +[# Find the correct PATH separator. Usually this is `:', but +# DJGPP uses `;' like DOS. +if test "X${PATH_SEPARATOR+set}" != Xset; then + UNAME=${UNAME-`uname 2>/dev/null`} + case X$UNAME in + *-DOS) lt_cv_sys_path_separator=';' ;; + *) lt_cv_sys_path_separator=':' ;; + esac + PATH_SEPARATOR=$lt_cv_sys_path_separator +fi +])# _LT_AC_LIBTOOL_SYS_PATH_SEPARATOR + +# _LT_AC_PROG_ECHO_BACKSLASH +# -------------------------- +# Add some code to the start of the generated configure script which +# will find an echo command which doesn't interpret backslashes. +AC_DEFUN([_LT_AC_PROG_ECHO_BACKSLASH], +[ifdef([AC_DIVERSION_NOTICE], [AC_DIVERT_PUSH(AC_DIVERSION_NOTICE)], + [AC_DIVERT_PUSH(NOTICE)]) +_LT_AC_LIBTOOL_SYS_PATH_SEPARATOR + +# Check that we are running under the correct shell. +SHELL=${CONFIG_SHELL-/bin/sh} + +case X$ECHO in +X*--fallback-echo) + # Remove one level of quotation (which was required for Make). + ECHO=`echo "$ECHO" | sed 's,\\\\\[$]\\[$]0,'[$]0','` + ;; +esac + +echo=${ECHO-echo} +if test "X[$]1" = X--no-reexec; then + # Discard the --no-reexec flag, and continue. + shift +elif test "X[$]1" = X--fallback-echo; then + # Avoid inline document here, it may be left over + : +elif test "X`($echo '\t') 2>/dev/null`" = 'X\t'; then + # Yippee, $echo works! + : +else + # Restart under the correct shell. 
+  exec $SHELL "[$]0" --no-reexec ${1+"[$]@"}
+fi
+
+if test "X[$]1" = X--fallback-echo; then
+  # used as fallback echo
+  shift
+  cat <<EOF
+[$]*
+EOF
+  exit 0
+fi
+
+# The HP-UX ksh and POSIX shell print the target directory to stdout
+# if CDPATH is set.
+if test "X${CDPATH+set}" = Xset; then CDPATH=:; export CDPATH; fi
+
+if test -z "$ECHO"; then
+if test "X${echo_test_string+set}" != Xset; then
+# find a string as large as possible, as long as the shell can cope with it
+  for cmd in 'sed 50q "[$]0"' 'sed 20q "[$]0"' 'sed 10q "[$]0"' 'sed 2q "[$]0"' 'echo test'; do
+    # expected sizes: less than 2Kb, 1Kb, 512 bytes, 16 bytes, ...
+    if (echo_test_string="`eval $cmd`") 2>/dev/null &&
+       echo_test_string="`eval $cmd`" &&
+       (test "X$echo_test_string" = "X$echo_test_string") 2>/dev/null
+    then
+      break
+    fi
+  done
+fi
+
+if test "X`($echo '\t') 2>/dev/null`" = 'X\t' &&
+   echo_testing_string=`($echo "$echo_test_string") 2>/dev/null` &&
+   test "X$echo_testing_string" = "X$echo_test_string"; then
+  :
+else
+  # The Solaris, AIX, and Digital Unix default echo programs unquote
+  # backslashes. This makes it impossible to quote backslashes using
+  #   echo "$something" | sed 's/\\/\\\\/g'
+  #
+  # So, first we look for a working echo in the user's PATH.
+
+  IFS="${IFS= }"; save_ifs="$IFS"; IFS=$PATH_SEPARATOR
+  for dir in $PATH /usr/ucb; do
+    if (test -f $dir/echo || test -f $dir/echo$ac_exeext) &&
+       test "X`($dir/echo '\t') 2>/dev/null`" = 'X\t' &&
+       echo_testing_string=`($dir/echo "$echo_test_string") 2>/dev/null` &&
+       test "X$echo_testing_string" = "X$echo_test_string"; then
+      echo="$dir/echo"
+      break
+    fi
+  done
+  IFS="$save_ifs"
+
+  if test "X$echo" = Xecho; then
+    # We didn't find a better echo, so look for alternatives.
+    if test "X`(print -r '\t') 2>/dev/null`" = 'X\t' &&
+       echo_testing_string=`(print -r "$echo_test_string") 2>/dev/null` &&
+       test "X$echo_testing_string" = "X$echo_test_string"; then
+      # This shell has a builtin print -r that does the trick.
+      echo='print -r'
+    elif (test -f /bin/ksh || test -f /bin/ksh$ac_exeext) &&
+         test "X$CONFIG_SHELL" != X/bin/ksh; then
+      # If we have ksh, try running configure again with it.
+      ORIGINAL_CONFIG_SHELL=${CONFIG_SHELL-/bin/sh}
+      export ORIGINAL_CONFIG_SHELL
+      CONFIG_SHELL=/bin/ksh
+      export CONFIG_SHELL
+      exec $CONFIG_SHELL "[$]0" --no-reexec ${1+"[$]@"}
+    else
+      # Try using printf.
+      echo='printf %s\n'
+      if test "X`($echo '\t') 2>/dev/null`" = 'X\t' &&
+         echo_testing_string=`($echo "$echo_test_string") 2>/dev/null` &&
+         test "X$echo_testing_string" = "X$echo_test_string"; then
+        # Cool, printf works
+        :
+      elif echo_testing_string=`($ORIGINAL_CONFIG_SHELL "[$]0" --fallback-echo '\t') 2>/dev/null` &&
+           test "X$echo_testing_string" = 'X\t' &&
+           echo_testing_string=`($ORIGINAL_CONFIG_SHELL "[$]0" --fallback-echo "$echo_test_string") 2>/dev/null` &&
+           test "X$echo_testing_string" = "X$echo_test_string"; then
+        CONFIG_SHELL=$ORIGINAL_CONFIG_SHELL
+        export CONFIG_SHELL
+        SHELL="$CONFIG_SHELL"
+        export SHELL
+        echo="$CONFIG_SHELL [$]0 --fallback-echo"
+      elif echo_testing_string=`($CONFIG_SHELL "[$]0" --fallback-echo '\t') 2>/dev/null` &&
+           test "X$echo_testing_string" = 'X\t' &&
+           echo_testing_string=`($CONFIG_SHELL "[$]0" --fallback-echo "$echo_test_string") 2>/dev/null` &&
+           test "X$echo_testing_string" = "X$echo_test_string"; then
+        echo="$CONFIG_SHELL [$]0 --fallback-echo"
+      else
+        # maybe with a smaller string...
+        prev=:
+
+        for cmd in 'echo test' 'sed 2q "[$]0"' 'sed 10q "[$]0"' 'sed 20q "[$]0"' 'sed 50q "[$]0"'; do
+          if (test "X$echo_test_string" = "X`eval $cmd`") 2>/dev/null
+          then
+            break
+          fi
+          prev="$cmd"
+        done
+
+        if test "$prev" != 'sed 50q "[$]0"'; then
+          echo_test_string=`eval $prev`
+          export echo_test_string
+          exec ${ORIGINAL_CONFIG_SHELL-${CONFIG_SHELL-/bin/sh}} "[$]0" ${1+"[$]@"}
+        else
+          # Oops. We lost completely, so just stick with echo.
+          echo=echo
+        fi
+      fi
+    fi
+  fi
+fi
+fi
+
+# Copy echo and quote the copy suitably for passing to libtool from
+# the Makefile, instead of quoting the original, which is used later.
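The search above is needed because the stock `echo` on some systems (Solaris, AIX, Digital Unix, per the comment) expands backslash escapes and would mangle the quoted command strings passed through `$echo`. A condensed sketch of the same probe, assuming a POSIX shell; `print -r` and `printf %s\n` are the same fallbacks the script itself tries, and the test string `\t` must come back as the two literal characters backslash and t:

    # Find an echo-like command that prints '\t' unmolested.
    for candidate in 'echo' 'print -r' 'printf %s\n'; do
      result=`($candidate '\t') 2>/dev/null`
      if test "X$result" = 'X\t'; then
        echo "usable echo replacement: $candidate"
        break
      fi
    done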
+ECHO=$echo
+if test "X$ECHO" = "X$CONFIG_SHELL [$]0 --fallback-echo"; then
+  ECHO="$CONFIG_SHELL \\\$\[$]0 --fallback-echo"
+fi
+
+AC_SUBST(ECHO)
+AC_DIVERT_POP
+])# _LT_AC_PROG_ECHO_BACKSLASH
+
+# _LT_AC_TRY_DLOPEN_SELF (ACTION-IF-TRUE, ACTION-IF-TRUE-W-USCORE,
+# ACTION-IF-FALSE, ACTION-IF-CROSS-COMPILING)
+# ------------------------------------------------------------------
+AC_DEFUN([_LT_AC_TRY_DLOPEN_SELF],
+[if test "$cross_compiling" = yes; then :
+  [$4]
+else
+  AC_REQUIRE([_LT_AC_CHECK_DLFCN])dnl
+  lt_dlunknown=0; lt_dlno_uscore=1; lt_dlneed_uscore=2
+  lt_status=$lt_dlunknown
+  cat > conftest.$ac_ext <<EOF
+[#line __oline__ "configure"
+#include "confdefs.h"
+
+#if HAVE_DLFCN_H
+#include <dlfcn.h>
+#endif
+
+#include <stdio.h>
+
+#ifdef RTLD_GLOBAL
+# define LT_DLGLOBAL RTLD_GLOBAL
+#else
+# ifdef DL_GLOBAL
+#  define LT_DLGLOBAL DL_GLOBAL
+# else
+#  define LT_DLGLOBAL 0
+# endif
+#endif
+
+/* We may have to define LT_DLLAZY_OR_NOW in the command line if we
+   find out it does not work in some platform. */
+#ifndef LT_DLLAZY_OR_NOW
+# ifdef RTLD_LAZY
+#  define LT_DLLAZY_OR_NOW RTLD_LAZY
+# else
+#  ifdef DL_LAZY
+#   define LT_DLLAZY_OR_NOW DL_LAZY
+#  else
+#   ifdef RTLD_NOW
+#    define LT_DLLAZY_OR_NOW RTLD_NOW
+#   else
+#    ifdef DL_NOW
+#     define LT_DLLAZY_OR_NOW DL_NOW
+#    else
+#     define LT_DLLAZY_OR_NOW 0
+#    endif
+#   endif
+#  endif
+# endif
+#endif
+
+#ifdef __cplusplus
+extern "C" void exit (int);
+#endif
+
+void fnord() { int i=42;}
+int main ()
+{
+  void *self = dlopen (0, LT_DLGLOBAL|LT_DLLAZY_OR_NOW);
+  int status = $lt_dlunknown;
+
+  if (self)
+    {
+      if (dlsym (self,"fnord")) status = $lt_dlno_uscore;
+      else if (dlsym( self,"_fnord")) status = $lt_dlneed_uscore;
+      /* dlclose (self); */
+    }
+
+  exit (status);
+}]
+EOF
+  if AC_TRY_EVAL(ac_link) && test -s conftest${ac_exeext} 2>/dev/null; then
+    (./conftest; exit; ) 2>/dev/null
+    lt_status=$?
+    case x$lt_status in
+      x$lt_dlno_uscore) $1 ;;
+      x$lt_dlneed_uscore) $2 ;;
+      x$lt_unknown|x*) $3 ;;
+    esac
+  else :
+    # compilation failed
+    $3
+  fi
+fi
+rm -fr conftest*
+])# _LT_AC_TRY_DLOPEN_SELF
+
+# AC_LIBTOOL_DLOPEN_SELF
+# -------------------
+AC_DEFUN([AC_LIBTOOL_DLOPEN_SELF],
+[if test "x$enable_dlopen" != xyes; then
+  enable_dlopen=unknown
+  enable_dlopen_self=unknown
+  enable_dlopen_self_static=unknown
+else
+  lt_cv_dlopen=no
+  lt_cv_dlopen_libs=
+
+  case $host_os in
+  beos*)
+    lt_cv_dlopen="load_add_on"
+    lt_cv_dlopen_libs=
+    lt_cv_dlopen_self=yes
+    ;;
+
+  cygwin* | mingw* | pw32*)
+    lt_cv_dlopen="LoadLibrary"
+    lt_cv_dlopen_libs=
+    ;;
+
+  *)
+    AC_CHECK_FUNC([shl_load],
+      [lt_cv_dlopen="shl_load"],
+      [AC_CHECK_LIB([dld], [shl_load],
+        [lt_cv_dlopen="shl_load" lt_cv_dlopen_libs="-dld"],
+        [AC_CHECK_FUNC([dlopen],
+          [lt_cv_dlopen="dlopen"],
+          [AC_CHECK_LIB([dl], [dlopen],
+            [lt_cv_dlopen="dlopen" lt_cv_dlopen_libs="-ldl"],
+            [AC_CHECK_LIB([svld], [dlopen],
+              [lt_cv_dlopen="dlopen" lt_cv_dlopen_libs="-lsvld"],
+              [AC_CHECK_LIB([dld], [dld_link],
+                [lt_cv_dlopen="dld_link" lt_cv_dlopen_libs="-dld"])
+              ])
+            ])
+          ])
+        ])
+      ])
+    ;;
+  esac
+
+  if test "x$lt_cv_dlopen" != xno; then
+    enable_dlopen=yes
+  else
+    enable_dlopen=no
+  fi
+
+  case $lt_cv_dlopen in
+  dlopen)
+    save_CPPFLAGS="$CPPFLAGS"
+    AC_REQUIRE([_LT_AC_CHECK_DLFCN])dnl
+    test "x$ac_cv_header_dlfcn_h" = xyes && CPPFLAGS="$CPPFLAGS -DHAVE_DLFCN_H"
+
+    save_LDFLAGS="$LDFLAGS"
+    eval LDFLAGS=\"\$LDFLAGS $export_dynamic_flag_spec\"
+
+    save_LIBS="$LIBS"
+    LIBS="$lt_cv_dlopen_libs $LIBS"
+
+    AC_CACHE_CHECK([whether a program can dlopen itself],
+      lt_cv_dlopen_self, [dnl
+      _LT_AC_TRY_DLOPEN_SELF(
+        lt_cv_dlopen_self=yes, lt_cv_dlopen_self=yes,
+        lt_cv_dlopen_self=no, lt_cv_dlopen_self=cross)
+
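The embedded test program dlopens the running binary and looks up `fnord` both with and without a leading underscore, because a.out-style toolchains decorate C symbols with `_`. A quick way to see which convention a given toolchain uses, assuming `cc` and `nm` are on PATH (the name `fnord` and the scratch file names simply mirror the program above):

    printf 'void fnord(){}\n' > conftest.c
    cc -c conftest.c
    if nm conftest.o | grep ' _fnord$' >/dev/null; then
      echo 'symbols get a leading underscore: dlsym() must ask for "_fnord"'
    else
      echo 'no leading underscore: dlsym() can look up "fnord" directly'
    fi
    rm -f conftest.c conftest.o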
]) + + if test "x$lt_cv_dlopen_self" = xyes; then + LDFLAGS="$LDFLAGS $link_static_flag" + AC_CACHE_CHECK([whether a statically linked program can dlopen itself], + lt_cv_dlopen_self_static, [dnl + _LT_AC_TRY_DLOPEN_SELF( + lt_cv_dlopen_self_static=yes, lt_cv_dlopen_self_static=yes, + lt_cv_dlopen_self_static=no, lt_cv_dlopen_self_static=cross) + ]) + fi + + CPPFLAGS="$save_CPPFLAGS" + LDFLAGS="$save_LDFLAGS" + LIBS="$save_LIBS" + ;; + esac + + case $lt_cv_dlopen_self in + yes|no) enable_dlopen_self=$lt_cv_dlopen_self ;; + *) enable_dlopen_self=unknown ;; + esac + + case $lt_cv_dlopen_self_static in + yes|no) enable_dlopen_self_static=$lt_cv_dlopen_self_static ;; + *) enable_dlopen_self_static=unknown ;; + esac +fi +])# AC_LIBTOOL_DLOPEN_SELF + +AC_DEFUN([_LT_AC_LTCONFIG_HACK], +[AC_REQUIRE([AC_LIBTOOL_SYS_GLOBAL_SYMBOL_PIPE])dnl +# Sed substitution that helps us do robust quoting. It backslashifies +# metacharacters that are still active within double-quoted strings. +Xsed='sed -e s/^X//' +sed_quote_subst='s/\([[\\"\\`$\\\\]]\)/\\\1/g' + +# Same as above, but do not quote variable references. +double_quote_subst='s/\([[\\"\\`\\\\]]\)/\\\1/g' + +# Sed substitution to delay expansion of an escaped shell variable in a +# double_quote_subst'ed string. +delay_variable_subst='s/\\\\\\\\\\\$/\\\\\\$/g' + +# Constants: +rm="rm -f" + +# Global variables: +default_ofile=libtool +can_build_shared=yes + +# All known linkers require a `.a' archive for static linking (except M$VC, +# which needs '.lib'). +libext=a +ltmain="$ac_aux_dir/ltmain.sh" +ofile="$default_ofile" +with_gnu_ld="$lt_cv_prog_gnu_ld" +need_locks="$enable_libtool_lock" + +old_CC="$CC" +old_CFLAGS="$CFLAGS" + +# Set sane defaults for various variables +test -z "$AR" && AR=ar +test -z "$AR_FLAGS" && AR_FLAGS=cru +test -z "$AS" && AS=as +test -z "$CC" && CC=cc +test -z "$DLLTOOL" && DLLTOOL=dlltool +test -z "$LD" && LD=ld +test -z "$LN_S" && LN_S="ln -s" +test -z "$MAGIC_CMD" && MAGIC_CMD=file +test -z "$NM" && NM=nm +test -z "$OBJDUMP" && OBJDUMP=objdump +test -z "$RANLIB" && RANLIB=: +test -z "$STRIP" && STRIP=: +test -z "$ac_objext" && ac_objext=o + +if test x"$host" != x"$build"; then + ac_tool_prefix=${host_alias}- +else + ac_tool_prefix= +fi + +# Transform linux* to *-*-linux-gnu*, to support old configure scripts. +case $host_os in +linux-gnu*) ;; +linux*) host=`echo $host | sed 's/^\(.*-.*-linux\)\(.*\)$/\1-gnu\2/'` +esac + +case $host_os in +aix3*) + # AIX sometimes has problems with the GCC collect2 program. For some + # reason, if we set the COLLECT_NAMES environment variable, the problems + # vanish in a puff of smoke. + if test "X${COLLECT_NAMES+set}" != Xset; then + COLLECT_NAMES= + export COLLECT_NAMES + fi + ;; +esac + +# Determine commands to create old-style static archives. +old_archive_cmds='$AR $AR_FLAGS $oldlib$oldobjs$old_deplibs' +old_postinstall_cmds='chmod 644 $oldlib' +old_postuninstall_cmds= + +if test -n "$RANLIB"; then + case $host_os in + openbsd*) + old_postinstall_cmds="\$RANLIB -t \$oldlib~$old_postinstall_cmds" + ;; + *) + old_postinstall_cmds="\$RANLIB \$oldlib~$old_postinstall_cmds" + ;; + esac + old_archive_cmds="$old_archive_cmds~\$RANLIB \$oldlib" +fi + +# Allow CC to be a program name with arguments. +set dummy $CC +compiler="[$]2" + +## FIXME: this should be a separate macro +## +AC_MSG_CHECKING([for objdir]) +rm -f .libs 2>/dev/null +mkdir .libs 2>/dev/null +if test -d .libs; then + objdir=.libs +else + # MS-DOS does not allow filenames that begin with a dot. 
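With the defaults chosen above (AR=ar, AR_FLAGS=cru, and RANLIB when one is found), the old_archive_cmds and old_postinstall_cmds templates boil down to the traditional ar/ranlib sequence. A self-contained sketch, using made-up sources a.c and b.c and the made-up archive name libfoo.a:

    printf 'int a(void){return 1;}\n' > a.c && cc -c a.c
    printf 'int b(void){return 2;}\n' > b.c && cc -c b.c
    ar cru libfoo.a a.o b.o   # old_archive_cmds: create/update the static archive
    ranlib libfoo.a           # appended when $RANLIB is set (":" is used when it is not)
    chmod 644 libfoo.a        # old_postinstall_cmds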
+ objdir=_libs +fi +rmdir .libs 2>/dev/null +AC_MSG_RESULT($objdir) +## +## END FIXME + + +## FIXME: this should be a separate macro +## +AC_ARG_WITH(pic, +[ --with-pic try to use only PIC/non-PIC objects [default=use both]], +pic_mode="$withval", pic_mode=default) +test -z "$pic_mode" && pic_mode=default + +# We assume here that the value for lt_cv_prog_cc_pic will not be cached +# in isolation, and that seeing it set (from the cache) indicates that +# the associated values are set (in the cache) correctly too. +AC_MSG_CHECKING([for $compiler option to produce PIC]) +AC_CACHE_VAL(lt_cv_prog_cc_pic, +[ lt_cv_prog_cc_pic= + lt_cv_prog_cc_shlib= + lt_cv_prog_cc_wl= + lt_cv_prog_cc_static= + lt_cv_prog_cc_no_builtin= + lt_cv_prog_cc_can_build_shared=$can_build_shared + + if test "$GCC" = yes; then + lt_cv_prog_cc_wl='-Wl,' + lt_cv_prog_cc_static='-static' + + case $host_os in + aix*) + # Below there is a dirty hack to force normal static linking with -ldl + # The problem is because libdl dynamically linked with both libc and + # libC (AIX C++ library), which obviously doesn't included in libraries + # list by gcc. This cause undefined symbols with -static flags. + # This hack allows C programs to be linked with "-static -ldl", but + # not sure about C++ programs. + lt_cv_prog_cc_static="$lt_cv_prog_cc_static ${lt_cv_prog_cc_wl}-lC" + ;; + amigaos*) + # FIXME: we need at least 68020 code to build shared libraries, but + # adding the `-m68020' flag to GCC prevents building anything better, + # like `-m68040'. + lt_cv_prog_cc_pic='-m68020 -resident32 -malways-restore-a4' + ;; + beos* | irix5* | irix6* | osf3* | osf4* | osf5*) + # PIC is the default for these OSes. + ;; + darwin* | rhapsody*) + # PIC is the default on this platform + # Common symbols not allowed in MH_DYLIB files + lt_cv_prog_cc_pic='-fno-common' + ;; + cygwin* | mingw* | pw32* | os2*) + # This hack is so that the source file can tell whether it is being + # built for inclusion in a dll (and should export symbols for example). + lt_cv_prog_cc_pic='-DDLL_EXPORT' + ;; + sysv4*MP*) + if test -d /usr/nec; then + lt_cv_prog_cc_pic=-Kconform_pic + fi + ;; + *) + lt_cv_prog_cc_pic='-fPIC' + ;; + esac + else + # PORTME Check for PIC flags for the system compiler. + case $host_os in + aix3* | aix4* | aix5*) + lt_cv_prog_cc_wl='-Wl,' + # All AIX code is PIC. + if test "$host_cpu" = ia64; then + # AIX 5 now supports IA64 processor + lt_cv_prog_cc_static='-Bstatic' + else + lt_cv_prog_cc_static='-bnso -bI:/lib/syscalls.exp' + fi + ;; + + hpux9* | hpux10* | hpux11*) + # Is there a better lt_cv_prog_cc_static that works with the bundled CC? + lt_cv_prog_cc_wl='-Wl,' + lt_cv_prog_cc_static="${lt_cv_prog_cc_wl}-a ${lt_cv_prog_cc_wl}archive" + lt_cv_prog_cc_pic='+Z' + ;; + + irix5* | irix6*) + lt_cv_prog_cc_wl='-Wl,' + lt_cv_prog_cc_static='-non_shared' + # PIC (with -KPIC) is the default. + ;; + + cygwin* | mingw* | pw32* | os2*) + # This hack is so that the source file can tell whether it is being + # built for inclusion in a dll (and should export symbols for example). + lt_cv_prog_cc_pic='-DDLL_EXPORT' + ;; + + newsos6) + lt_cv_prog_cc_pic='-KPIC' + lt_cv_prog_cc_static='-Bstatic' + ;; + + osf3* | osf4* | osf5*) + # All OSF/1 code is PIC. 
+ lt_cv_prog_cc_wl='-Wl,' + lt_cv_prog_cc_static='-non_shared' + ;; + + sco3.2v5*) + lt_cv_prog_cc_pic='-Kpic' + lt_cv_prog_cc_static='-dn' + lt_cv_prog_cc_shlib='-belf' + ;; + + solaris*) + lt_cv_prog_cc_pic='-KPIC' + lt_cv_prog_cc_static='-Bstatic' + lt_cv_prog_cc_wl='-Wl,' + ;; + + sunos4*) + lt_cv_prog_cc_pic='-PIC' + lt_cv_prog_cc_static='-Bstatic' + lt_cv_prog_cc_wl='-Qoption ld ' + ;; + + sysv4 | sysv4.2uw2* | sysv4.3* | sysv5*) + lt_cv_prog_cc_pic='-KPIC' + lt_cv_prog_cc_static='-Bstatic' + if test "x$host_vendor" = xsni; then + lt_cv_prog_cc_wl='-LD' + else + lt_cv_prog_cc_wl='-Wl,' + fi + ;; + + uts4*) + lt_cv_prog_cc_pic='-pic' + lt_cv_prog_cc_static='-Bstatic' + ;; + + sysv4*MP*) + if test -d /usr/nec ;then + lt_cv_prog_cc_pic='-Kconform_pic' + lt_cv_prog_cc_static='-Bstatic' + fi + ;; + + *) + lt_cv_prog_cc_can_build_shared=no + ;; + esac + fi +]) +if test -z "$lt_cv_prog_cc_pic"; then + AC_MSG_RESULT([none]) +else + AC_MSG_RESULT([$lt_cv_prog_cc_pic]) + + # Check to make sure the pic_flag actually works. + AC_MSG_CHECKING([if $compiler PIC flag $lt_cv_prog_cc_pic works]) + AC_CACHE_VAL(lt_cv_prog_cc_pic_works, [dnl + save_CFLAGS="$CFLAGS" + CFLAGS="$CFLAGS $lt_cv_prog_cc_pic -DPIC" + AC_TRY_COMPILE([], [], [dnl + case $host_os in + hpux9* | hpux10* | hpux11*) + # On HP-UX, both CC and GCC only warn that PIC is supported... then + # they create non-PIC objects. So, if there were any warnings, we + # assume that PIC is not supported. + if test -s conftest.err; then + lt_cv_prog_cc_pic_works=no + else + lt_cv_prog_cc_pic_works=yes + fi + ;; + *) + lt_cv_prog_cc_pic_works=yes + ;; + esac + ], [dnl + lt_cv_prog_cc_pic_works=no + ]) + CFLAGS="$save_CFLAGS" + ]) + + if test "X$lt_cv_prog_cc_pic_works" = Xno; then + lt_cv_prog_cc_pic= + lt_cv_prog_cc_can_build_shared=no + else + lt_cv_prog_cc_pic=" $lt_cv_prog_cc_pic" + fi + + AC_MSG_RESULT([$lt_cv_prog_cc_pic_works]) +fi +## +## END FIXME + +# Check for any special shared library compilation flags. 
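The cache check that follows verifies the chosen PIC flag by actually compiling a trivial file with it, and on HP-UX any warning is treated as failure because the compilers there only warn about an unsupported option. Run by hand, the probe amounts to roughly this (assuming a gcc-like `cc` and the -fPIC/-DPIC flags from the GCC branch above; the warning check mirrors the HP-UX special case):

    echo 'int some_variable = 0;' > conftest.c
    if cc -fPIC -DPIC -c conftest.c 2>conftest.err && test ! -s conftest.err; then
      echo "PIC flag accepted cleanly"
    else
      echo "PIC flag rejected, or accepted only with a warning"
    fi
    rm -f conftest.c conftest.o conftest.err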
+if test -n "$lt_cv_prog_cc_shlib"; then + AC_MSG_WARN([\`$CC' requires \`$lt_cv_prog_cc_shlib' to build shared libraries]) + if echo "$old_CC $old_CFLAGS " | egrep -e "[[ ]]$lt_cv_prog_cc_shlib[[ ]]" >/dev/null; then : + else + AC_MSG_WARN([add \`$lt_cv_prog_cc_shlib' to the CC or CFLAGS env variable and reconfigure]) + lt_cv_prog_cc_can_build_shared=no + fi +fi + +## FIXME: this should be a separate macro +## +AC_MSG_CHECKING([if $compiler static flag $lt_cv_prog_cc_static works]) +AC_CACHE_VAL([lt_cv_prog_cc_static_works], [dnl + lt_cv_prog_cc_static_works=no + save_LDFLAGS="$LDFLAGS" + LDFLAGS="$LDFLAGS $lt_cv_prog_cc_static" + AC_TRY_LINK([], [], [lt_cv_prog_cc_static_works=yes]) + LDFLAGS="$save_LDFLAGS" +]) + +# Belt *and* braces to stop my trousers falling down: +test "X$lt_cv_prog_cc_static_works" = Xno && lt_cv_prog_cc_static= +AC_MSG_RESULT([$lt_cv_prog_cc_static_works]) + +pic_flag="$lt_cv_prog_cc_pic" +special_shlib_compile_flags="$lt_cv_prog_cc_shlib" +wl="$lt_cv_prog_cc_wl" +link_static_flag="$lt_cv_prog_cc_static" +no_builtin_flag="$lt_cv_prog_cc_no_builtin" +can_build_shared="$lt_cv_prog_cc_can_build_shared" +## +## END FIXME + + +## FIXME: this should be a separate macro +## +# Check to see if options -o and -c are simultaneously supported by compiler +AC_MSG_CHECKING([if $compiler supports -c -o file.$ac_objext]) +AC_CACHE_VAL([lt_cv_compiler_c_o], [ +$rm -r conftest 2>/dev/null +mkdir conftest +cd conftest +echo "int some_variable = 0;" > conftest.$ac_ext +mkdir out +# According to Tom Tromey, Ian Lance Taylor reported there are C compilers +# that will create temporary files in the current directory regardless of +# the output directory. Thus, making CWD read-only will cause this test +# to fail, enabling locking or at least warning the user not to do parallel +# builds. +chmod -w . +save_CFLAGS="$CFLAGS" +CFLAGS="$CFLAGS -o out/conftest2.$ac_objext" +compiler_c_o=no +if { (eval echo configure:__oline__: \"$ac_compile\") 1>&5; (eval $ac_compile) 2>out/conftest.err; } && test -s out/conftest2.$ac_objext; then + # The compiler can only warn and ignore the option if not recognized + # So say no if there are warnings + if test -s out/conftest.err; then + lt_cv_compiler_c_o=no + else + lt_cv_compiler_c_o=yes + fi +else + # Append any errors to the config.log. + cat out/conftest.err 1>&AC_FD_CC + lt_cv_compiler_c_o=no +fi +CFLAGS="$save_CFLAGS" +chmod u+w . +$rm conftest* out/* +rmdir out +cd .. 
+rmdir conftest +$rm -r conftest 2>/dev/null +]) +compiler_c_o=$lt_cv_compiler_c_o +AC_MSG_RESULT([$compiler_c_o]) + +if test x"$compiler_c_o" = x"yes"; then + # Check to see if we can write to a .lo + AC_MSG_CHECKING([if $compiler supports -c -o file.lo]) + AC_CACHE_VAL([lt_cv_compiler_o_lo], [ + lt_cv_compiler_o_lo=no + save_CFLAGS="$CFLAGS" + CFLAGS="$CFLAGS -c -o conftest.lo" + save_objext="$ac_objext" + ac_objext=lo + AC_TRY_COMPILE([], [int some_variable = 0;], [dnl + # The compiler can only warn and ignore the option if not recognized + # So say no if there are warnings + if test -s conftest.err; then + lt_cv_compiler_o_lo=no + else + lt_cv_compiler_o_lo=yes + fi + ]) + ac_objext="$save_objext" + CFLAGS="$save_CFLAGS" + ]) + compiler_o_lo=$lt_cv_compiler_o_lo + AC_MSG_RESULT([$compiler_o_lo]) +else + compiler_o_lo=no +fi +## +## END FIXME + +## FIXME: this should be a separate macro +## +# Check to see if we can do hard links to lock some files if needed +hard_links="nottested" +if test "$compiler_c_o" = no && test "$need_locks" != no; then + # do not overwrite the value of need_locks provided by the user + AC_MSG_CHECKING([if we can lock with hard links]) + hard_links=yes + $rm conftest* + ln conftest.a conftest.b 2>/dev/null && hard_links=no + touch conftest.a + ln conftest.a conftest.b 2>&5 || hard_links=no + ln conftest.a conftest.b 2>/dev/null && hard_links=no + AC_MSG_RESULT([$hard_links]) + if test "$hard_links" = no; then + AC_MSG_WARN([\`$CC' does not support \`-c -o', so \`make -j' may be unsafe]) + need_locks=warn + fi +else + need_locks=no +fi +## +## END FIXME + +## FIXME: this should be a separate macro +## +if test "$GCC" = yes; then + # Check to see if options -fno-rtti -fno-exceptions are supported by compiler + AC_MSG_CHECKING([if $compiler supports -fno-rtti -fno-exceptions]) + echo "int some_variable = 0;" > conftest.$ac_ext + save_CFLAGS="$CFLAGS" + CFLAGS="$CFLAGS -fno-rtti -fno-exceptions -c conftest.$ac_ext" + compiler_rtti_exceptions=no + AC_TRY_COMPILE([], [int some_variable = 0;], [dnl + # The compiler can only warn and ignore the option if not recognized + # So say no if there are warnings + if test -s conftest.err; then + compiler_rtti_exceptions=no + else + compiler_rtti_exceptions=yes + fi + ]) + CFLAGS="$save_CFLAGS" + AC_MSG_RESULT([$compiler_rtti_exceptions]) + + if test "$compiler_rtti_exceptions" = "yes"; then + no_builtin_flag=' -fno-builtin -fno-rtti -fno-exceptions' + else + no_builtin_flag=' -fno-builtin' + fi +fi +## +## END FIXME + +## FIXME: this should be a separate macro +## +# See if the linker supports building shared libraries. 
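The hard-link probe above matters when the compiler cannot do `-c -o`: parallel compiles then share one temporary object name, and libtool falls back to locking via `ln`. Condensed to its core, and assuming only a writable current directory, the probe is:

    rm -f conftest.a conftest.b
    touch conftest.a
    if ln conftest.a conftest.b 2>/dev/null; then
      echo "hard links work: compile jobs can be serialized with a lock file"
    else
      echo "no hard links: warn that 'make -j' may be unsafe"
    fi
    rm -f conftest.a conftest.b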
+AC_MSG_CHECKING([whether the linker ($LD) supports shared libraries]) + +allow_undefined_flag= +no_undefined_flag= +need_lib_prefix=unknown +need_version=unknown +# when you set need_version to no, make sure it does not cause -set_version +# flags to be left without arguments +archive_cmds= +archive_expsym_cmds= +old_archive_from_new_cmds= +old_archive_from_expsyms_cmds= +export_dynamic_flag_spec= +whole_archive_flag_spec= +thread_safe_flag_spec= +hardcode_into_libs=no +hardcode_libdir_flag_spec= +hardcode_libdir_separator= +hardcode_direct=no +hardcode_minus_L=no +hardcode_shlibpath_var=unsupported +runpath_var= +link_all_deplibs=unknown +always_export_symbols=no +export_symbols_cmds='$NM $libobjs $convenience | $global_symbol_pipe | sed '\''s/.* //'\'' | sort | uniq > $export_symbols' +# include_expsyms should be a list of space-separated symbols to be *always* +# included in the symbol list +include_expsyms= +# exclude_expsyms can be an egrep regular expression of symbols to exclude +# it will be wrapped by ` (' and `)$', so one must not match beginning or +# end of line. Example: `a|bc|.*d.*' will exclude the symbols `a' and `bc', +# as well as any symbol that contains `d'. +exclude_expsyms="_GLOBAL_OFFSET_TABLE_" +# Although _GLOBAL_OFFSET_TABLE_ is a valid symbol C name, most a.out +# platforms (ab)use it in PIC code, but their linkers get confused if +# the symbol is explicitly referenced. Since portable code cannot +# rely on this symbol name, it's probably fine to never include it in +# preloaded symbol tables. +extract_expsyms_cmds= + +case $host_os in +cygwin* | mingw* | pw32*) + # FIXME: the MSVC++ port hasn't been tested in a loooong time + # When not using gcc, we currently assume that we are using + # Microsoft Visual C++. + if test "$GCC" != yes; then + with_gnu_ld=no + fi + ;; +openbsd*) + with_gnu_ld=no + ;; +esac + +ld_shlibs=yes +if test "$with_gnu_ld" = yes; then + # If archive_cmds runs LD, not CC, wlarc should be empty + wlarc='${wl}' + + # See if GNU ld supports shared libraries. + case $host_os in + aix3* | aix4* | aix5*) + # On AIX, the GNU linker is very broken + # Note:Check GNU linker on AIX 5-IA64 when/if it becomes available. + ld_shlibs=no + cat <&2 + +*** Warning: the GNU linker, at least up to release 2.9.1, is reported +*** to be unable to reliably create shared libraries on AIX. +*** Therefore, libtool is disabling shared libraries support. If you +*** really care for shared libraries, you may want to modify your PATH +*** so that a non-GNU linker is found, and then restart. + +EOF + ;; + + amigaos*) + archive_cmds='$rm $output_objdir/a2ixlibrary.data~$echo "#define NAME $libname" > $output_objdir/a2ixlibrary.data~$echo "#define LIBRARY_ID 1" >> $output_objdir/a2ixlibrary.data~$echo "#define VERSION $major" >> $output_objdir/a2ixlibrary.data~$echo "#define REVISION $revision" >> $output_objdir/a2ixlibrary.data~$AR $AR_FLAGS $lib $libobjs~$RANLIB $lib~(cd $output_objdir && a2ixlibrary -32)' + hardcode_libdir_flag_spec='-L$libdir' + hardcode_minus_L=yes + + # Samuel A. Falvo II reports + # that the semantics of dynamic libraries on AmigaOS, at least up + # to version 4, is to share data among multiple programs linked + # with the same dynamic library. Since this doesn't match the + # behavior of shared libraries on other platforms, we can use + # them. 
+ ld_shlibs=no + ;; + + beos*) + if $LD --help 2>&1 | egrep ': supported targets:.* elf' > /dev/null; then + allow_undefined_flag=unsupported + # Joseph Beckenbach says some releases of gcc + # support --undefined. This deserves some investigation. FIXME + archive_cmds='$CC -nostart $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname -o $lib' + else + ld_shlibs=no + fi + ;; + + cygwin* | mingw* | pw32*) + # hardcode_libdir_flag_spec is actually meaningless, as there is + # no search path for DLLs. + hardcode_libdir_flag_spec='-L$libdir' + allow_undefined_flag=unsupported + always_export_symbols=yes + + extract_expsyms_cmds='test -f $output_objdir/impgen.c || \ + sed -e "/^# \/\* impgen\.c starts here \*\//,/^# \/\* impgen.c ends here \*\// { s/^# //;s/^# *$//; p; }" -e d < $''0 > $output_objdir/impgen.c~ + test -f $output_objdir/impgen.exe || (cd $output_objdir && \ + if test "x$HOST_CC" != "x" ; then $HOST_CC -o impgen impgen.c ; \ + else $CC -o impgen impgen.c ; fi)~ + $output_objdir/impgen $dir/$soroot > $output_objdir/$soname-def' + + old_archive_from_expsyms_cmds='$DLLTOOL --as=$AS --dllname $soname --def $output_objdir/$soname-def --output-lib $output_objdir/$newlib' + + # cygwin and mingw dlls have different entry points and sets of symbols + # to exclude. + # FIXME: what about values for MSVC? + dll_entry=__cygwin_dll_entry@12 + dll_exclude_symbols=DllMain@12,_cygwin_dll_entry@12,_cygwin_noncygwin_dll_entry@12~ + case $host_os in + mingw*) + # mingw values + dll_entry=_DllMainCRTStartup@12 + dll_exclude_symbols=DllMain@12,DllMainCRTStartup@12,DllEntryPoint@12~ + ;; + esac + + # mingw and cygwin differ, and it's simplest to just exclude the union + # of the two symbol sets. + dll_exclude_symbols=DllMain@12,_cygwin_dll_entry@12,_cygwin_noncygwin_dll_entry@12,DllMainCRTStartup@12,DllEntryPoint@12 + + # recent cygwin and mingw systems supply a stub DllMain which the user + # can override, but on older systems we have to supply one (in ltdll.c) + if test "x$lt_cv_need_dllmain" = "xyes"; then + ltdll_obj='$output_objdir/$soname-ltdll.'"$ac_objext " + ltdll_cmds='test -f $output_objdir/$soname-ltdll.c || sed -e "/^# \/\* ltdll\.c starts here \*\//,/^# \/\* ltdll.c ends here \*\// { s/^# //; p; }" -e d < $''0 > $output_objdir/$soname-ltdll.c~ + test -f $output_objdir/$soname-ltdll.$ac_objext || (cd $output_objdir && $CC -c $soname-ltdll.c)~' + else + ltdll_obj= + ltdll_cmds= + fi + + # Extract the symbol export list from an `--export-all' def file, + # then regenerate the def file from the symbol export list, so that + # the compiled dll only exports the symbol export list. + # Be careful not to strip the DATA tag left be newer dlltools. + export_symbols_cmds="$ltdll_cmds"' + $DLLTOOL --export-all --exclude-symbols '$dll_exclude_symbols' --output-def $output_objdir/$soname-def '$ltdll_obj'$libobjs $convenience~ + sed -e "1,/EXPORTS/d" -e "s/ @ [[0-9]]*//" -e "s/ *;.*$//" < $output_objdir/$soname-def > $export_symbols' + + # If the export-symbols file already is a .def file (1st line + # is EXPORTS), use it as is. + # If DATA tags from a recent dlltool are present, honour them! 
+ archive_expsym_cmds='if test "x`head -1 $export_symbols`" = xEXPORTS; then + cp $export_symbols $output_objdir/$soname-def; + else + echo EXPORTS > $output_objdir/$soname-def; + _lt_hint=1; + cat $export_symbols | while read symbol; do + set dummy \$symbol; + case \[$]# in + 2) echo " \[$]2 @ \$_lt_hint ; " >> $output_objdir/$soname-def;; + *) echo " \[$]2 @ \$_lt_hint \[$]3 ; " >> $output_objdir/$soname-def;; + esac; + _lt_hint=`expr 1 + \$_lt_hint`; + done; + fi~ + '"$ltdll_cmds"' + $CC -Wl,--base-file,$output_objdir/$soname-base '$lt_cv_cc_dll_switch' -Wl,-e,'$dll_entry' -o $output_objdir/$soname '$ltdll_obj'$libobjs $deplibs $compiler_flags~ + $DLLTOOL --as=$AS --dllname $soname --exclude-symbols '$dll_exclude_symbols' --def $output_objdir/$soname-def --base-file $output_objdir/$soname-base --output-exp $output_objdir/$soname-exp~ + $CC -Wl,--base-file,$output_objdir/$soname-base $output_objdir/$soname-exp '$lt_cv_cc_dll_switch' -Wl,-e,'$dll_entry' -o $output_objdir/$soname '$ltdll_obj'$libobjs $deplibs $compiler_flags~ + $DLLTOOL --as=$AS --dllname $soname --exclude-symbols '$dll_exclude_symbols' --def $output_objdir/$soname-def --base-file $output_objdir/$soname-base --output-exp $output_objdir/$soname-exp --output-lib $output_objdir/$libname.dll.a~ + $CC $output_objdir/$soname-exp '$lt_cv_cc_dll_switch' -Wl,-e,'$dll_entry' -o $output_objdir/$soname '$ltdll_obj'$libobjs $deplibs $compiler_flags' + ;; + + netbsd*) + if echo __ELF__ | $CC -E - | grep __ELF__ >/dev/null; then + archive_cmds='$LD -Bshareable $libobjs $deplibs $linker_flags -o $lib' + wlarc= + else + archive_cmds='$CC -shared -nodefaultlibs $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname -o $lib' + archive_expsym_cmds='$CC -shared -nodefaultlibs $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname ${wl}-retain-symbols-file $wl$export_symbols -o $lib' + fi + ;; + + solaris* | sysv5*) + if $LD -v 2>&1 | egrep 'BFD 2\.8' > /dev/null; then + ld_shlibs=no + cat <&2 + +*** Warning: The releases 2.8.* of the GNU linker cannot reliably +*** create shared libraries on Solaris systems. Therefore, libtool +*** is disabling shared libraries support. We urge you to upgrade GNU +*** binutils to release 2.9.1 or newer. Another option is to modify +*** your PATH or compiler configuration so that the native linker is +*** used, and then restart. + +EOF + elif $LD --help 2>&1 | egrep ': supported targets:.* elf' > /dev/null; then + archive_cmds='$CC -shared $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname -o $lib' + archive_expsym_cmds='$CC -shared $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname ${wl}-retain-symbols-file $wl$export_symbols -o $lib' + else + ld_shlibs=no + fi + ;; + + sunos4*) + archive_cmds='$LD -assert pure-text -Bshareable -o $lib $libobjs $deplibs $linker_flags' + wlarc= + hardcode_direct=yes + hardcode_shlibpath_var=no + ;; + + *) + if $LD --help 2>&1 | egrep ': supported targets:.* elf' > /dev/null; then + archive_cmds='$CC -shared $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname -o $lib' + archive_expsym_cmds='$CC -shared $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname ${wl}-retain-symbols-file $wl$export_symbols -o $lib' + else + ld_shlibs=no + fi + ;; + esac + + if test "$ld_shlibs" = yes; then + runpath_var=LD_RUN_PATH + hardcode_libdir_flag_spec='${wl}--rpath ${wl}$libdir' + export_dynamic_flag_spec='${wl}--export-dynamic' + case $host_os in + cygwin* | mingw* | pw32*) + # dlltool doesn't understand --whole-archive et. al. 
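Outside the DLL special cases, the GNU-ld branches mostly share one template, `$CC -shared $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname -o $lib`. Substituting hypothetical values (gcc, wl='-Wl,', a library called libfoo with soname libfoo.so.1, two PIC objects, and -lm standing in for $deplibs) shows the link line libtool eventually runs; the file names are illustrative only:

    gcc -shared .libs/a.o .libs/b.o -lm \
        -Wl,-soname -Wl,libfoo.so.1 \
        -o .libs/libfoo.so.1.0.0
    # The archive_expsym_cmds variant additionally appends
    #   -Wl,-retain-symbols-file -Wl,<export-symbols-file>
    # so that only the symbols listed in $export_symbols are exported.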
+ whole_archive_flag_spec= + ;; + *) + # ancient GNU ld didn't support --whole-archive et. al. + if $LD --help 2>&1 | egrep 'no-whole-archive' > /dev/null; then + whole_archive_flag_spec="$wlarc"'--whole-archive$convenience '"$wlarc"'--no-whole-archive' + else + whole_archive_flag_spec= + fi + ;; + esac + fi +else + # PORTME fill in a description of your system's linker (not GNU ld) + case $host_os in + aix3*) + allow_undefined_flag=unsupported + always_export_symbols=yes + archive_expsym_cmds='$LD -o $output_objdir/$soname $libobjs $deplibs $linker_flags -bE:$export_symbols -T512 -H512 -bM:SRE~$AR $AR_FLAGS $lib $output_objdir/$soname' + # Note: this linker hardcodes the directories in LIBPATH if there + # are no directories specified by -L. + hardcode_minus_L=yes + if test "$GCC" = yes && test -z "$link_static_flag"; then + # Neither direct hardcoding nor static linking is supported with a + # broken collect2. + hardcode_direct=unsupported + fi + ;; + + aix4* | aix5*) + if test "$host_cpu" = ia64; then + # On IA64, the linker does run time linking by default, so we don't + # have to do anything special. + aix_use_runtimelinking=no + exp_sym_flag='-Bexport' + no_entry_flag="" + else + aix_use_runtimelinking=no + + # Test if we are trying to use run time linking or normal + # AIX style linking. If -brtl is somewhere in LDFLAGS, we + # need to do runtime linking. + case $host_os in aix4.[[23]]|aix4.[[23]].*|aix5*) + for ld_flag in $LDFLAGS; do + if (test $ld_flag = "-brtl" || test $ld_flag = "-Wl,-brtl"); then + aix_use_runtimelinking=yes + break + fi + done + esac + + exp_sym_flag='-bexport' + no_entry_flag='-bnoentry' + fi + + # When large executables or shared objects are built, AIX ld can + # have problems creating the table of contents. If linking a library + # or program results in "error TOC overflow" add -mminimal-toc to + # CXXFLAGS/CFLAGS for g++/gcc. In the cases where that is not + # enough to fix the problem, add -Wl,-bbigtoc to LDFLAGS. + + hardcode_direct=yes + archive_cmds='' + hardcode_libdir_separator=':' + if test "$GCC" = yes; then + case $host_os in aix4.[[012]]|aix4.[[012]].*) + collect2name=`${CC} -print-prog-name=collect2` + if test -f "$collect2name" && \ + strings "$collect2name" | grep resolve_lib_name >/dev/null + then + # We have reworked collect2 + hardcode_direct=yes + else + # We have old collect2 + hardcode_direct=unsupported + # It fails to find uninstalled libraries when the uninstalled + # path is not listed in the libpath. Setting hardcode_minus_L + # to unsupported forces relinking + hardcode_minus_L=yes + hardcode_libdir_flag_spec='-L$libdir' + hardcode_libdir_separator= + fi + esac + + shared_flag='-shared' + else + # not using gcc + if test "$host_cpu" = ia64; then + shared_flag='${wl}-G' + else + if test "$aix_use_runtimelinking" = yes; then + shared_flag='${wl}-G' + else + shared_flag='${wl}-bM:SRE' + fi + fi + fi + + # It seems that -bexpall can do strange things, so it is better to + # generate a list of symbols to export. + always_export_symbols=yes + if test "$aix_use_runtimelinking" = yes; then + # Warning - without using the other runtime loading flags (-brtl), + # -berok will link without error, but may produce a broken library. 
+ allow_undefined_flag='-berok' + hardcode_libdir_flag_spec='${wl}-blibpath:$libdir:/usr/lib:/lib' + archive_expsym_cmds="\$CC"' -o $output_objdir/$soname $libobjs $deplibs $compiler_flags `if test "x${allow_undefined_flag}" != "x"; then echo "${wl}${allow_undefined_flag}"; else :; fi` '"\${wl}$no_entry_flag \${wl}$exp_sym_flag:\$export_symbols $shared_flag" + else + if test "$host_cpu" = ia64; then + hardcode_libdir_flag_spec='${wl}-R $libdir:/usr/lib:/lib' + allow_undefined_flag="-z nodefs" + archive_expsym_cmds="\$CC $shared_flag"' -o $output_objdir/$soname ${wl}-h$soname $libobjs $deplibs $compiler_flags ${wl}${allow_undefined_flag} '"\${wl}$no_entry_flag \${wl}$exp_sym_flag:\$export_symbols" + else + hardcode_libdir_flag_spec='${wl}-bnolibpath ${wl}-blibpath:$libdir:/usr/lib:/lib' + # Warning - without using the other run time loading flags, + # -berok will link without error, but may produce a broken library. + allow_undefined_flag='${wl}-berok' + # This is a bit strange, but is similar to how AIX traditionally builds + # it's shared libraries. + archive_expsym_cmds="\$CC $shared_flag"' -o $output_objdir/$soname $libobjs $deplibs $compiler_flags ${allow_undefined_flag} '"\${wl}$no_entry_flag \${wl}$exp_sym_flag:\$export_symbols"' ~$AR -crlo $objdir/$libname$release.a $objdir/$soname' + fi + fi + ;; + + amigaos*) + archive_cmds='$rm $output_objdir/a2ixlibrary.data~$echo "#define NAME $libname" > $output_objdir/a2ixlibrary.data~$echo "#define LIBRARY_ID 1" >> $output_objdir/a2ixlibrary.data~$echo "#define VERSION $major" >> $output_objdir/a2ixlibrary.data~$echo "#define REVISION $revision" >> $output_objdir/a2ixlibrary.data~$AR $AR_FLAGS $lib $libobjs~$RANLIB $lib~(cd $output_objdir && a2ixlibrary -32)' + hardcode_libdir_flag_spec='-L$libdir' + hardcode_minus_L=yes + # see comment about different semantics on the GNU ld section + ld_shlibs=no + ;; + + cygwin* | mingw* | pw32*) + # When not using gcc, we currently assume that we are using + # Microsoft Visual C++. + # hardcode_libdir_flag_spec is actually meaningless, as there is + # no search path for DLLs. + hardcode_libdir_flag_spec=' ' + allow_undefined_flag=unsupported + # Tell ltmain to make .lib files, not .a files. + libext=lib + # FIXME: Setting linknames here is a bad hack. + archive_cmds='$CC -o $lib $libobjs $compiler_flags `echo "$deplibs" | sed -e '\''s/ -lc$//'\''` -link -dll~linknames=' + # The linker will automatically build a .lib file if we build a DLL. + old_archive_from_new_cmds='true' + # FIXME: Should let the user specify the lib program. + old_archive_cmds='lib /OUT:$oldlib$oldobjs$old_deplibs' + fix_srcfile_path='`cygpath -w "$srcfile"`' + ;; + + darwin* | rhapsody*) + case "$host_os" in + rhapsody* | darwin1.[[012]]) + allow_undefined_flag='-undefined suppress' + ;; + *) # Darwin 1.3 on + allow_undefined_flag='-flat_namespace -undefined suppress' + ;; + esac + # FIXME: Relying on posixy $() will cause problems for + # cross-compilation, but unfortunately the echo tests do not + # yet detect zsh echo's removal of \ escapes. 
+ archive_cmds='$nonopt $(test "x$module" = xyes && echo -bundle || echo -dynamiclib) $allow_undefined_flag -o $lib $libobjs $deplibs$linker_flags -install_name $rpath/$soname $verstring' + # We need to add '_' to the symbols in $export_symbols first + #archive_expsym_cmds="$archive_cmds"' && strip -s $export_symbols' + hardcode_direct=yes + hardcode_shlibpath_var=no + whole_archive_flag_spec='-all_load $convenience' + ;; + + freebsd1*) + ld_shlibs=no + ;; + + # FreeBSD 2.2.[012] allows us to include c++rt0.o to get C++ constructor + # support. Future versions do this automatically, but an explicit c++rt0.o + # does not break anything, and helps significantly (at the cost of a little + # extra space). + freebsd2.2*) + archive_cmds='$LD -Bshareable -o $lib $libobjs $deplibs $linker_flags /usr/lib/c++rt0.o' + hardcode_libdir_flag_spec='-R$libdir' + hardcode_direct=yes + hardcode_shlibpath_var=no + ;; + + # Unfortunately, older versions of FreeBSD 2 do not have this feature. + freebsd2*) + archive_cmds='$LD -Bshareable -o $lib $libobjs $deplibs $linker_flags' + hardcode_direct=yes + hardcode_minus_L=yes + hardcode_shlibpath_var=no + ;; + + # FreeBSD 3 and greater uses gcc -shared to do shared libraries. + freebsd*) + archive_cmds='$CC -shared -o $lib $libobjs $deplibs $compiler_flags' + hardcode_libdir_flag_spec='-R$libdir' + hardcode_direct=yes + hardcode_shlibpath_var=no + ;; + + hpux9* | hpux10* | hpux11*) + case $host_os in + hpux9*) archive_cmds='$rm $output_objdir/$soname~$LD -b +b $install_libdir -o $output_objdir/$soname $libobjs $deplibs $linker_flags~test $output_objdir/$soname = $lib || mv $output_objdir/$soname $lib' ;; + *) archive_cmds='$LD -b +h $soname +b $install_libdir -o $lib $libobjs $deplibs $linker_flags' ;; + esac + hardcode_libdir_flag_spec='${wl}+b ${wl}$libdir' + hardcode_libdir_separator=: + hardcode_direct=yes + hardcode_minus_L=yes # Not in the search PATH, but as the default + # location of the library. 
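The hardcode_libdir_flag_spec strings above are deliberately single-quoted so that `$libdir` (and `${wl}`) are substituted later with eval, once the real install directory is known. A small expansion sketch, assuming libdir=/usr/local/lib:

    libdir=/usr/local/lib
    wl=-Wl,

    hardcode_libdir_flag_spec='-R$libdir'               # FreeBSD-style spec from above
    eval flag=\"$hardcode_libdir_flag_spec\"
    echo "$flag"                                        # -R/usr/local/lib

    hardcode_libdir_flag_spec='${wl}+b ${wl}$libdir'    # HP-UX-style spec from above
    eval flag=\"$hardcode_libdir_flag_spec\"
    echo "$flag"                                        # -Wl,+b -Wl,/usr/local/lib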
+ export_dynamic_flag_spec='${wl}-E' + ;; + + irix5* | irix6*) + if test "$GCC" = yes; then + archive_cmds='$CC -shared $libobjs $deplibs $compiler_flags ${wl}-soname ${wl}$soname `test -n "$verstring" && echo ${wl}-set_version ${wl}$verstring` ${wl}-update_registry ${wl}${output_objdir}/so_locations -o $lib' + else + archive_cmds='$LD -shared $libobjs $deplibs $linker_flags -soname $soname `test -n "$verstring" && echo -set_version $verstring` -update_registry ${output_objdir}/so_locations -o $lib' + fi + hardcode_libdir_flag_spec='${wl}-rpath ${wl}$libdir' + hardcode_libdir_separator=: + link_all_deplibs=yes + ;; + + netbsd*) + if echo __ELF__ | $CC -E - | grep __ELF__ >/dev/null; then + archive_cmds='$LD -Bshareable -o $lib $libobjs $deplibs $linker_flags' # a.out + else + archive_cmds='$LD -shared -o $lib $libobjs $deplibs $linker_flags' # ELF + fi + hardcode_libdir_flag_spec='-R$libdir' + hardcode_direct=yes + hardcode_shlibpath_var=no + ;; + + newsos6) + archive_cmds='$LD -G -h $soname -o $lib $libobjs $deplibs $linker_flags' + hardcode_direct=yes + hardcode_libdir_flag_spec='${wl}-rpath ${wl}$libdir' + hardcode_libdir_separator=: + hardcode_shlibpath_var=no + ;; + + openbsd*) + hardcode_direct=yes + hardcode_shlibpath_var=no + if test -z "`echo __ELF__ | $CC -E - | grep __ELF__`" || test "$host_os-$host_cpu" = "openbsd2.8-powerpc"; then + archive_cmds='$CC -shared $pic_flag -o $lib $libobjs $deplibs $linker_flags' + hardcode_libdir_flag_spec='${wl}-rpath,$libdir' + export_dynamic_flag_spec='${wl}-E' + else + case "$host_os" in + openbsd[[01]].* | openbsd2.[[0-7]] | openbsd2.[[0-7]].*) + archive_cmds='$LD -Bshareable -o $lib $libobjs $deplibs $linker_flags' + hardcode_libdir_flag_spec='-R$libdir' + ;; + *) + archive_cmds='$CC -shared $pic_flag -o $lib $libobjs $deplibs $linker_flags' + hardcode_libdir_flag_spec='${wl}-rpath,$libdir' + ;; + esac + fi + ;; + + os2*) + hardcode_libdir_flag_spec='-L$libdir' + hardcode_minus_L=yes + allow_undefined_flag=unsupported + archive_cmds='$echo "LIBRARY $libname INITINSTANCE" > $output_objdir/$libname.def~$echo "DESCRIPTION \"$libname\"" >> $output_objdir/$libname.def~$echo DATA >> $output_objdir/$libname.def~$echo " SINGLE NONSHARED" >> $output_objdir/$libname.def~$echo EXPORTS >> $output_objdir/$libname.def~emxexp $libobjs >> $output_objdir/$libname.def~$CC -Zdll -Zcrtdll -o $lib $libobjs $deplibs $compiler_flags $output_objdir/$libname.def' + old_archive_from_new_cmds='emximp -o $output_objdir/$libname.a $output_objdir/$libname.def' + ;; + + osf3*) + if test "$GCC" = yes; then + allow_undefined_flag=' ${wl}-expect_unresolved ${wl}\*' + archive_cmds='$CC -shared${allow_undefined_flag} $libobjs $deplibs $compiler_flags ${wl}-soname ${wl}$soname `test -n "$verstring" && echo ${wl}-set_version ${wl}$verstring` ${wl}-update_registry ${wl}${output_objdir}/so_locations -o $lib' + else + allow_undefined_flag=' -expect_unresolved \*' + archive_cmds='$LD -shared${allow_undefined_flag} $libobjs $deplibs $linker_flags -soname $soname `test -n "$verstring" && echo -set_version $verstring` -update_registry ${output_objdir}/so_locations -o $lib' + fi + hardcode_libdir_flag_spec='${wl}-rpath ${wl}$libdir' + hardcode_libdir_separator=: + ;; + + osf4* | osf5*) # as osf3* with the addition of -msym flag + if test "$GCC" = yes; then + allow_undefined_flag=' ${wl}-expect_unresolved ${wl}\*' + archive_cmds='$CC -shared${allow_undefined_flag} $libobjs $deplibs $compiler_flags ${wl}-msym ${wl}-soname ${wl}$soname `test -n "$verstring" && echo ${wl}-set_version 
${wl}$verstring` ${wl}-update_registry ${wl}${output_objdir}/so_locations -o $lib' + hardcode_libdir_flag_spec='${wl}-rpath ${wl}$libdir' + else + allow_undefined_flag=' -expect_unresolved \*' + archive_cmds='$LD -shared${allow_undefined_flag} $libobjs $deplibs $linker_flags -msym -soname $soname `test -n "$verstring" && echo -set_version $verstring` -update_registry ${output_objdir}/so_locations -o $lib' + archive_expsym_cmds='for i in `cat $export_symbols`; do printf "-exported_symbol " >> $lib.exp; echo "\$i" >> $lib.exp; done; echo "-hidden">> $lib.exp~ + $LD -shared${allow_undefined_flag} -input $lib.exp $linker_flags $libobjs $deplibs -soname $soname `test -n "$verstring" && echo -set_version $verstring` -update_registry ${objdir}/so_locations -o $lib~$rm $lib.exp' + + #Both c and cxx compiler support -rpath directly + hardcode_libdir_flag_spec='-rpath $libdir' + fi + hardcode_libdir_separator=: + ;; + + sco3.2v5*) + archive_cmds='$LD -G -h $soname -o $lib $libobjs $deplibs $linker_flags' + hardcode_shlibpath_var=no + runpath_var=LD_RUN_PATH + hardcode_runpath_var=yes + export_dynamic_flag_spec='${wl}-Bexport' + ;; + + solaris*) + # gcc --version < 3.0 without binutils cannot create self contained + # shared libraries reliably, requiring libgcc.a to resolve some of + # the object symbols generated in some cases. Libraries that use + # assert need libgcc.a to resolve __eprintf, for example. Linking + # a copy of libgcc.a into every shared library to guarantee resolving + # such symbols causes other problems: According to Tim Van Holder + # , C++ libraries end up with a separate + # (to the application) exception stack for one thing. + no_undefined_flag=' -z defs' + if test "$GCC" = yes; then + case `$CC --version 2>/dev/null` in + [[12]].*) + cat <&2 + +*** Warning: Releases of GCC earlier than version 3.0 cannot reliably +*** create self contained shared libraries on Solaris systems, without +*** introducing a dependency on libgcc.a. Therefore, libtool is disabling +*** -no-undefined support, which will at least allow you to build shared +*** libraries. However, you may find that when you link such libraries +*** into an application without using GCC, you have to manually add +*** \`gcc --print-libgcc-file-name\` to the link command. We urge you to +*** upgrade to a newer version of GCC. Another option is to rebuild your +*** current GCC to use the GNU linker from GNU binutils 2.9.1 or newer. + +EOF + no_undefined_flag= + ;; + esac + fi + # $CC -shared without GNU ld will not create a library from C++ + # object files and a static libstdc++, better avoid it by now + archive_cmds='$LD -G${allow_undefined_flag} -h $soname -o $lib $libobjs $deplibs $linker_flags' + archive_expsym_cmds='$echo "{ global:" > $lib.exp~cat $export_symbols | sed -e "s/\(.*\)/\1;/" >> $lib.exp~$echo "local: *; };" >> $lib.exp~ + $LD -G${allow_undefined_flag} -M $lib.exp -h $soname -o $lib $libobjs $deplibs $linker_flags~$rm $lib.exp' + hardcode_libdir_flag_spec='-R$libdir' + hardcode_shlibpath_var=no + case $host_os in + solaris2.[[0-5]] | solaris2.[[0-5]].*) ;; + *) # Supported since Solaris 2.6 (maybe 2.5.1?) + whole_archive_flag_spec='-z allextract$convenience -z defaultextract' ;; + esac + link_all_deplibs=yes + ;; + + sunos4*) + if test "x$host_vendor" = xsequent; then + # Use $CC to link under sequent, because it throws in some extra .o + # files that make .init and .fini sections work. 
+ archive_cmds='$CC -G ${wl}-h $soname -o $lib $libobjs $deplibs $compiler_flags' + else + archive_cmds='$LD -assert pure-text -Bstatic -o $lib $libobjs $deplibs $linker_flags' + fi + hardcode_libdir_flag_spec='-L$libdir' + hardcode_direct=yes + hardcode_minus_L=yes + hardcode_shlibpath_var=no + ;; + + sysv4) + if test "x$host_vendor" = xsno; then + archive_cmds='$LD -G -Bsymbolic -h $soname -o $lib $libobjs $deplibs $linker_flags' + hardcode_direct=yes # is this really true??? + else + archive_cmds='$LD -G -h $soname -o $lib $libobjs $deplibs $linker_flags' + hardcode_direct=no #Motorola manual says yes, but my tests say they lie + fi + runpath_var='LD_RUN_PATH' + hardcode_shlibpath_var=no + ;; + + sysv4.3*) + archive_cmds='$LD -G -h $soname -o $lib $libobjs $deplibs $linker_flags' + hardcode_shlibpath_var=no + export_dynamic_flag_spec='-Bexport' + ;; + + sysv5*) + no_undefined_flag=' -z text' + # $CC -shared without GNU ld will not create a library from C++ + # object files and a static libstdc++, better avoid it by now + archive_cmds='$LD -G${allow_undefined_flag} -h $soname -o $lib $libobjs $deplibs $linker_flags' + archive_expsym_cmds='$echo "{ global:" > $lib.exp~cat $export_symbols | sed -e "s/\(.*\)/\1;/" >> $lib.exp~$echo "local: *; };" >> $lib.exp~ + $LD -G${allow_undefined_flag} -M $lib.exp -h $soname -o $lib $libobjs $deplibs $linker_flags~$rm $lib.exp' + hardcode_libdir_flag_spec= + hardcode_shlibpath_var=no + runpath_var='LD_RUN_PATH' + ;; + + uts4*) + archive_cmds='$LD -G -h $soname -o $lib $libobjs $deplibs $linker_flags' + hardcode_libdir_flag_spec='-L$libdir' + hardcode_shlibpath_var=no + ;; + + dgux*) + archive_cmds='$LD -G -h $soname -o $lib $libobjs $deplibs $linker_flags' + hardcode_libdir_flag_spec='-L$libdir' + hardcode_shlibpath_var=no + ;; + + sysv4*MP*) + if test -d /usr/nec; then + archive_cmds='$LD -G -h $soname -o $lib $libobjs $deplibs $linker_flags' + hardcode_shlibpath_var=no + runpath_var=LD_RUN_PATH + hardcode_runpath_var=yes + ld_shlibs=yes + fi + ;; + + sysv4.2uw2*) + archive_cmds='$LD -G -o $lib $libobjs $deplibs $linker_flags' + hardcode_direct=yes + hardcode_minus_L=no + hardcode_shlibpath_var=no + hardcode_runpath_var=yes + runpath_var=LD_RUN_PATH + ;; + + sysv5uw7* | unixware7*) + no_undefined_flag='${wl}-z ${wl}text' + if test "$GCC" = yes; then + archive_cmds='$CC -shared ${wl}-h ${wl}$soname -o $lib $libobjs $deplibs $compiler_flags' + else + archive_cmds='$CC -G ${wl}-h ${wl}$soname -o $lib $libobjs $deplibs $compiler_flags' + fi + runpath_var='LD_RUN_PATH' + hardcode_shlibpath_var=no + ;; + + *) + ld_shlibs=no + ;; + esac +fi +AC_MSG_RESULT([$ld_shlibs]) +test "$ld_shlibs" = no && can_build_shared=no +## +## END FIXME + +## FIXME: this should be a separate macro +## +# Check hardcoding attributes. +AC_MSG_CHECKING([how to hardcode library paths into programs]) +hardcode_action= +if test -n "$hardcode_libdir_flag_spec" || \ + test -n "$runpath_var"; then + + # We can hardcode non-existant directories. + if test "$hardcode_direct" != no && + # If the only mechanism to avoid hardcoding is shlibpath_var, we + # have to relink, otherwise we might link with an installed library + # when we should be linking with a yet-to-be-installed one + ## test "$hardcode_shlibpath_var" != no && + test "$hardcode_minus_L" != no; then + # Linking always hardcodes the temporary library directory. + hardcode_action=relink + else + # We can link without hardcoding, and we can hardcode nonexisting dirs. 
+ hardcode_action=immediate + fi +else + # We cannot hardcode anything, or else we can only hardcode existing + # directories. + hardcode_action=unsupported +fi +AC_MSG_RESULT([$hardcode_action]) +## +## END FIXME + +## FIXME: this should be a separate macro +## +striplib= +old_striplib= +AC_MSG_CHECKING([whether stripping libraries is possible]) +if test -n "$STRIP" && $STRIP -V 2>&1 | grep "GNU strip" >/dev/null; then + test -z "$old_striplib" && old_striplib="$STRIP --strip-debug" + test -z "$striplib" && striplib="$STRIP --strip-unneeded" + AC_MSG_RESULT([yes]) +else + AC_MSG_RESULT([no]) +fi +## +## END FIXME + +reload_cmds='$LD$reload_flag -o $output$reload_objs' +test -z "$deplibs_check_method" && deplibs_check_method=unknown + +## FIXME: this should be a separate macro +## +# PORTME Fill in your ld.so characteristics +AC_MSG_CHECKING([dynamic linker characteristics]) +library_names_spec= +libname_spec='lib$name' +soname_spec= +postinstall_cmds= +postuninstall_cmds= +finish_cmds= +finish_eval= +shlibpath_var= +shlibpath_overrides_runpath=unknown +version_type=none +dynamic_linker="$host_os ld.so" +sys_lib_dlsearch_path_spec="/lib /usr/lib" +sys_lib_search_path_spec="/lib /usr/lib /usr/local/lib" + +case $host_os in +aix3*) + version_type=linux + library_names_spec='${libname}${release}.so$versuffix $libname.a' + shlibpath_var=LIBPATH + + # AIX has no versioning support, so we append a major version to the name. + soname_spec='${libname}${release}.so$major' + ;; + +aix4* | aix5*) + version_type=linux + if test "$host_cpu" = ia64; then + # AIX 5 supports IA64 + library_names_spec='${libname}${release}.so$major ${libname}${release}.so$versuffix $libname.so' + shlibpath_var=LD_LIBRARY_PATH + else + # With GCC up to 2.95.x, collect2 would create an import file + # for dependence libraries. The import file would start with + # the line `#! .'. This would cause the generated library to + # depend on `.', always an invalid library. This was fixed in + # development snapshots of GCC prior to 3.0. + case $host_os in + aix4 | aix4.[[01]] | aix4.[[01]].*) + if { echo '#if __GNUC__ > 2 || (__GNUC__ == 2 && __GNUC_MINOR__ >= 97)' + echo ' yes ' + echo '#endif'; } | ${CC} -E - | grep yes > /dev/null; then + : + else + can_build_shared=no + fi + ;; + esac + # AIX (on Power*) has no versioning support, so currently we can + # not hardcode correct soname into executable. Probably we can + # add versioning support to collect2, so additional links can + # be useful in future. + if test "$aix_use_runtimelinking" = yes; then + # If using run time linking (on AIX 4.2 or later) use lib.so + # instead of lib.a to let people know that these are not + # typical AIX shared libraries. + library_names_spec='${libname}${release}.so$versuffix ${libname}${release}.so$major $libname.so' + else + # We preserve .a as extension for shared libraries through AIX4.2 + # and later when we are not doing run time linking. + library_names_spec='${libname}${release}.a $libname.a' + soname_spec='${libname}${release}.so$major' + fi + shlibpath_var=LIBPATH + fi + ;; + +amigaos*) + library_names_spec='$libname.ixlibrary $libname.a' + # Create ${libname}_ixlibrary.a entries in /sys/libs. 
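The stripping check above only enables stripping when GNU strip is detected, since its --strip-debug and --strip-unneeded options are the ones known to leave the archive index and the dynamic symbol table intact. With a hypothetical installed pair libfoo.a and .libs/libfoo.so.1, that amounts to:

    if strip -V 2>&1 | grep "GNU strip" >/dev/null; then
      strip --strip-debug    libfoo.a            # old_striplib, for static archives
      strip --strip-unneeded .libs/libfoo.so.1   # striplib, for shared libraries
    else
      echo "non-GNU strip: leave installed libraries untouched"
    fi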
+ finish_eval='for lib in `ls $libdir/*.ixlibrary 2>/dev/null`; do libname=`$echo "X$lib" | $Xsed -e '\''s%^.*/\([[^/]]*\)\.ixlibrary$%\1%'\''`; test $rm /sys/libs/${libname}_ixlibrary.a; $show "(cd /sys/libs && $LN_S $lib ${libname}_ixlibrary.a)"; (cd /sys/libs && $LN_S $lib ${libname}_ixlibrary.a) || exit 1; done' + ;; + +beos*) + library_names_spec='${libname}.so' + dynamic_linker="$host_os ld.so" + shlibpath_var=LIBRARY_PATH + ;; + +bsdi4*) + version_type=linux + need_version=no + library_names_spec='${libname}${release}.so$versuffix ${libname}${release}.so$major $libname.so' + soname_spec='${libname}${release}.so$major' + finish_cmds='PATH="\$PATH:/sbin" ldconfig $libdir' + shlibpath_var=LD_LIBRARY_PATH + sys_lib_search_path_spec="/shlib /usr/lib /usr/X11/lib /usr/contrib/lib /lib /usr/local/lib" + sys_lib_dlsearch_path_spec="/shlib /usr/lib /usr/local/lib" + export_dynamic_flag_spec=-rdynamic + # the default ld.so.conf also contains /usr/contrib/lib and + # /usr/X11R6/lib (/usr/X11 is a link to /usr/X11R6), but let us allow + # libtool to hard-code these into programs + ;; + +cygwin* | mingw* | pw32*) + version_type=windows + need_version=no + need_lib_prefix=no + case $GCC,$host_os in + yes,cygwin*) + library_names_spec='$libname.dll.a' + soname_spec='`echo ${libname} | sed -e 's/^lib/cyg/'``echo ${release} | sed -e 's/[[.]]/-/g'`${versuffix}.dll' + postinstall_cmds='dlpath=`bash 2>&1 -c '\''. $dir/${file}i;echo \$dlname'\''`~ + dldir=$destdir/`dirname \$dlpath`~ + test -d \$dldir || mkdir -p \$dldir~ + $install_prog .libs/$dlname \$dldir/$dlname' + postuninstall_cmds='dldll=`bash 2>&1 -c '\''. $file; echo \$dlname'\''`~ + dlpath=$dir/\$dldll~ + $rm \$dlpath' + ;; + yes,mingw*) + library_names_spec='${libname}`echo ${release} | sed -e 's/[[.]]/-/g'`${versuffix}.dll' + sys_lib_search_path_spec=`$CC -print-search-dirs | grep "^libraries:" | sed -e "s/^libraries://" -e "s/;/ /g"` + ;; + yes,pw32*) + library_names_spec='`echo ${libname} | sed -e 's/^lib/pw/'``echo ${release} | sed -e 's/[.]/-/g'`${versuffix}.dll' + ;; + *) + library_names_spec='${libname}`echo ${release} | sed -e 's/[[.]]/-/g'`${versuffix}.dll $libname.lib' + ;; + esac + dynamic_linker='Win32 ld.exe' + # FIXME: first we should search . and the directory the executable is in + shlibpath_var=PATH + ;; + +darwin* | rhapsody*) + dynamic_linker="$host_os dyld" + version_type=darwin + need_lib_prefix=no + need_version=no + # FIXME: Relying on posixy $() will cause problems for + # cross-compilation, but unfortunately the echo tests do not + # yet detect zsh echo's removal of \ escapes. 
+ library_names_spec='${libname}${release}${versuffix}.$(test .$module = .yes && echo so || echo dylib) ${libname}${release}${major}.$(test .$module = .yes && echo so || echo dylib) ${libname}.$(test .$module = .yes && echo so || echo dylib)' + soname_spec='${libname}${release}${major}.$(test .$module = .yes && echo so || echo dylib)' + shlibpath_overrides_runpath=yes + shlibpath_var=DYLD_LIBRARY_PATH + ;; + +freebsd1*) + dynamic_linker=no + ;; + +freebsd*) + objformat=`test -x /usr/bin/objformat && /usr/bin/objformat || echo aout` + version_type=freebsd-$objformat + case $version_type in + freebsd-elf*) + library_names_spec='${libname}${release}.so$versuffix ${libname}${release}.so $libname.so' + need_version=no + need_lib_prefix=no + ;; + freebsd-*) + library_names_spec='${libname}${release}.so$versuffix $libname.so$versuffix' + need_version=yes + ;; + esac + shlibpath_var=LD_LIBRARY_PATH + case $host_os in + freebsd2*) + shlibpath_overrides_runpath=yes + ;; + *) + shlibpath_overrides_runpath=no + hardcode_into_libs=yes + ;; + esac + ;; + +gnu*) + version_type=linux + need_lib_prefix=no + need_version=no + library_names_spec='${libname}${release}.so$versuffix ${libname}${release}.so${major} ${libname}.so' + soname_spec='${libname}${release}.so$major' + shlibpath_var=LD_LIBRARY_PATH + hardcode_into_libs=yes + ;; + +hpux9* | hpux10* | hpux11*) + # Give a soname corresponding to the major version so that dld.sl refuses to + # link against other versions. + dynamic_linker="$host_os dld.sl" + version_type=sunos + need_lib_prefix=no + need_version=no + shlibpath_var=SHLIB_PATH + shlibpath_overrides_runpath=no # +s is required to enable SHLIB_PATH + library_names_spec='${libname}${release}.sl$versuffix ${libname}${release}.sl$major $libname.sl' + soname_spec='${libname}${release}.sl$major' + # HP-UX runs *really* slowly unless shared libraries are mode 555. + postinstall_cmds='chmod 555 $lib' + ;; + +irix5* | irix6*) + version_type=irix + need_lib_prefix=no + need_version=no + soname_spec='${libname}${release}.so$major' + library_names_spec='${libname}${release}.so$versuffix ${libname}${release}.so$major ${libname}${release}.so $libname.so' + case $host_os in + irix5*) + libsuff= shlibsuff= + ;; + *) + case $LD in # libtool.m4 will add one of these switches to LD + *-32|*"-32 ") libsuff= shlibsuff= libmagic=32-bit;; + *-n32|*"-n32 ") libsuff=32 shlibsuff=N32 libmagic=N32;; + *-64|*"-64 ") libsuff=64 shlibsuff=64 libmagic=64-bit;; + *) libsuff= shlibsuff= libmagic=never-match;; + esac + ;; + esac + shlibpath_var=LD_LIBRARY${shlibsuff}_PATH + shlibpath_overrides_runpath=no + sys_lib_search_path_spec="/usr/lib${libsuff} /lib${libsuff} /usr/local/lib${libsuff}" + sys_lib_dlsearch_path_spec="/usr/lib${libsuff} /lib${libsuff}" + ;; + +# No shared lib support for Linux oldld, aout, or coff. +linux-gnuoldld* | linux-gnuaout* | linux-gnucoff*) + dynamic_linker=no + ;; + +# This must be Linux ELF. +linux-gnu*) + version_type=linux + need_lib_prefix=no + need_version=no + library_names_spec='${libname}${release}.so$versuffix ${libname}${release}.so$major $libname.so' + soname_spec='${libname}${release}.so$major' + finish_cmds='PATH="\$PATH:/sbin" ldconfig -n $libdir' + shlibpath_var=LD_LIBRARY_PATH + shlibpath_overrides_runpath=no + # This implies no fast_install, which is unacceptable. + # Some rework will be needed to allow for fast_install + # before this can be enabled. 
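On GNU/Linux the three-entry library_names_spec above expands to the fully versioned file, the soname link, and the unversioned development link, and finish_cmds then runs `ldconfig -n $libdir` to maintain the soname symlink. A small expansion sketch with made-up version values (versuffix=.1.0.0, major=.1):

    libname=libfoo  release=  versuffix=.1.0.0  major=.1

    library_names_spec='${libname}${release}.so$versuffix ${libname}${release}.so$major $libname.so'
    soname_spec='${libname}${release}.so$major'

    eval library_names=\"$library_names_spec\"
    eval soname=\"$soname_spec\"
    echo "$library_names"    # libfoo.so.1.0.0 libfoo.so.1 libfoo.so
    echo "$soname"           # libfoo.so.1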
+ hardcode_into_libs=yes + + # We used to test for /lib/ld.so.1 and disable shared libraries on + # powerpc, because MkLinux only supported shared libraries with the + # GNU dynamic linker. Since this was broken with cross compilers, + # most powerpc-linux boxes support dynamic linking these days and + # people can always --disable-shared, the test was removed, and we + # assume the GNU/Linux dynamic linker is in use. + dynamic_linker='GNU/Linux ld.so' + ;; + +netbsd*) + version_type=sunos + need_lib_prefix=no + need_version=no + if echo __ELF__ | $CC -E - | grep __ELF__ >/dev/null; then + library_names_spec='${libname}${release}.so$versuffix ${libname}.so$versuffix' + finish_cmds='PATH="\$PATH:/sbin" ldconfig -m $libdir' + dynamic_linker='NetBSD (a.out) ld.so' + else + library_names_spec='${libname}${release}.so$versuffix ${libname}${release}.so$major ${libname}${release}.so ${libname}.so' + soname_spec='${libname}${release}.so$major' + dynamic_linker='NetBSD ld.elf_so' + fi + shlibpath_var=LD_LIBRARY_PATH + shlibpath_overrides_runpath=yes + hardcode_into_libs=yes + ;; + +newsos6) + version_type=linux + library_names_spec='${libname}${release}.so$versuffix ${libname}${release}.so$major $libname.so' + shlibpath_var=LD_LIBRARY_PATH + shlibpath_overrides_runpath=yes + ;; + +openbsd*) + version_type=sunos + need_lib_prefix=no + need_version=no + if test -z "`echo __ELF__ | $CC -E - | grep __ELF__`" || test "$host_os-$host_cpu" = "openbsd2.8-powerpc"; then + case "$host_os" in + openbsd2.[[89]] | openbsd2.[[89]].*) + shlibpath_overrides_runpath=no + ;; + *) + shlibpath_overrides_runpath=yes + ;; + esac + else + shlibpath_overrides_runpath=yes + fi + library_names_spec='${libname}${release}.so$versuffix ${libname}.so$versuffix' + finish_cmds='PATH="\$PATH:/sbin" ldconfig -m $libdir' + shlibpath_var=LD_LIBRARY_PATH + ;; + +os2*) + libname_spec='$name' + need_lib_prefix=no + library_names_spec='$libname.dll $libname.a' + dynamic_linker='OS/2 ld.exe' + shlibpath_var=LIBPATH + ;; + +osf3* | osf4* | osf5*) + version_type=osf + need_version=no + soname_spec='${libname}${release}.so' + library_names_spec='${libname}${release}.so$versuffix ${libname}${release}.so $libname.so' + shlibpath_var=LD_LIBRARY_PATH + sys_lib_search_path_spec="/usr/shlib /usr/ccs/lib /usr/lib/cmplrs/cc /usr/lib /usr/local/lib /var/shlib" + sys_lib_dlsearch_path_spec="$sys_lib_search_path_spec" + ;; + +sco3.2v5*) + version_type=osf + soname_spec='${libname}${release}.so$major' + library_names_spec='${libname}${release}.so$versuffix ${libname}${release}.so$major $libname.so' + shlibpath_var=LD_LIBRARY_PATH + ;; + +solaris*) + version_type=linux + need_lib_prefix=no + need_version=no + library_names_spec='${libname}${release}.so$versuffix ${libname}${release}.so$major $libname.so' + soname_spec='${libname}${release}.so$major' + shlibpath_var=LD_LIBRARY_PATH + shlibpath_overrides_runpath=yes + hardcode_into_libs=yes + # ldd complains unless libraries are executable + postinstall_cmds='chmod +x $lib' + ;; + +sunos4*) + version_type=sunos + library_names_spec='${libname}${release}.so$versuffix ${libname}.so$versuffix' + finish_cmds='PATH="\$PATH:/usr/etc" ldconfig $libdir' + shlibpath_var=LD_LIBRARY_PATH + shlibpath_overrides_runpath=yes + if test "$with_gnu_ld" = yes; then + need_lib_prefix=no + fi + need_version=yes + ;; + +sysv4 | sysv4.2uw2* | sysv4.3* | sysv5*) + version_type=linux + library_names_spec='${libname}${release}.so$versuffix ${libname}${release}.so$major $libname.so' + soname_spec='${libname}${release}.so$major' + 
shlibpath_var=LD_LIBRARY_PATH + case $host_vendor in + sni) + shlibpath_overrides_runpath=no + ;; + motorola) + need_lib_prefix=no + need_version=no + shlibpath_overrides_runpath=no + sys_lib_search_path_spec='/lib /usr/lib /usr/ccs/lib' + ;; + esac + ;; + +uts4*) + version_type=linux + library_names_spec='${libname}${release}.so$versuffix ${libname}${release}.so$major $libname.so' + soname_spec='${libname}${release}.so$major' + shlibpath_var=LD_LIBRARY_PATH + ;; + +dgux*) + version_type=linux + need_lib_prefix=no + need_version=no + library_names_spec='${libname}${release}.so$versuffix ${libname}${release}.so$major $libname.so' + soname_spec='${libname}${release}.so$major' + shlibpath_var=LD_LIBRARY_PATH + ;; + +sysv4*MP*) + if test -d /usr/nec ;then + version_type=linux + library_names_spec='$libname.so.$versuffix $libname.so.$major $libname.so' + soname_spec='$libname.so.$major' + shlibpath_var=LD_LIBRARY_PATH + fi + ;; + +*) + dynamic_linker=no + ;; +esac +AC_MSG_RESULT([$dynamic_linker]) +test "$dynamic_linker" = no && can_build_shared=no +## +## END FIXME + +## FIXME: this should be a separate macro +## +# Report the final consequences. +AC_MSG_CHECKING([if libtool supports shared libraries]) +AC_MSG_RESULT([$can_build_shared]) +## +## END FIXME + +## FIXME: this should be a separate macro +## +AC_MSG_CHECKING([whether to build shared libraries]) +test "$can_build_shared" = "no" && enable_shared=no + +# On AIX, shared libraries and static libraries use the same namespace, and +# are all built from PIC. +case "$host_os" in +aix3*) + test "$enable_shared" = yes && enable_static=no + if test -n "$RANLIB"; then + archive_cmds="$archive_cmds~\$RANLIB \$lib" + postinstall_cmds='$RANLIB $lib' + fi + ;; + +aix4*) + if test "$host_cpu" != ia64 && test "$aix_use_runtimelinking" = no ; then + test "$enable_shared" = yes && enable_static=no + fi + ;; +esac +AC_MSG_RESULT([$enable_shared]) +## +## END FIXME + +## FIXME: this should be a separate macro +## +AC_MSG_CHECKING([whether to build static libraries]) +# Make sure either enable_shared or enable_static is yes. +test "$enable_shared" = yes || enable_static=yes +AC_MSG_RESULT([$enable_static]) +## +## END FIXME + +if test "$hardcode_action" = relink; then + # Fast installation is not supported + enable_fast_install=no +elif test "$shlibpath_overrides_runpath" = yes || + test "$enable_shared" = no; then + # Fast installation is not necessary + enable_fast_install=needless +fi + +variables_saved_for_relink="PATH $shlibpath_var $runpath_var" +if test "$GCC" = yes; then + variables_saved_for_relink="$variables_saved_for_relink GCC_EXEC_PREFIX COMPILER_PATH LIBRARY_PATH" +fi + +AC_LIBTOOL_DLOPEN_SELF + +## FIXME: this should be a separate macro +## +if test "$enable_shared" = yes && test "$GCC" = yes; then + case $archive_cmds in + *'~'*) + # FIXME: we may have to deal with multi-command sequences. + ;; + '$CC '*) + # Test whether the compiler implicitly links with -lc since on some + # systems, -lgcc has to come before -lc. If gcc already passes -lc + # to ld, don't add -lc before -lgcc. + AC_MSG_CHECKING([whether -lc should be explicitly linked in]) + AC_CACHE_VAL([lt_cv_archive_cmds_need_lc], + [$rm conftest* + echo 'static int dummy;' > conftest.$ac_ext + + if AC_TRY_EVAL(ac_compile); then + soname=conftest + lib=conftest + libobjs=conftest.$ac_objext + deplibs= + wl=$lt_cv_prog_cc_wl + compiler_flags=-v + linker_flags=-v + verstring= + output_objdir=. 
+ libname=conftest + save_allow_undefined_flag=$allow_undefined_flag + allow_undefined_flag= + if AC_TRY_EVAL(archive_cmds 2\>\&1 \| grep \" -lc \" \>/dev/null 2\>\&1) + then + lt_cv_archive_cmds_need_lc=no + else + lt_cv_archive_cmds_need_lc=yes + fi + allow_undefined_flag=$save_allow_undefined_flag + else + cat conftest.err 1>&5 + fi]) + AC_MSG_RESULT([$lt_cv_archive_cmds_need_lc]) + ;; + esac +fi +need_lc=${lt_cv_archive_cmds_need_lc-yes} +## +## END FIXME + +## FIXME: this should be a separate macro +## +# The second clause should only fire when bootstrapping the +# libtool distribution, otherwise you forgot to ship ltmain.sh +# with your package, and you will get complaints that there are +# no rules to generate ltmain.sh. +if test -f "$ltmain"; then + : +else + # If there is no Makefile yet, we rely on a make rule to execute + # `config.status --recheck' to rerun these tests and create the + # libtool script then. + test -f Makefile && make "$ltmain" +fi + +if test -f "$ltmain"; then + trap "$rm \"${ofile}T\"; exit 1" 1 2 15 + $rm -f "${ofile}T" + + echo creating $ofile + + # Now quote all the things that may contain metacharacters while being + # careful not to overquote the AC_SUBSTed values. We take copies of the + # variables and quote the copies for generation of the libtool script. + for var in echo old_CC old_CFLAGS \ + AR AR_FLAGS CC LD LN_S NM SHELL \ + reload_flag reload_cmds wl \ + pic_flag link_static_flag no_builtin_flag export_dynamic_flag_spec \ + thread_safe_flag_spec whole_archive_flag_spec libname_spec \ + library_names_spec soname_spec \ + RANLIB old_archive_cmds old_archive_from_new_cmds old_postinstall_cmds \ + old_postuninstall_cmds archive_cmds archive_expsym_cmds postinstall_cmds \ + postuninstall_cmds extract_expsyms_cmds old_archive_from_expsyms_cmds \ + old_striplib striplib file_magic_cmd export_symbols_cmds \ + deplibs_check_method allow_undefined_flag no_undefined_flag \ + finish_cmds finish_eval global_symbol_pipe global_symbol_to_cdecl \ + global_symbol_to_c_name_address \ + hardcode_libdir_flag_spec hardcode_libdir_separator \ + sys_lib_search_path_spec sys_lib_dlsearch_path_spec \ + compiler_c_o compiler_o_lo need_locks exclude_expsyms include_expsyms; do + + case $var in + reload_cmds | old_archive_cmds | old_archive_from_new_cmds | \ + old_postinstall_cmds | old_postuninstall_cmds | \ + export_symbols_cmds | archive_cmds | archive_expsym_cmds | \ + extract_expsyms_cmds | old_archive_from_expsyms_cmds | \ + postinstall_cmds | postuninstall_cmds | \ + finish_cmds | sys_lib_search_path_spec | sys_lib_dlsearch_path_spec) + # Double-quote double-evaled strings. + eval "lt_$var=\\\"\`\$echo \"X\$$var\" | \$Xsed -e \"\$double_quote_subst\" -e \"\$sed_quote_subst\" -e \"\$delay_variable_subst\"\`\\\"" + ;; + *) + eval "lt_$var=\\\"\`\$echo \"X\$$var\" | \$Xsed -e \"\$sed_quote_subst\"\`\\\"" + ;; + esac + done + + cat <<__EOF__ > "${ofile}T" +#! $SHELL + +# `$echo "$ofile" | sed 's%^.*/%%'` - Provide generalized library-building support services. +# Generated automatically by $PROGRAM (GNU $PACKAGE $VERSION$TIMESTAMP) +# NOTE: Changes made to this file will be lost: look at ltmain.sh. +# +# Copyright (C) 1996-2000 Free Software Foundation, Inc. +# Originally by Gordon Matzigkeit , 1996 +# +# This program is free software; you can redistribute it and/or modify +# it under the terms of the GNU General Public License as published by +# the Free Software Foundation; either version 2 of the License, or +# (at your option) any later version. 
+# +# This program is distributed in the hope that it will be useful, but +# WITHOUT ANY WARRANTY; without even the implied warranty of +# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU +# General Public License for more details. +# +# You should have received a copy of the GNU General Public License +# along with this program; if not, write to the Free Software +# Foundation, Inc., 59 Temple Place - Suite 330, Boston, MA 02111-1307, USA. +# +# As a special exception to the GNU General Public License, if you +# distribute this file as part of a program that contains a +# configuration script generated by Autoconf, you may include it under +# the same distribution terms that you use for the rest of that program. + +# Sed that helps us avoid accidentally triggering echo(1) options like -n. +Xsed="sed -e s/^X//" + +# The HP-UX ksh and POSIX shell print the target directory to stdout +# if CDPATH is set. +if test "X\${CDPATH+set}" = Xset; then CDPATH=:; export CDPATH; fi + +# ### BEGIN LIBTOOL CONFIG + +# Libtool was configured on host `(hostname || uname -n) 2>/dev/null | sed 1q`: + +# Shell to use when invoking shell scripts. +SHELL=$lt_SHELL + +# Whether or not to build shared libraries. +build_libtool_libs=$enable_shared + +# Whether or not to build static libraries. +build_old_libs=$enable_static + +# Whether or not to add -lc for building shared libraries. +build_libtool_need_lc=$need_lc + +# Whether or not to optimize for fast installation. +fast_install=$enable_fast_install + +# The host system. +host_alias=$host_alias +host=$host + +# An echo program that does not interpret backslashes. +echo=$lt_echo + +# The archiver. +AR=$lt_AR +AR_FLAGS=$lt_AR_FLAGS + +# The default C compiler. +CC=$lt_CC + +# Is the compiler the GNU C compiler? +with_gcc=$GCC + +# The linker used to build libraries. +LD=$lt_LD + +# Whether we need hard or soft links. +LN_S=$lt_LN_S + +# A BSD-compatible nm program. +NM=$lt_NM + +# A symbol stripping program +STRIP=$STRIP + +# Used to examine libraries when file_magic_cmd begins "file" +MAGIC_CMD=$MAGIC_CMD + +# Used on cygwin: DLL creation program. +DLLTOOL="$DLLTOOL" + +# Used on cygwin: object dumper. +OBJDUMP="$OBJDUMP" + +# Used on cygwin: assembler. +AS="$AS" + +# The name of the directory that contains temporary libtool files. +objdir=$objdir + +# How to create reloadable object files. +reload_flag=$lt_reload_flag +reload_cmds=$lt_reload_cmds + +# How to pass a linker flag through the compiler. +wl=$lt_wl + +# Object file suffix (normally "o"). +objext="$ac_objext" + +# Old archive suffix (normally "a"). +libext="$libext" + +# Executable file suffix (normally ""). +exeext="$exeext" + +# Additional compiler flags for building library objects. +pic_flag=$lt_pic_flag +pic_mode=$pic_mode + +# Does compiler simultaneously support -c and -o options? +compiler_c_o=$lt_compiler_c_o + +# Can we write directly to a .lo ? +compiler_o_lo=$lt_compiler_o_lo + +# Must we lock files when doing compilation ? +need_locks=$lt_need_locks + +# Do we need the lib prefix for modules? +need_lib_prefix=$need_lib_prefix + +# Do we need a version for libraries? +need_version=$need_version + +# Whether dlopen is supported. +dlopen_support=$enable_dlopen + +# Whether dlopen of programs is supported. +dlopen_self=$enable_dlopen_self + +# Whether dlopen of statically linked programs is supported. +dlopen_self_static=$enable_dlopen_self_static + +# Compiler flag to prevent dynamic linking. 
+link_static_flag=$lt_link_static_flag + +# Compiler flag to turn off builtin functions. +no_builtin_flag=$lt_no_builtin_flag + +# Compiler flag to allow reflexive dlopens. +export_dynamic_flag_spec=$lt_export_dynamic_flag_spec + +# Compiler flag to generate shared objects directly from archives. +whole_archive_flag_spec=$lt_whole_archive_flag_spec + +# Compiler flag to generate thread-safe objects. +thread_safe_flag_spec=$lt_thread_safe_flag_spec + +# Library versioning type. +version_type=$version_type + +# Format of library name prefix. +libname_spec=$lt_libname_spec + +# List of archive names. First name is the real one, the rest are links. +# The last name is the one that the linker finds with -lNAME. +library_names_spec=$lt_library_names_spec + +# The coded name of the library, if different from the real name. +soname_spec=$lt_soname_spec + +# Commands used to build and install an old-style archive. +RANLIB=$lt_RANLIB +old_archive_cmds=$lt_old_archive_cmds +old_postinstall_cmds=$lt_old_postinstall_cmds +old_postuninstall_cmds=$lt_old_postuninstall_cmds + +# Create an old-style archive from a shared archive. +old_archive_from_new_cmds=$lt_old_archive_from_new_cmds + +# Create a temporary old-style archive to link instead of a shared archive. +old_archive_from_expsyms_cmds=$lt_old_archive_from_expsyms_cmds + +# Commands used to build and install a shared archive. +archive_cmds=$lt_archive_cmds +archive_expsym_cmds=$lt_archive_expsym_cmds +postinstall_cmds=$lt_postinstall_cmds +postuninstall_cmds=$lt_postuninstall_cmds + +# Commands to strip libraries. +old_striplib=$lt_old_striplib +striplib=$lt_striplib + +# Method to check whether dependent libraries are shared objects. +deplibs_check_method=$lt_deplibs_check_method + +# Command to use when deplibs_check_method == file_magic. +file_magic_cmd=$lt_file_magic_cmd + +# Flag that allows shared libraries with undefined symbols to be built. +allow_undefined_flag=$lt_allow_undefined_flag + +# Flag that forces no undefined symbols. +no_undefined_flag=$lt_no_undefined_flag + +# Commands used to finish a libtool library installation in a directory. +finish_cmds=$lt_finish_cmds + +# Same as above, but a single script fragment to be evaled but not shown. +finish_eval=$lt_finish_eval + +# Take the output of nm and produce a listing of raw symbols and C names. +global_symbol_pipe=$lt_global_symbol_pipe + +# Transform the output of nm in a proper C declaration +global_symbol_to_cdecl=$lt_global_symbol_to_cdecl + +# Transform the output of nm in a C name address pair +global_symbol_to_c_name_address=$lt_global_symbol_to_c_name_address + +# This is the shared library runtime path variable. +runpath_var=$runpath_var + +# This is the shared library path variable. +shlibpath_var=$shlibpath_var + +# Is shlibpath searched before the hard-coded library search path? +shlibpath_overrides_runpath=$shlibpath_overrides_runpath + +# How to hardcode a shared library path into an executable. +hardcode_action=$hardcode_action + +# Whether we should hardcode library paths into libraries. +hardcode_into_libs=$hardcode_into_libs + +# Flag to hardcode \$libdir into a binary during linking. +# This must work even if \$libdir does not exist. +hardcode_libdir_flag_spec=$lt_hardcode_libdir_flag_spec + +# Whether we need a single -rpath flag with a separated argument. +hardcode_libdir_separator=$lt_hardcode_libdir_separator + +# Set to yes if using DIR/libNAME.so during linking hardcodes DIR into the +# resulting binary. 
+hardcode_direct=$hardcode_direct + +# Set to yes if using the -LDIR flag during linking hardcodes DIR into the +# resulting binary. +hardcode_minus_L=$hardcode_minus_L + +# Set to yes if using SHLIBPATH_VAR=DIR during linking hardcodes DIR into +# the resulting binary. +hardcode_shlibpath_var=$hardcode_shlibpath_var + +# Variables whose values should be saved in libtool wrapper scripts and +# restored at relink time. +variables_saved_for_relink="$variables_saved_for_relink" + +# Whether libtool must link a program against all its dependency libraries. +link_all_deplibs=$link_all_deplibs + +# Compile-time system search path for libraries +sys_lib_search_path_spec=$lt_sys_lib_search_path_spec + +# Run-time system search path for libraries +sys_lib_dlsearch_path_spec=$lt_sys_lib_dlsearch_path_spec + +# Fix the shell variable \$srcfile for the compiler. +fix_srcfile_path="$fix_srcfile_path" + +# Set to yes if exported symbols are required. +always_export_symbols=$always_export_symbols + +# The commands to list exported symbols. +export_symbols_cmds=$lt_export_symbols_cmds + +# The commands to extract the exported symbol list from a shared archive. +extract_expsyms_cmds=$lt_extract_expsyms_cmds + +# Symbols that should not be listed in the preloaded symbols. +exclude_expsyms=$lt_exclude_expsyms + +# Symbols that must always be exported. +include_expsyms=$lt_include_expsyms + +# ### END LIBTOOL CONFIG + +__EOF__ + + case $host_os in + aix3*) + cat <<\EOF >> "${ofile}T" + +# AIX sometimes has problems with the GCC collect2 program. For some +# reason, if we set the COLLECT_NAMES environment variable, the problems +# vanish in a puff of smoke. +if test "X${COLLECT_NAMES+set}" != Xset; then + COLLECT_NAMES= + export COLLECT_NAMES +fi +EOF + ;; + esac + + case $host_os in + cygwin* | mingw* | pw32* | os2*) + cat <<'EOF' >> "${ofile}T" + # This is a source program that is used to create dlls on Windows + # Don't remove nor modify the starting and closing comments +# /* ltdll.c starts here */ +# #define WIN32_LEAN_AND_MEAN +# #include +# #undef WIN32_LEAN_AND_MEAN +# #include +# +# #ifndef __CYGWIN__ +# # ifdef __CYGWIN32__ +# # define __CYGWIN__ __CYGWIN32__ +# # endif +# #endif +# +# #ifdef __cplusplus +# extern "C" { +# #endif +# BOOL APIENTRY DllMain (HINSTANCE hInst, DWORD reason, LPVOID reserved); +# #ifdef __cplusplus +# } +# #endif +# +# #ifdef __CYGWIN__ +# #include +# DECLARE_CYGWIN_DLL( DllMain ); +# #endif +# HINSTANCE __hDllInstance_base; +# +# BOOL APIENTRY +# DllMain (HINSTANCE hInst, DWORD reason, LPVOID reserved) +# { +# __hDllInstance_base = hInst; +# return TRUE; +# } +# /* ltdll.c ends here */ + # This is a source program that is used to create import libraries + # on Windows for dlls which lack them. Don't remove nor modify the + # starting and closing comments +# /* impgen.c starts here */ +# /* Copyright (C) 1999-2000 Free Software Foundation, Inc. +# +# This file is part of GNU libtool. +# +# This program is free software; you can redistribute it and/or modify +# it under the terms of the GNU General Public License as published by +# the Free Software Foundation; either version 2 of the License, or +# (at your option) any later version. +# +# This program is distributed in the hope that it will be useful, +# but WITHOUT ANY WARRANTY; without even the implied warranty of +# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the +# GNU General Public License for more details. 
+# +# You should have received a copy of the GNU General Public License +# along with this program; if not, write to the Free Software +# Foundation, Inc., 59 Temple Place - Suite 330, Boston, MA 02111-1307, USA. +# */ +# +# #include /* for printf() */ +# #include /* for open(), lseek(), read() */ +# #include /* for O_RDONLY, O_BINARY */ +# #include /* for strdup() */ +# +# /* O_BINARY isn't required (or even defined sometimes) under Unix */ +# #ifndef O_BINARY +# #define O_BINARY 0 +# #endif +# +# static unsigned int +# pe_get16 (fd, offset) +# int fd; +# int offset; +# { +# unsigned char b[2]; +# lseek (fd, offset, SEEK_SET); +# read (fd, b, 2); +# return b[0] + (b[1]<<8); +# } +# +# static unsigned int +# pe_get32 (fd, offset) +# int fd; +# int offset; +# { +# unsigned char b[4]; +# lseek (fd, offset, SEEK_SET); +# read (fd, b, 4); +# return b[0] + (b[1]<<8) + (b[2]<<16) + (b[3]<<24); +# } +# +# static unsigned int +# pe_as32 (ptr) +# void *ptr; +# { +# unsigned char *b = ptr; +# return b[0] + (b[1]<<8) + (b[2]<<16) + (b[3]<<24); +# } +# +# int +# main (argc, argv) +# int argc; +# char *argv[]; +# { +# int dll; +# unsigned long pe_header_offset, opthdr_ofs, num_entries, i; +# unsigned long export_rva, export_size, nsections, secptr, expptr; +# unsigned long name_rvas, nexp; +# unsigned char *expdata, *erva; +# char *filename, *dll_name; +# +# filename = argv[1]; +# +# dll = open(filename, O_RDONLY|O_BINARY); +# if (dll < 1) +# return 1; +# +# dll_name = filename; +# +# for (i=0; filename[i]; i++) +# if (filename[i] == '/' || filename[i] == '\\' || filename[i] == ':') +# dll_name = filename + i +1; +# +# pe_header_offset = pe_get32 (dll, 0x3c); +# opthdr_ofs = pe_header_offset + 4 + 20; +# num_entries = pe_get32 (dll, opthdr_ofs + 92); +# +# if (num_entries < 1) /* no exports */ +# return 1; +# +# export_rva = pe_get32 (dll, opthdr_ofs + 96); +# export_size = pe_get32 (dll, opthdr_ofs + 100); +# nsections = pe_get16 (dll, pe_header_offset + 4 +2); +# secptr = (pe_header_offset + 4 + 20 + +# pe_get16 (dll, pe_header_offset + 4 + 16)); +# +# expptr = 0; +# for (i = 0; i < nsections; i++) +# { +# char sname[8]; +# unsigned long secptr1 = secptr + 40 * i; +# unsigned long vaddr = pe_get32 (dll, secptr1 + 12); +# unsigned long vsize = pe_get32 (dll, secptr1 + 16); +# unsigned long fptr = pe_get32 (dll, secptr1 + 20); +# lseek(dll, secptr1, SEEK_SET); +# read(dll, sname, 8); +# if (vaddr <= export_rva && vaddr+vsize > export_rva) +# { +# expptr = fptr + (export_rva - vaddr); +# if (export_rva + export_size > vaddr + vsize) +# export_size = vsize - (export_rva - vaddr); +# break; +# } +# } +# +# expdata = (unsigned char*)malloc(export_size); +# lseek (dll, expptr, SEEK_SET); +# read (dll, expdata, export_size); +# erva = expdata - export_rva; +# +# nexp = pe_as32 (expdata+24); +# name_rvas = pe_as32 (expdata+32); +# +# printf ("EXPORTS\n"); +# for (i = 0; i> "${ofile}T" || (rm -f "${ofile}T"; exit 1) + + mv -f "${ofile}T" "$ofile" || \ + (rm -f "$ofile" && cp "${ofile}T" "$ofile" && rm -f "${ofile}T") + chmod +x "$ofile" +fi +## +## END FIXME + +])# _LT_AC_LTCONFIG_HACK + +# AC_LIBTOOL_DLOPEN - enable checks for dlopen support +AC_DEFUN([AC_LIBTOOL_DLOPEN], [AC_BEFORE([$0],[AC_LIBTOOL_SETUP])]) + +# AC_LIBTOOL_WIN32_DLL - declare package support for building win32 dll's +AC_DEFUN([AC_LIBTOOL_WIN32_DLL], [AC_BEFORE([$0], [AC_LIBTOOL_SETUP])]) + +# AC_ENABLE_SHARED - implement the --enable-shared flag +# Usage: AC_ENABLE_SHARED[(DEFAULT)] +# Where DEFAULT is either `yes' or `no'. 
If omitted, it defaults to +# `yes'. +AC_DEFUN([AC_ENABLE_SHARED], +[define([AC_ENABLE_SHARED_DEFAULT], ifelse($1, no, no, yes))dnl +AC_ARG_ENABLE(shared, +changequote(<<, >>)dnl +<< --enable-shared[=PKGS] build shared libraries [default=>>AC_ENABLE_SHARED_DEFAULT], +changequote([, ])dnl +[p=${PACKAGE-default} +case $enableval in +yes) enable_shared=yes ;; +no) enable_shared=no ;; +*) + enable_shared=no + # Look at the argument we got. We use all the common list separators. + IFS="${IFS= }"; ac_save_ifs="$IFS"; IFS="${IFS}:," + for pkg in $enableval; do + if test "X$pkg" = "X$p"; then + enable_shared=yes + fi + done + IFS="$ac_save_ifs" + ;; +esac], +enable_shared=AC_ENABLE_SHARED_DEFAULT)dnl +]) + +# AC_DISABLE_SHARED - set the default shared flag to --disable-shared +AC_DEFUN([AC_DISABLE_SHARED], +[AC_BEFORE([$0],[AC_LIBTOOL_SETUP])dnl +AC_ENABLE_SHARED(no)]) + +# AC_ENABLE_STATIC - implement the --enable-static flag +# Usage: AC_ENABLE_STATIC[(DEFAULT)] +# Where DEFAULT is either `yes' or `no'. If omitted, it defaults to +# `yes'. +AC_DEFUN([AC_ENABLE_STATIC], +[define([AC_ENABLE_STATIC_DEFAULT], ifelse($1, no, no, yes))dnl +AC_ARG_ENABLE(static, +changequote(<<, >>)dnl +<< --enable-static[=PKGS] build static libraries [default=>>AC_ENABLE_STATIC_DEFAULT], +changequote([, ])dnl +[p=${PACKAGE-default} +case $enableval in +yes) enable_static=yes ;; +no) enable_static=no ;; +*) + enable_static=no + # Look at the argument we got. We use all the common list separators. + IFS="${IFS= }"; ac_save_ifs="$IFS"; IFS="${IFS}:," + for pkg in $enableval; do + if test "X$pkg" = "X$p"; then + enable_static=yes + fi + done + IFS="$ac_save_ifs" + ;; +esac], +enable_static=AC_ENABLE_STATIC_DEFAULT)dnl +]) + +# AC_DISABLE_STATIC - set the default static flag to --disable-static +AC_DEFUN([AC_DISABLE_STATIC], +[AC_BEFORE([$0],[AC_LIBTOOL_SETUP])dnl +AC_ENABLE_STATIC(no)]) + + +# AC_ENABLE_FAST_INSTALL - implement the --enable-fast-install flag +# Usage: AC_ENABLE_FAST_INSTALL[(DEFAULT)] +# Where DEFAULT is either `yes' or `no'. If omitted, it defaults to +# `yes'. +AC_DEFUN([AC_ENABLE_FAST_INSTALL], +[define([AC_ENABLE_FAST_INSTALL_DEFAULT], ifelse($1, no, no, yes))dnl +AC_ARG_ENABLE(fast-install, +changequote(<<, >>)dnl +<< --enable-fast-install[=PKGS] optimize for fast installation [default=>>AC_ENABLE_FAST_INSTALL_DEFAULT], +changequote([, ])dnl +[p=${PACKAGE-default} +case $enableval in +yes) enable_fast_install=yes ;; +no) enable_fast_install=no ;; +*) + enable_fast_install=no + # Look at the argument we got. We use all the common list separators. + IFS="${IFS= }"; ac_save_ifs="$IFS"; IFS="${IFS}:," + for pkg in $enableval; do + if test "X$pkg" = "X$p"; then + enable_fast_install=yes + fi + done + IFS="$ac_save_ifs" + ;; +esac], +enable_fast_install=AC_ENABLE_FAST_INSTALL_DEFAULT)dnl +]) + +# AC_DISABLE_FAST_INSTALL - set the default to --disable-fast-install +AC_DEFUN([AC_DISABLE_FAST_INSTALL], +[AC_BEFORE([$0],[AC_LIBTOOL_SETUP])dnl +AC_ENABLE_FAST_INSTALL(no)]) + +# AC_LIBTOOL_PICMODE - implement the --with-pic flag +# Usage: AC_LIBTOOL_PICMODE[(MODE)] +# Where MODE is either `yes' or `no'. If omitted, it defaults to +# `both'. 
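+#
+# Illustrative sketch only (the file name passed to AC_INIT is a
+# placeholder, not part of this macro): a configure.in that wants libtool
+# to prefer PIC objects would call this before AC_PROG_LIBTOOL, e.g.
+#
+#   AC_INIT(src/main.c)
+#   AC_LIBTOOL_PICMODE(yes)
+#   AC_PROG_LIBTOOL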
+AC_DEFUN([AC_LIBTOOL_PICMODE], +[AC_BEFORE([$0],[AC_LIBTOOL_SETUP])dnl +pic_mode=ifelse($#,1,$1,default)]) + + +# AC_PATH_TOOL_PREFIX - find a file program which can recognise shared library +AC_DEFUN([AC_PATH_TOOL_PREFIX], +[AC_MSG_CHECKING([for $1]) +AC_CACHE_VAL(lt_cv_path_MAGIC_CMD, +[case $MAGIC_CMD in + /*) + lt_cv_path_MAGIC_CMD="$MAGIC_CMD" # Let the user override the test with a path. + ;; + ?:/*) + lt_cv_path_MAGIC_CMD="$MAGIC_CMD" # Let the user override the test with a dos path. + ;; + *) + ac_save_MAGIC_CMD="$MAGIC_CMD" + IFS="${IFS= }"; ac_save_ifs="$IFS"; IFS=":" +dnl $ac_dummy forces splitting on constant user-supplied paths. +dnl POSIX.2 word splitting is done only on the output of word expansions, +dnl not every word. This closes a longstanding sh security hole. + ac_dummy="ifelse([$2], , $PATH, [$2])" + for ac_dir in $ac_dummy; do + test -z "$ac_dir" && ac_dir=. + if test -f $ac_dir/$1; then + lt_cv_path_MAGIC_CMD="$ac_dir/$1" + if test -n "$file_magic_test_file"; then + case $deplibs_check_method in + "file_magic "*) + file_magic_regex="`expr \"$deplibs_check_method\" : \"file_magic \(.*\)\"`" + MAGIC_CMD="$lt_cv_path_MAGIC_CMD" + if eval $file_magic_cmd \$file_magic_test_file 2> /dev/null | + egrep "$file_magic_regex" > /dev/null; then + : + else + cat <&2 + +*** Warning: the command libtool uses to detect shared libraries, +*** $file_magic_cmd, produces output that libtool cannot recognize. +*** The result is that libtool may fail to recognize shared libraries +*** as such. This will affect the creation of libtool libraries that +*** depend on shared libraries, but programs linked with such libtool +*** libraries will work regardless of this problem. Nevertheless, you +*** may want to report the problem to your system manager and/or to +*** bug-libtool@gnu.org + +EOF + fi ;; + esac + fi + break + fi + done + IFS="$ac_save_ifs" + MAGIC_CMD="$ac_save_MAGIC_CMD" + ;; +esac]) +MAGIC_CMD="$lt_cv_path_MAGIC_CMD" +if test -n "$MAGIC_CMD"; then + AC_MSG_RESULT($MAGIC_CMD) +else + AC_MSG_RESULT(no) +fi +]) + + +# AC_PATH_MAGIC - find a file program which can recognise a shared library +AC_DEFUN([AC_PATH_MAGIC], +[AC_REQUIRE([AC_CHECK_TOOL_PREFIX])dnl +AC_PATH_TOOL_PREFIX(${ac_tool_prefix}file, /usr/bin:$PATH) +if test -z "$lt_cv_path_MAGIC_CMD"; then + if test -n "$ac_tool_prefix"; then + AC_PATH_TOOL_PREFIX(file, /usr/bin:$PATH) + else + MAGIC_CMD=: + fi +fi +]) + + +# AC_PROG_LD - find the path to the GNU or non-GNU linker +AC_DEFUN([AC_PROG_LD], +[AC_ARG_WITH(gnu-ld, +[ --with-gnu-ld assume the C compiler uses GNU ld [default=no]], +test "$withval" = no || with_gnu_ld=yes, with_gnu_ld=no) +AC_REQUIRE([AC_PROG_CC])dnl +AC_REQUIRE([AC_CANONICAL_HOST])dnl +AC_REQUIRE([AC_CANONICAL_BUILD])dnl +AC_REQUIRE([_LT_AC_LIBTOOL_SYS_PATH_SEPARATOR])dnl +ac_prog=ld +if test "$GCC" = yes; then + # Check if gcc -print-prog-name=ld gives a path. + AC_MSG_CHECKING([for ld used by GCC]) + case $host in + *-*-mingw*) + # gcc leaves a trailing carriage return which upsets mingw + ac_prog=`($CC -print-prog-name=ld) 2>&5 | tr -d '\015'` ;; + *) + ac_prog=`($CC -print-prog-name=ld) 2>&5` ;; + esac + case $ac_prog in + # Accept absolute paths. + [[\\/]]* | [[A-Za-z]]:[[\\/]]*) + re_direlt='/[[^/]][[^/]]*/\.\./' + # Canonicalize the path of ld + ac_prog=`echo $ac_prog| sed 's%\\\\%/%g'` + while echo $ac_prog | grep "$re_direlt" > /dev/null 2>&1; do + ac_prog=`echo $ac_prog| sed "s%$re_direlt%/%"` + done + test -z "$LD" && LD="$ac_prog" + ;; + "") + # If it fails, then pretend we aren't using GCC. 
+ ac_prog=ld + ;; + *) + # If it is relative, then search for the first ld in PATH. + with_gnu_ld=unknown + ;; + esac +elif test "$with_gnu_ld" = yes; then + AC_MSG_CHECKING([for GNU ld]) +else + AC_MSG_CHECKING([for non-GNU ld]) +fi +AC_CACHE_VAL(lt_cv_path_LD, +[if test -z "$LD"; then + IFS="${IFS= }"; ac_save_ifs="$IFS"; IFS=$PATH_SEPARATOR + for ac_dir in $PATH; do + test -z "$ac_dir" && ac_dir=. + if test -f "$ac_dir/$ac_prog" || test -f "$ac_dir/$ac_prog$ac_exeext"; then + lt_cv_path_LD="$ac_dir/$ac_prog" + # Check to see if the program is GNU ld. I'd rather use --version, + # but apparently some GNU ld's only accept -v. + # Break only if it was the GNU/non-GNU ld that we prefer. + if "$lt_cv_path_LD" -v 2>&1 < /dev/null | egrep '(GNU|with BFD)' > /dev/null; then + test "$with_gnu_ld" != no && break + else + test "$with_gnu_ld" != yes && break + fi + fi + done + IFS="$ac_save_ifs" +else + lt_cv_path_LD="$LD" # Let the user override the test with a path. +fi]) +LD="$lt_cv_path_LD" +if test -n "$LD"; then + AC_MSG_RESULT($LD) +else + AC_MSG_RESULT(no) +fi +test -z "$LD" && AC_MSG_ERROR([no acceptable ld found in \$PATH]) +AC_PROG_LD_GNU +]) + +# AC_PROG_LD_GNU - +AC_DEFUN([AC_PROG_LD_GNU], +[AC_CACHE_CHECK([if the linker ($LD) is GNU ld], lt_cv_prog_gnu_ld, +[# I'd rather use --version here, but apparently some GNU ld's only accept -v. +if $LD -v 2>&1 &5; then + lt_cv_prog_gnu_ld=yes +else + lt_cv_prog_gnu_ld=no +fi]) +with_gnu_ld=$lt_cv_prog_gnu_ld +]) + +# AC_PROG_LD_RELOAD_FLAG - find reload flag for linker +# -- PORTME Some linkers may need a different reload flag. +AC_DEFUN([AC_PROG_LD_RELOAD_FLAG], +[AC_CACHE_CHECK([for $LD option to reload object files], lt_cv_ld_reload_flag, +[lt_cv_ld_reload_flag='-r']) +reload_flag=$lt_cv_ld_reload_flag +test -n "$reload_flag" && reload_flag=" $reload_flag" +]) + +# AC_DEPLIBS_CHECK_METHOD - how to check for library dependencies +# -- PORTME fill in with the dynamic library characteristics +AC_DEFUN([AC_DEPLIBS_CHECK_METHOD], +[AC_CACHE_CHECK([how to recognise dependant libraries], +lt_cv_deplibs_check_method, +[lt_cv_file_magic_cmd='$MAGIC_CMD' +lt_cv_file_magic_test_file= +lt_cv_deplibs_check_method='unknown' +# Need to set the preceding variable on all platforms that support +# interlibrary dependencies. +# 'none' -- dependencies not supported. +# `unknown' -- same as none, but documents that we really don't know. +# 'pass_all' -- all dependencies passed with no checks. +# 'test_compile' -- check by making test program. +# 'file_magic [[regex]]' -- check by looking for files in library path +# which responds to the $file_magic_cmd with a given egrep regex. +# If you have `file' or equivalent on your system and you're not sure +# whether `pass_all' will *always* work, you probably want this one. + +case $host_os in +aix4* | aix5*) + lt_cv_deplibs_check_method=pass_all + ;; + +beos*) + lt_cv_deplibs_check_method=pass_all + ;; + +bsdi4*) + lt_cv_deplibs_check_method='file_magic ELF [[0-9]][[0-9]]*-bit [[ML]]SB (shared object|dynamic lib)' + lt_cv_file_magic_cmd='/usr/bin/file -L' + lt_cv_file_magic_test_file=/shlib/libc.so + ;; + +cygwin* | mingw* | pw32*) + lt_cv_deplibs_check_method='file_magic file format pei*-i386(.*architecture: i386)?' 
+ lt_cv_file_magic_cmd='$OBJDUMP -f' + ;; + +darwin* | rhapsody*) + lt_cv_deplibs_check_method='file_magic Mach-O dynamically linked shared library' + lt_cv_file_magic_cmd='/usr/bin/file -L' + case "$host_os" in + rhapsody* | darwin1.[[012]]) + lt_cv_file_magic_test_file=`echo /System/Library/Frameworks/System.framework/Versions/*/System | head -1` + ;; + *) # Darwin 1.3 on + lt_cv_file_magic_test_file='/usr/lib/libSystem.dylib' + ;; + esac + ;; + +freebsd*) + if echo __ELF__ | $CC -E - | grep __ELF__ > /dev/null; then + case $host_cpu in + i*86 ) + # Not sure whether the presence of OpenBSD here was a mistake. + # Let's accept both of them until this is cleared up. + lt_cv_deplibs_check_method='file_magic (FreeBSD|OpenBSD)/i[[3-9]]86 (compact )?demand paged shared library' + lt_cv_file_magic_cmd=/usr/bin/file + lt_cv_file_magic_test_file=`echo /usr/lib/libc.so.*` + ;; + esac + else + lt_cv_deplibs_check_method=pass_all + fi + ;; + +gnu*) + lt_cv_deplibs_check_method=pass_all + ;; + +hpux10.20*|hpux11*) + lt_cv_deplibs_check_method='file_magic (s[[0-9]][[0-9]][[0-9]]|PA-RISC[[0-9]].[[0-9]]) shared library' + lt_cv_file_magic_cmd=/usr/bin/file + lt_cv_file_magic_test_file=/usr/lib/libc.sl + ;; + +irix5* | irix6*) + case $host_os in + irix5*) + # this will be overridden with pass_all, but let us keep it just in case + lt_cv_deplibs_check_method="file_magic ELF 32-bit MSB dynamic lib MIPS - version 1" + ;; + *) + case $LD in + *-32|*"-32 ") libmagic=32-bit;; + *-n32|*"-n32 ") libmagic=N32;; + *-64|*"-64 ") libmagic=64-bit;; + *) libmagic=never-match;; + esac + # this will be overridden with pass_all, but let us keep it just in case + lt_cv_deplibs_check_method="file_magic ELF ${libmagic} MSB mips-[[1234]] dynamic lib MIPS - version 1" + ;; + esac + lt_cv_file_magic_test_file=`echo /lib${libsuff}/libc.so*` + lt_cv_deplibs_check_method=pass_all + ;; + +# This must be Linux ELF. 
+linux-gnu*) + case $host_cpu in + alpha* | hppa* | i*86 | powerpc* | sparc* | ia64* ) + lt_cv_deplibs_check_method=pass_all ;; + *) + # glibc up to 2.1.1 does not perform some relocations on ARM + lt_cv_deplibs_check_method='file_magic ELF [[0-9]][[0-9]]*-bit [[LM]]SB (shared object|dynamic lib )' ;; + esac + lt_cv_file_magic_test_file=`echo /lib/libc.so* /lib/libc-*.so` + ;; + +netbsd*) + if echo __ELF__ | $CC -E - | grep __ELF__ > /dev/null; then + lt_cv_deplibs_check_method='match_pattern /lib[[^/\.]]+\.so\.[[0-9]]+\.[[0-9]]+$' + else + lt_cv_deplibs_check_method='match_pattern /lib[[^/\.]]+\.so$' + fi + ;; + +newos6*) + lt_cv_deplibs_check_method='file_magic ELF [[0-9]][[0-9]]*-bit [[ML]]SB (executable|dynamic lib)' + lt_cv_file_magic_cmd=/usr/bin/file + lt_cv_file_magic_test_file=/usr/lib/libnls.so + ;; + +openbsd*) + lt_cv_file_magic_cmd=/usr/bin/file + lt_cv_file_magic_test_file=`echo /usr/lib/libc.so.*` + if test -z "`echo __ELF__ | $CC -E - | grep __ELF__`" || test "$host_os-$host_cpu" = "openbsd2.8-powerpc"; then + lt_cv_deplibs_check_method='file_magic ELF [[0-9]][[0-9]]*-bit [[LM]]SB shared object' + else + lt_cv_deplibs_check_method='file_magic OpenBSD.* shared library' + fi + ;; + +osf3* | osf4* | osf5*) + # this will be overridden with pass_all, but let us keep it just in case + lt_cv_deplibs_check_method='file_magic COFF format alpha shared library' + lt_cv_file_magic_test_file=/shlib/libc.so + lt_cv_deplibs_check_method=pass_all + ;; + +sco3.2v5*) + lt_cv_deplibs_check_method=pass_all + ;; + +solaris*) + lt_cv_deplibs_check_method=pass_all + lt_cv_file_magic_test_file=/lib/libc.so + ;; + +sysv5uw[[78]]* | sysv4*uw2*) + lt_cv_deplibs_check_method=pass_all + ;; + +sysv4 | sysv4.2uw2* | sysv4.3* | sysv5*) + case $host_vendor in + motorola) + lt_cv_deplibs_check_method='file_magic ELF [[0-9]][[0-9]]*-bit [[ML]]SB (shared object|dynamic lib) M[[0-9]][[0-9]]* Version [[0-9]]' + lt_cv_file_magic_test_file=`echo /usr/lib/libc.so*` + ;; + ncr) + lt_cv_deplibs_check_method=pass_all + ;; + sequent) + lt_cv_file_magic_cmd='/bin/file' + lt_cv_deplibs_check_method='file_magic ELF [[0-9]][[0-9]]*-bit [[LM]]SB (shared object|dynamic lib )' + ;; + sni) + lt_cv_file_magic_cmd='/bin/file' + lt_cv_deplibs_check_method="file_magic ELF [[0-9]][[0-9]]*-bit [[LM]]SB dynamic lib" + lt_cv_file_magic_test_file=/lib/libc.so + ;; + esac + ;; +esac +]) +file_magic_cmd=$lt_cv_file_magic_cmd +deplibs_check_method=$lt_cv_deplibs_check_method +]) + + +# AC_PROG_NM - find the path to a BSD-compatible name lister +AC_DEFUN([AC_PROG_NM], +[AC_REQUIRE([_LT_AC_LIBTOOL_SYS_PATH_SEPARATOR])dnl +AC_MSG_CHECKING([for BSD-compatible nm]) +AC_CACHE_VAL(lt_cv_path_NM, +[if test -n "$NM"; then + # Let the user override the test. + lt_cv_path_NM="$NM" +else + IFS="${IFS= }"; ac_save_ifs="$IFS"; IFS=$PATH_SEPARATOR + for ac_dir in $PATH /usr/ccs/bin /usr/ucb /bin; do + test -z "$ac_dir" && ac_dir=. + tmp_nm=$ac_dir/${ac_tool_prefix}nm + if test -f $tmp_nm || test -f $tmp_nm$ac_exeext ; then + # Check to see if the nm accepts a BSD-compat flag. 
+ # Adding the `sed 1q' prevents false positives on HP-UX, which says: + # nm: unknown option "B" ignored + # Tru64's nm complains that /dev/null is an invalid object file + if ($tmp_nm -B /dev/null 2>&1 | sed '1q'; exit 0) | egrep '(/dev/null|Invalid file or object type)' >/dev/null; then + lt_cv_path_NM="$tmp_nm -B" + break + elif ($tmp_nm -p /dev/null 2>&1 | sed '1q'; exit 0) | egrep /dev/null >/dev/null; then + lt_cv_path_NM="$tmp_nm -p" + break + else + lt_cv_path_NM=${lt_cv_path_NM="$tmp_nm"} # keep the first match, but + continue # so that we can try to find one that supports BSD flags + fi + fi + done + IFS="$ac_save_ifs" + test -z "$lt_cv_path_NM" && lt_cv_path_NM=nm +fi]) +NM="$lt_cv_path_NM" +AC_MSG_RESULT([$NM]) +]) + +# AC_CHECK_LIBM - check for math library +AC_DEFUN([AC_CHECK_LIBM], +[AC_REQUIRE([AC_CANONICAL_HOST])dnl +LIBM= +case $host in +*-*-beos* | *-*-cygwin* | *-*-pw32*) + # These system don't have libm + ;; +*-ncr-sysv4.3*) + AC_CHECK_LIB(mw, _mwvalidcheckl, LIBM="-lmw") + AC_CHECK_LIB(m, main, LIBM="$LIBM -lm") + ;; +*) + AC_CHECK_LIB(m, main, LIBM="-lm") + ;; +esac +]) + +# AC_LIBLTDL_CONVENIENCE[(dir)] - sets LIBLTDL to the link flags for +# the libltdl convenience library and INCLTDL to the include flags for +# the libltdl header and adds --enable-ltdl-convenience to the +# configure arguments. Note that LIBLTDL and INCLTDL are not +# AC_SUBSTed, nor is AC_CONFIG_SUBDIRS called. If DIR is not +# provided, it is assumed to be `libltdl'. LIBLTDL will be prefixed +# with '${top_builddir}/' and INCLTDL will be prefixed with +# '${top_srcdir}/' (note the single quotes!). If your package is not +# flat and you're not using automake, define top_builddir and +# top_srcdir appropriately in the Makefiles. +AC_DEFUN([AC_LIBLTDL_CONVENIENCE], +[AC_BEFORE([$0],[AC_LIBTOOL_SETUP])dnl + case $enable_ltdl_convenience in + no) AC_MSG_ERROR([this package needs a convenience libltdl]) ;; + "") enable_ltdl_convenience=yes + ac_configure_args="$ac_configure_args --enable-ltdl-convenience" ;; + esac + LIBLTDL='${top_builddir}/'ifelse($#,1,[$1],['libltdl'])/libltdlc.la + INCLTDL='-I${top_srcdir}/'ifelse($#,1,[$1],['libltdl']) +]) + +# AC_LIBLTDL_INSTALLABLE[(dir)] - sets LIBLTDL to the link flags for +# the libltdl installable library and INCLTDL to the include flags for +# the libltdl header and adds --enable-ltdl-install to the configure +# arguments. Note that LIBLTDL and INCLTDL are not AC_SUBSTed, nor is +# AC_CONFIG_SUBDIRS called. If DIR is not provided and an installed +# libltdl is not found, it is assumed to be `libltdl'. LIBLTDL will +# be prefixed with '${top_builddir}/' and INCLTDL will be prefixed +# with '${top_srcdir}/' (note the single quotes!). If your package is +# not flat and you're not using automake, define top_builddir and +# top_srcdir appropriately in the Makefiles. +# In the future, this macro may have to be called after AC_PROG_LIBTOOL. 
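+#
+# Illustrative sketch only (the directory name and Makefile usage below are
+# examples, not mandated by these macros): a package shipping libltdl in the
+# subdirectory `libltdl' might put the following in configure.in, since
+# LIBLTDL and INCLTDL are not AC_SUBSTed automatically:
+#
+#   AC_LIBLTDL_CONVENIENCE
+#   AC_SUBST(LIBLTDL)
+#   AC_SUBST(INCLTDL)
+#   AC_CONFIG_SUBDIRS(libltdl)
+#
+# and then link with $(LIBLTDL) and compile with $(INCLTDL) in its Makefiles.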
+AC_DEFUN([AC_LIBLTDL_INSTALLABLE], +[AC_BEFORE([$0],[AC_LIBTOOL_SETUP])dnl + AC_CHECK_LIB(ltdl, main, + [test x"$enable_ltdl_install" != xyes && enable_ltdl_install=no], + [if test x"$enable_ltdl_install" = xno; then + AC_MSG_WARN([libltdl not installed, but installation disabled]) + else + enable_ltdl_install=yes + fi + ]) + if test x"$enable_ltdl_install" = x"yes"; then + ac_configure_args="$ac_configure_args --enable-ltdl-install" + LIBLTDL='${top_builddir}/'ifelse($#,1,[$1],['libltdl'])/libltdl.la + INCLTDL='-I${top_srcdir}/'ifelse($#,1,[$1],['libltdl']) + else + ac_configure_args="$ac_configure_args --enable-ltdl-install=no" + LIBLTDL="-lltdl" + INCLTDL= + fi +]) + +# old names +AC_DEFUN([AM_PROG_LIBTOOL], [AC_PROG_LIBTOOL]) +AC_DEFUN([AM_ENABLE_SHARED], [AC_ENABLE_SHARED($@)]) +AC_DEFUN([AM_ENABLE_STATIC], [AC_ENABLE_STATIC($@)]) +AC_DEFUN([AM_DISABLE_SHARED], [AC_DISABLE_SHARED($@)]) +AC_DEFUN([AM_DISABLE_STATIC], [AC_DISABLE_STATIC($@)]) +AC_DEFUN([AM_PROG_LD], [AC_PROG_LD]) +AC_DEFUN([AM_PROG_NM], [AC_PROG_NM]) + +# This is just to silence aclocal about the macro not being used +ifelse([AC_DISABLE_FAST_INSTALL]) diff --git a/authprogs/Makefile b/authprogs/Makefile new file mode 100644 index 0000000..efb2751 --- /dev/null +++ b/authprogs/Makefile @@ -0,0 +1,115 @@ +## $Id: Makefile 7727 2008-04-06 07:59:46Z iulius $ + +include ../Makefile.global + +top = .. +CFLAGS = $(GCFLAGS) + +ALL = auth_smb ckpasswd domain ident radius $(KRB5_AUTH) + +LIBSMB = smbval/smbvalid.a + +LIBAUTH = libauth.o + +SOURCES = auth_krb5.c auth_smb.c ckpasswd.c domain.c ident.c libauth.c \ + radius.c + +all: $(ALL) + +warnings: + $(MAKE) COPT='$(WARNINGS)' all + +install: all + if [ x"$(KRB5_AUTH)" != x ] ; then \ + $(LI_XPUB) auth_krb5 $(D)$(PATHAUTHPASSWD)/auth_krb5 ; \ + fi + for F in auth_smb ckpasswd radius ; do \ + $(LI_XPUB) $$F $D$(PATHAUTHPASSWD)/$$F ; \ + done + for F in domain ident ; do \ + $(LI_XPUB) $$F $D$(PATHAUTHRESOLV)/$$F ; \ + done + +clobber clean distclean: + rm -f *.o $(ALL) + rm -rf .libs + cd smbval && $(MAKE) clean + +tags ctags: $(SOURCES) + $(CTAGS) $(SOURCES) ../lib/*.c ../include/*.h + +profiled: + $(MAKEPROFILING) all + + +## Compilation rules. + +LINK = $(LIBLD) $(LDFLAGS) -o $@ +CKLIBS = $(CRYPTLIB) $(SHADOWLIB) $(PAMLIB) $(DBMLIB) + +auth_krb5: auth_krb5.o $(LIBAUTH) $(LIBINN) + $(LINK) auth_krb5.o $(LIBAUTH) $(KRB5LIB) $(LIBINN) $(LIBS) + +auth_smb: auth_smb.o $(LIBSMB) $(LIBAUTH) $(LIBINN) + $(LINK) auth_smb.o $(LIBSMB) $(LIBAUTH) $(LIBINN) $(LIBS) + +ckpasswd: ckpasswd.o $(LIBAUTH) $(LIBINN) + $(LINK) ckpasswd.o $(LIBAUTH) $(CKLIBS) $(LIBINN) $(LIBS) + +domain: domain.o $(LIBAUTH) $(LIBINN) + $(LINK) domain.o $(LIBAUTH) $(LIBINN) $(LIBS) + +ident: ident.o $(LIBAUTH) $(LIBINN) + $(LINK) ident.o $(LIBAUTH) $(LIBINN) $(LIBS) + +radius: radius.o $(LIBAUTH) $(LIBINN) + $(LINK) radius.o $(LIBAUTH) $(LIBINN) $(LIBS) + +auth_krb5.o: auth_krb5.c + $(CC) $(CFLAGS) $(KRB5INC) -c auth_krb5.c + +ckpasswd.o: ckpasswd.c + $(CC) $(CFLAGS) $(DBMINC) -c ckpasswd.c + +$(LIBINN): ; (cd ../lib ; $(MAKE)) +$(LIBSMB): ; (cd smbval ; $(MAKE)) +$(LIBAUTH): libauth.h libauth.c + + +## Dependencies. Default list, below, is probably good enough. + +depend: Makefile $(SOURCES) + $(MAKEDEPEND) '$(CFLAGS)' $(SOURCES) + +# DO NOT DELETE THIS LINE -- make depend depends on it. 
+auth_krb5.o: auth_krb5.c ../include/config.h ../include/inn/defines.h \
+  ../include/inn/system.h ../include/clibrary.h ../include/config.h \
+  libauth.h ../include/portable/socket.h ../include/config.h \
+  ../include/inn/messages.h ../include/inn/defines.h ../include/libinn.h
+auth_smb.o: auth_smb.c ../include/config.h ../include/inn/defines.h \
+  ../include/inn/system.h ../include/clibrary.h ../include/config.h \
+  ../include/inn/messages.h ../include/inn/defines.h libauth.h \
+  ../include/portable/socket.h ../include/config.h smbval/valid.h
+ckpasswd.o: ckpasswd.c ../include/config.h ../include/inn/defines.h \
+  ../include/inn/system.h ../include/clibrary.h ../include/config.h \
+  ../include/inn/messages.h ../include/inn/defines.h ../include/inn/qio.h \
+  ../include/inn/vector.h ../include/libinn.h libauth.h \
+  ../include/portable/socket.h ../include/config.h
+domain.o: domain.c ../include/config.h ../include/inn/defines.h \
+  ../include/inn/system.h ../include/clibrary.h ../include/config.h \
+  ../include/inn/messages.h ../include/inn/defines.h ../include/libinn.h \
+  libauth.h ../include/portable/socket.h ../include/config.h
+ident.o: ident.c ../include/config.h ../include/inn/defines.h \
+  ../include/inn/system.h ../include/clibrary.h ../include/config.h \
+  ../include/inn/messages.h ../include/inn/defines.h ../include/libinn.h \
+  libauth.h ../include/portable/socket.h ../include/config.h
+libauth.o: libauth.c ../include/config.h ../include/inn/defines.h \
+  ../include/inn/system.h ../include/clibrary.h ../include/config.h \
+  ../include/libinn.h libauth.h ../include/portable/socket.h \
+  ../include/config.h ../include/inn/messages.h ../include/inn/defines.h
+radius.o: radius.c ../include/config.h ../include/inn/defines.h \
+  ../include/inn/system.h ../include/clibrary.h ../include/config.h \
+  ../include/portable/time.h ../include/config.h ../include/inn/innconf.h \
+  ../include/inn/defines.h ../include/inn/md5.h ../include/inn/messages.h \
+  ../include/libinn.h ../include/nntp.h ../include/paths.h \
+  ../include/conffile.h libauth.h ../include/portable/socket.h
diff --git a/authprogs/auth_krb5.c b/authprogs/auth_krb5.c
new file mode 100644
index 0000000..1088b8b
--- /dev/null
+++ b/authprogs/auth_krb5.c
@@ -0,0 +1,217 @@
+/* $Id: auth_krb5.c 7462 2005-12-12 01:06:54Z eagle $
+**
+** Check a username and password against Kerberos v5.
+**
+** Based on nnrpkrb5auth by Christopher P. Lindsey
+** See
+**
+** This program takes a username and password pair from nnrpd and checks
+** their validity against a Kerberos v5 KDC by attempting to obtain a
+** TGT. With the -i command line option, appends /<instance> to
+** the username prior to authentication.
+**
+** Special thanks to Von Welch for giving me the initial
+** code on which the Kerberos V authentication is based many years ago, and
+** for introducing me to Kerberos back in '96.
+**
+** Also, thanks to Graeme Mathieson for his inspiration
+** through the pamckpasswd program.
+*/
+
+#include "config.h"
+#include "clibrary.h"
+#include "libauth.h"
+#ifdef HAVE_ET_COM_ERR_H
+# include <et/com_err.h>
+#else
+# include <com_err.h>
+#endif
+
+/* krb5_get_in_tkt_with_password is deprecated. */
+#define KRB5_DEPRECATED 1
+#include <krb5.h>
+
+#include "inn/messages.h"
+#include "libinn.h"
+
+/*
+ * Default life of the ticket we are getting. Since we are just checking
+ * to see if the user can get one, it doesn't need a long lifetime.
+ */
+#define KRB5_DEFAULT_LIFE 60 * 5    /* 5 minutes */
+
+
+/*
+** Check the username and password by attempting to get a TGT.
Returns 1 on +** success and 0 on failure. Errors are reported via com_err. +*/ +static int +krb5_check_password (char *principal_name, char *password) +{ + krb5_context kcontext; + krb5_creds creds; + krb5_principal user_principal; + krb5_data *user_realm; + krb5_principal service_principal; + krb5_timestamp now; + krb5_address **addrs = (krb5_address **) NULL; /* Use default */ + long lifetime = KRB5_DEFAULT_LIFE; + int options = 0; + + /* TGT service name for convenience */ + krb5_data tgtname = { 0, KRB5_TGS_NAME_SIZE, KRB5_TGS_NAME }; + + krb5_preauthtype *preauth = NULL; + + krb5_error_code code; + + /* Our return code - 1 is success */ + int result = 0; + + /* Initialize our Kerberos state */ + code = krb5_init_context (&kcontext); + if (code) { + com_err (message_program_name, code, "initializing krb5 context"); + return 0; + } + +#ifdef HAVE_KRB5_INIT_ETS + /* Initialize krb5 error tables */ + krb5_init_ets (kcontext); +#endif + + /* Get current time */ + code = krb5_timeofday (kcontext, &now); + if (code) { + com_err (message_program_name, code, "getting time of day"); + return 0; + } + + /* Set up credentials to be filled in */ + memset (&creds, 0, sizeof(creds)); + + /* From here on, goto cleanup to exit */ + + /* Parse the username into a krb5 principal */ + if (!principal_name) { + com_err (message_program_name, 0, "passed NULL principal name"); + goto cleanup; + } + + code = krb5_parse_name (kcontext, principal_name, &user_principal); + if (code) { + com_err (message_program_name, code, + "parsing user principal name %.100s", principal_name); + goto cleanup; + } + + creds.client = user_principal; + + /* Get the user's realm for building service principal */ + user_realm = krb5_princ_realm (kcontext, user_principal); + + /* + * Build the service name into a principal. Right now this is + * a TGT for the user's realm. + */ + code = krb5_build_principal_ext (kcontext, + &service_principal, + user_realm->length, + user_realm->data, + tgtname.length, + tgtname.data, + user_realm->length, + user_realm->data, + 0 /* terminator */); + if (code) { + com_err(message_program_name, code, "building service principal name"); + goto cleanup; + } + + creds.server = service_principal; + + creds.times.starttime = 0; /* Now */ + creds.times.endtime = now + lifetime; + creds.times.renew_till = 0; /* Unrenewable */ + + /* DO IT */ + code = krb5_get_in_tkt_with_password (kcontext, + options, + addrs, + NULL, + preauth, + password, + 0, + &creds, + 0); + + /* We are done with password at this point... */ + + if (code) { + /* FAILURE - Parse a few common errors here */ + switch (code) { + case KRB5KRB_AP_ERR_BAD_INTEGRITY: + com_err (message_program_name, 0, "bad password for %.100s", + principal_name); + break; + case KRB5KDC_ERR_C_PRINCIPAL_UNKNOWN: + com_err (message_program_name, 0, "unknown user \"%.100s\"", + principal_name); + break; + default: + com_err (message_program_name, code, + "checking Kerberos password for %.100s", principal_name); + } + result = 0; + } else { + /* SUCCESS */ + result = 1; + } + + /* Cleanup */ + cleanup: + krb5_free_cred_contents (kcontext, &creds); + + return result; +} + +int +main (int argc, char *argv[]) +{ + struct auth_info *authinfo; + char *new_user; + + message_program_name = "auth_krb5"; + + /* Retrieve the username and passwd from nnrpd. */ + authinfo = get_auth_info(stdin); + + /* Must have a username/password, and no '@' in the address. 
@ checking + is there to prevent authentication against another Kerberos realm; there + should be a -r commandline option to make this check unnecessary + in the future. */ + if (authinfo == NULL) + die("no authentication information from nnrpd"); + if (authinfo->username[0] == '\0') + die("null username"); + if (strchr(authinfo->username, '@') != NULL) + die("username contains @, not allowed"); + + /* May need to prepend instance name if -i option was given. */ + if (argc > 1) { + if (argc == 3 && strcmp(argv[1], "-i") == 0) { + new_user = concat(authinfo->username, "/", argv[2], (char *) 0); + free(authinfo->username); + authinfo->username = new_user; + } else { + die("error parsing command-line options"); + } + } + + if (krb5_check_password(authinfo->username, authinfo->password)) { + printf("User:%s\r\n", authinfo->username); + exit(0); + } else { + die("failure validating password"); + } +} diff --git a/authprogs/auth_smb.c b/authprogs/auth_smb.c new file mode 100644 index 0000000..34fcc55 --- /dev/null +++ b/authprogs/auth_smb.c @@ -0,0 +1,66 @@ +/* + * Samba authenticator. + * usage: auth_smb [] + * + * Heavily based on: + * pam_smb -- David Airlie 1998-2000 v1.1.6 + * http://www.csn.ul.ie/~airlied + * + * Written 2000 October by Krischan Jodies + * + */ + +#include "config.h" +#include "clibrary.h" +#include "inn/messages.h" + +#include "libauth.h" +#include "smbval/valid.h" + +int +main(int argc, char *argv[]) +{ + struct auth_info *authinfo; + int result; + char *server, *backup, *domain; + + message_program_name = "auth_smb"; + + if ((argc > 4) || (argc < 3)) + die("wrong number of arguments" + " (auth_smb [] "); + + authinfo = get_auth_info(stdin); + if (authinfo == NULL) + die("no user information provided by nnrpd"); + + /* Got a username and password. Now check to see if they're valid. */ + server = argv[1]; + backup = (argc > 3) ? argv[2] : argv[1]; + domain = (argc > 3) ? argv[3] : argv[2]; + result = Valid_User(authinfo->username, authinfo->password, server, + backup, domain); + + /* Analyze the result. */ + switch (result) { + case NTV_NO_ERROR: + printf("User:%s\n", authinfo->username); + exit(0); + break; + case NTV_SERVER_ERROR: + die("server error"); + break; + case NTV_PROTOCOL_ERROR: + die("protocol error"); + break; + case NTV_LOGON_ERROR: + die("logon error"); + break; + default: + die("unknown error"); + break; + } + + /* Never reached. */ + return 1; +} diff --git a/authprogs/ckpasswd.c b/authprogs/ckpasswd.c new file mode 100644 index 0000000..e8f1db1 --- /dev/null +++ b/authprogs/ckpasswd.c @@ -0,0 +1,411 @@ +/* $Id: ckpasswd.c 7565 2006-08-28 02:42:54Z eagle $ +** +** The default username/password authenticator. +** +** This program is intended to be run by nnrpd and handle usernames and +** passwords. It can authenticate against a regular flat file (the type +** managed by htpasswd), a DBM file, the system password file or shadow file, +** or PAM. 
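+**
+** As an illustrative sketch only (the path and the auth group name are
+** examples, not part of this program), nnrpd would typically run ckpasswd
+** from a readers.conf auth group along these lines:
+**
+**     auth "localusers" {
+**         auth: "ckpasswd -f /usr/local/news/db/newsusers"
+**     }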
+*/
+
+#include "config.h"
+#include "clibrary.h"
+
+#include "inn/messages.h"
+#include "inn/qio.h"
+#include "inn/vector.h"
+#include "libinn.h"
+
+#include "libauth.h"
+
+#if HAVE_CRYPT_H
+# include <crypt.h>
+#endif
+#include <fcntl.h>
+#include <grp.h>
+#include <pwd.h>
+
+#if defined(HAVE_DBM) || defined(HAVE_BDB_DBM)
+# if HAVE_NDBM_H
+#  include <ndbm.h>
+# elif HAVE_BDB_DBM
+#  define DB_DBM_HSEARCH 1
+#  include <db.h>
+# elif HAVE_GDBM_NDBM_H
+#  include <gdbm/ndbm.h>
+# elif HAVE_DB1_NDBM_H
+#  include <db1/ndbm.h>
+# endif
+# define OPT_DBM "d:"
+#else
+# define OPT_DBM ""
+#endif
+
+#if HAVE_GETSPNAM
+# include <shadow.h>
+# define OPT_SHADOW "s"
+#else
+# define OPT_SHADOW ""
+#endif
+
+#if HAVE_PAM
+# if HAVE_PAM_PAM_APPL_H
+#  include <pam/pam_appl.h>
+# else
+#  include <security/pam_appl.h>
+# endif
+#endif
+
+
+/*
+** The PAM conversation function.
+**
+** Since we already have all the information and can't ask the user
+** questions, we can't quite follow the real PAM protocol. Instead, we just
+** return the password in response to every question that PAM asks. There
+** appears to be no generic way to determine whether the message in question
+** is indeed asking for the password....
+**
+** This function allocates an array of struct pam_response to return to the
+** PAM libraries that's never freed. For this program, this isn't much of an
+** issue, since it will likely only be called once and then the program will
+** exit. This function uses malloc and strdup instead of xmalloc and xstrdup
+** intentionally so that the PAM conversation will be closed cleanly if we
+** run out of memory rather than simply terminated.
+**
+** appdata_ptr contains the password we were given.
+*/
+#if HAVE_PAM
+static int
+pass_conv(int num_msg, const struct pam_message **msgm UNUSED,
+          struct pam_response **response, void *appdata_ptr)
+{
+    int i;
+
+    *response = malloc(num_msg * sizeof(struct pam_response));
+    if (*response == NULL)
+        return PAM_CONV_ERR;
+    for (i = 0; i < num_msg; i++) {
+        (*response)[i].resp = strdup((char *)appdata_ptr);
+        (*response)[i].resp_retcode = 0;
+    }
+    return PAM_SUCCESS;
+}
+#endif /* HAVE_PAM */
+
+
+/*
+** Authenticate a user via PAM.
+**
+** Attempts to authenticate a user with PAM, returning true if the user
+** successfully authenticates and false otherwise. Note that this function
+** doesn't attempt to handle any remapping of the authenticated user by the
+** PAM stack, but just assumes that the authenticated user was the same as
+** the username given.
+**
+** Right now, all failures are handled via die. This may be worth revisiting
+** in case we want to try other authentication methods if this fails for a
+** reason other than the system not having PAM support.
+*/
+#if !HAVE_PAM
+static bool
+auth_pam(char *username UNUSED, char *password UNUSED)
+{
+    return false;
+}
+#else
+static bool
+auth_pam(const char *username, char *password)
+{
+    pam_handle_t *pamh;
+    struct pam_conv conv;
+    int status;
+
+    conv.conv = pass_conv;
+    conv.appdata_ptr = password;
+    status = pam_start("nnrpd", username, &conv, &pamh);
+    if (status != PAM_SUCCESS)
+        die("pam_start failed: %s", pam_strerror(pamh, status));
+    status = pam_authenticate(pamh, PAM_SILENT);
+    if (status != PAM_SUCCESS)
+        die("pam_authenticate failed: %s", pam_strerror(pamh, status));
+    status = pam_acct_mgmt(pamh, PAM_SILENT);
+    if (status != PAM_SUCCESS)
+        die("pam_acct_mgmt failed: %s", pam_strerror(pamh, status));
+    status = pam_end(pamh, status);
+    if (status != PAM_SUCCESS)
+        die("pam_end failed: %s", pam_strerror(pamh, status));
+
+    /* If we get to here, the user successfully authenticated.
*/ + return true; +} +#endif /* HAVE_PAM */ + + +/* +** Try to get a password out of a dbm file. The dbm file should have the +** username for the key and the crypted password as the value. The crypted +** password, if found, is returned as a newly allocated string; otherwise, +** NULL is returned. +*/ +#if !(defined(HAVE_DBM) || defined(HAVE_BDB_DBM)) +static char * +password_dbm(char *user UNUSED, const char *file UNUSED) +{ + return NULL; +} +#else +static char * +password_dbm(char *name, const char *file) +{ + datum key, value; + DBM *database; + char *password; + + database = dbm_open(file, O_RDONLY, 0600); + if (database == NULL) + return NULL; + key.dptr = name; + key.dsize = strlen(name); + value = dbm_fetch(database, key); + if (value.dptr == NULL) { + dbm_close(database); + return NULL; + } + password = xmalloc(value.dsize + 1); + strlcpy(password, value.dptr, value.dsize + 1); + dbm_close(database); + return password; +} +#endif /* HAVE_DBM || HAVE_BDB_DBM */ + + +/* +** Try to get a password out of the system /etc/shadow file. The crypted +** password, if found, is returned as a newly allocated string; otherwise, +** NULL is returned. +*/ +#if !HAVE_GETSPNAM +static char * +password_shadow(const char *user UNUSED) +{ + return NULL; +} +#else +static char * +password_shadow(const char *user) +{ + struct spwd *spwd; + + spwd = getspnam(user); + if (spwd != NULL) + return xstrdup(spwd->sp_pwdp); + return NULL; +} +#endif /* HAVE_GETSPNAM */ + + +/* +** Try to get a password out of a file. The crypted password, if found, is +** returned as a newly allocated string; otherwise, NULL is returned. +*/ +static char * +password_file(const char *username, const char *file) +{ + QIOSTATE *qp; + char *line, *password; + struct cvector *info = NULL; + + qp = QIOopen(file); + if (qp == NULL) + return NULL; + for (line = QIOread(qp); line != NULL; line = QIOread(qp)) { + if (*line == '#' || *line == '\n') + continue; + info = cvector_split(line, ':', info); + if (info->count < 2 || strcmp(info->strings[0], username) != 0) + continue; + password = xstrdup(info->strings[1]); + QIOclose(qp); + cvector_free(info); + return password; + } + if (QIOtoolong(qp)) + die("line too long in %s", file); + if (QIOerror(qp)) + sysdie("error reading %s", file); + QIOclose(qp); + cvector_free(info); + return NULL; +} + + +/* +** Try to get a password out of the system password file. The crypted +** password, if found, is returned as a newly allocated string; otherwise, +** NULL is returned. +*/ +static char * +password_system(const char *username) +{ + struct passwd *pwd; + + pwd = getpwnam(username); + if (pwd != NULL) + return xstrdup(pwd->pw_passwd); + return NULL; +} + + +/* +** Try to get the name of a user's primary group out of the system group +** file. The group, if found, is returned as a newly allocated string; +** otherwise, NULL is returned. If the username is not found, NULL is +** returned. +*/ +static char * +group_system(const char *username) +{ + struct passwd *pwd; + struct group *gr; + + pwd = getpwnam(username); + if (pwd == NULL) + return NULL; + gr = getgrgid(pwd->pw_gid); + if (gr == NULL) + return NULL; + return xstrdup(gr->gr_name); +} + + +/* +** Output username (and group, if desired) in correct return format. 
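For a hypothetical user joe whose primary group is news, the line written back to nnrpd is one of:

    User:joe
    User:joe@news

the second form being produced when group lookup was requested (the wantgroup flag, set by the -g option handled in main() below).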
+*/ +static void +output_user(const char *username, bool wantgroup) +{ + if (wantgroup) { + char *group = group_system(username); + if (group == NULL) + die("group info for user %s not available", username); + printf("User:%s@%s\n", username, group); + } + else + printf("User:%s\n", username); +} + + +/* +** Main routine. +** +** We handle the variences between systems with #if blocks above, so that +** this code can look fairly clean. +*/ +int +main(int argc, char *argv[]) +{ + enum authtype { AUTH_NONE, AUTH_SHADOW, AUTH_FILE, AUTH_DBM }; + + int opt; + enum authtype type = AUTH_NONE; + bool wantgroup = false; + const char *filename = NULL; + struct auth_info *authinfo = NULL; + char *password = NULL; + + message_program_name = "ckpasswd"; + + while ((opt = getopt(argc, argv, "gf:u:p:" OPT_DBM OPT_SHADOW)) != -1) { + switch (opt) { + case 'g': + if (type == AUTH_DBM || type == AUTH_FILE) + die("-g option is incompatible with -d or -f"); + wantgroup = true; + break; + case 'd': + if (type != AUTH_NONE) + die("only one of -s, -f, or -d allowed"); + if (wantgroup) + die("-g option is incompatible with -d or -f"); + type = AUTH_DBM; + filename = optarg; + break; + case 'f': + if (type != AUTH_NONE) + die("only one of -s, -f, or -d allowed"); + if (wantgroup) + die("-g option is incompatible with -d or -f"); + type = AUTH_FILE; + filename = optarg; + break; + case 's': + if (type != AUTH_NONE) + die("only one of -s, -f, or -d allowed"); + type = AUTH_SHADOW; + break; + case 'u': + if (authinfo == NULL) { + authinfo = xmalloc(sizeof(struct auth_info)); + authinfo->password = NULL; + } + authinfo->username = optarg; + break; + case 'p': + if (authinfo == NULL) { + authinfo = xmalloc(sizeof(struct auth_info)); + authinfo->username = NULL; + } + authinfo->password = optarg; + break; + default: + exit(1); + } + } + if (argc != optind) + die("extra arguments given"); + if (authinfo != NULL && authinfo->username == NULL) + die("-u option is required if -p option is given"); + if (authinfo != NULL && authinfo->password == NULL) + die("-p option is required if -u option is given"); + + /* Unless a username or password was given on the command line, assume + we're being run by nnrpd. */ + if (authinfo == NULL) + authinfo = get_auth_info(stdin); + if (authinfo == NULL) + die("no authentication information from nnrpd"); + if (authinfo->username[0] == '\0') + die("null username"); + + /* Run the appropriate authentication routines. */ + switch (type) { + case AUTH_SHADOW: + password = password_shadow(authinfo->username); + if (password == NULL) + password = password_system(authinfo->username); + break; + case AUTH_FILE: + password = password_file(authinfo->username, filename); + break; + case AUTH_DBM: + password = password_dbm(authinfo->username, filename); + break; + case AUTH_NONE: + if (auth_pam(authinfo->username, authinfo->password)) { + output_user(authinfo->username, wantgroup); + exit(0); + } + password = password_system(authinfo->username); + break; + } + + if (password == NULL) + die("user %s unknown", authinfo->username); + if (strcmp(password, crypt(authinfo->password, password)) != 0) + die("invalid password for user %s", authinfo->username); + + /* The password matched. */ + output_user(authinfo->username, wantgroup); + exit(0); +} diff --git a/authprogs/domain.c b/authprogs/domain.c new file mode 100644 index 0000000..e4e0f4f --- /dev/null +++ b/authprogs/domain.c @@ -0,0 +1,49 @@ +/* $Id: domain.c 7141 2005-03-17 11:42:46Z vinocur $ +** +** Domain authenticator. 
+** +** Compares the domain of the client connection to the first argument given +** on the command line, and returns the host portion of the connecting host +** as the user if it matches. +*/ + +#include "config.h" +#include "clibrary.h" + +#include "inn/messages.h" +#include "libinn.h" +#include "libauth.h" + +int +main(int argc, char *argv[]) +{ + char *p, *host; + struct res_info *res; + + if (argc != 2) + die("Usage: domain "); + message_program_name = "domain"; + + /* Read the connection information from stdin. */ + res = get_res_info(stdin); + if (res == NULL) + die("did not get ClientHost data from nnrpd"); + host = res->clienthostname; + + /* Check the host against the provided domain. Allow the domain to be + specified both with and without a leading period; if without, make sure + that there is a period right before where it matches in the host. */ + p = strstr(host, argv[1]); + if (p == host) + die("host %s matches the domain exactly", host); + if (p == NULL || (argv[1][0] != '.' && p != host && *(p - 1) != '.')) + die("host %s didn't match domain %s", host, argv[1]); + + /* Peel off the portion of the host before where the provided domain + matches and return it as the user. */ + if (argv[1][0] != '.') + p--; + *p = '\0'; + printf("User:%s\n", host); + return 0; +} diff --git a/authprogs/ident.c b/authprogs/ident.c new file mode 100644 index 0000000..ac728e1 --- /dev/null +++ b/authprogs/ident.c @@ -0,0 +1,179 @@ +/* $Id: ident.c 6135 2003-01-19 01:15:40Z rra $ +** +** Ident authenticator. +*/ + +#include "config.h" +#include "clibrary.h" +#include +#include +#include +#include + +#include "inn/messages.h" +#include "libinn.h" + +#include "libauth.h" + +#define IDENT_PORT 113 + +static void out(int sig UNUSED) { + exit(1); +} + +int main(int argc, char *argv[]) +{ + struct servent *s; + char buf[2048]; + struct res_info *res; + struct sockaddr_in *lsin, *csin; +#ifdef HAVE_INET6 + struct sockaddr_storage *lss; + struct sockaddr_in6 *lsin6, *csin6; +#endif + int sock; + int opt; + int truncate_domain = 0; + char *iter; + char *p; + unsigned int got; + int lport, cport, identport; + char *endstr; + + message_program_name = "ident"; + + xsignal_norestart(SIGALRM,out); + alarm(15); + + s = getservbyname("ident", "tcp"); + if (!s) + identport = IDENT_PORT; + else + identport = ntohs(s->s_port); + + while ((opt = getopt(argc, argv, "p:t")) != -1) { + switch (opt) { + case 'p': + for (iter = optarg; *iter; iter++) + if (*iter < '0' || *iter > '9') + break; + if (*iter) { + /* not entirely numeric */ + s = getservbyname(optarg, "tcp"); + if (s == NULL) + die("cannot getsrvbyname(%s, tcp)", optarg); + identport = s->s_port; + } else + identport = atoi(optarg); + break; + case 't': + truncate_domain = 1; + break; + } + } + + /* read the connection info from stdin */ + res = get_res_info(stdin); + if (res == NULL) + die("did not get client information from nnrpd"); + +#ifdef HAVE_INET6 + lss = (struct sockaddr_storage *)(res->local); + lsin6 = (struct sockaddr_in6 *)(res->local); + csin6 = (struct sockaddr_in6 *)(res->client); + if( lss->ss_family == AF_INET6 ) + { + lport = ntohs( lsin6->sin6_port ); + lsin6->sin6_port = 0; + cport = ntohs( csin6->sin6_port ); + csin6->sin6_port = htons( identport ); + sock = socket(PF_INET6, SOCK_STREAM, 0); + } else +#endif + { + lsin = (struct sockaddr_in *)(res->local); + lport = htons( lsin->sin_port ); + lsin->sin_port = 0; + csin = (struct sockaddr_in *)(res->client); + cport = htons( csin->sin_port ); + csin->sin_port = htons( identport ); + 
sock = socket(PF_INET, SOCK_STREAM, 0); + } + if (sock < 0) + sysdie("cannot create socket"); + if (bind(sock, res->local, SA_LEN(res->local)) < 0) + sysdie("cannot bind socket"); + if (connect(sock, res->client, SA_LEN(res->local)) < 0) { + if (errno != ECONNREFUSED) + sysdie("cannot connect to ident server"); + else + sysdie("client host does not accept ident connections"); + } + free_res_info(res); + + /* send the request out */ + snprintf(buf, sizeof(buf), "%d , %d\r\n", cport, lport); + opt = xwrite(sock, buf, strlen(buf)); + if (opt < 0) + sysdie("cannot write to ident server"); + + /* get the answer back */ + got = 0; + do { + opt = read(sock, buf+got, sizeof(buf)-got); + if (opt < 0) + sysdie("cannot read from ident server"); + else if (!opt) + die("end of file from ident server before response"); + while (opt--) + if (buf[got] != '\n') + got++; + } while (buf[got] != '\n'); + buf[got] = '\0'; + if (buf[got-1] == '\r') + buf[got-1] = '\0'; + + /* buf now contains the entire ident response. */ + if (!(iter = strchr(buf, ':'))) + /* malformed response */ + die("malformed response \"%s\" from ident server", buf); + iter++; + + while (*iter && ISWHITE(*iter)) + iter++; + endstr = iter; + while (*endstr && *endstr != ':' && !ISWHITE(*endstr)) + endstr++; + if (!*endstr) + /* malformed response */ + die("malformed response \"%s\" from ident server", buf); + if (*endstr != ':') { + *endstr++ = '\0'; + while (*endstr != ':') + endstr++; + } + + *endstr = '\0'; + + if (strcmp(iter, "ERROR") == 0) + die("ident server reported an error"); + else if (strcmp(iter, "USERID") != 0) + die("ident server returned \"%s\", not USERID", iter); + + /* skip the operating system */ + if (!(iter = strchr(endstr+1, ':'))) + exit(1); + + /* everything else is username */ + iter++; + while (*iter && ISWHITE(*iter)) + iter++; + if (!*iter || *iter == '[') + /* null, or encrypted response */ + die("ident response is null or encrypted"); + if ((truncate_domain == 1) && ((p = strchr(iter, '@')) != NULL)) + *p = '\0'; + printf("User:%s\n", iter); + + exit(0); +} diff --git a/authprogs/libauth.c b/authprogs/libauth.c new file mode 100644 index 0000000..c99dcd9 --- /dev/null +++ b/authprogs/libauth.c @@ -0,0 +1,216 @@ +/* $Id: libauth.c 7500 2006-03-20 01:52:44Z eagle $ +** +** Common code for authenticators and resolvers. +** +** Collects common code to read information from nnrpd that should be done +** the same for all authenticators, and common code to get information about +** the incoming connection. +*/ + +#include "config.h" +#include "clibrary.h" +#include "libinn.h" + +#include "libauth.h" +#include "inn/messages.h" + +#define NAMESTR "ClientAuthname: " +#define PASSSTR "ClientPassword: " + +#define CLIHOST "ClientHost: " +#define CLIIP "ClientIP: " +#define CLIPORT "ClientPort: " +#define LOCIP "LocalIP: " +#define LOCPORT "LocalPort: " + +#ifdef HAVE_INET6 +# include +#endif + +/* Main loop. If res != NULL, expects to get resolver info from nnrpd, and + writes it into the struct. If auth != NULL, expects to get authentication + info from nnrpd, and writes it into the struct. 
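Concretely, the block an authenticator or resolver reads from nnrpd is a set of header-style lines using the prefixes defined above, terminated by a line consisting of a single period. A made-up example (all names, addresses, ports, and credentials here are placeholders):

    ClientHost: client.example.com
    ClientIP: 192.0.2.15
    ClientPort: 51440
    LocalIP: 192.0.2.1
    LocalPort: 119
    ClientAuthname: joe
    ClientPassword: secret
    .

Only the fields corresponding to whichever of res/auth the caller asked for are required; unrecognized lines are quietly ignored.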
*/ + +static bool +get_connection_info(FILE *stream, struct res_info *res, struct auth_info *auth) +{ + char buff[SMBUF]; + size_t length; + char *cip = NULL, *sip = NULL, *cport = NULL, *sport = NULL; +#ifdef HAVE_INET6 + struct addrinfo *r, hints; +#else + struct sockaddr_in *loc_sin, *cli_sin; +#endif + + /* Zero fields first (anything remaining NULL after is missing data) */ + if (res != NULL) { + res->clienthostname = NULL; + res->client = NULL; + res->local = NULL; + } + if (auth != NULL) { + auth->username = NULL; + auth->password = NULL; + } + + /* Read input from nnrpd a line at a time, stripping \r\n. */ + while (fgets(buff, sizeof(buff), stream) != NULL) { + length = strlen(buff); + if (length == 0 || buff[length - 1] != '\n') + goto error; + buff[length - 1] = '\0'; + if (length > 1 && buff[length - 2] == '\r') + buff[length - 2] = '\0'; + + /* Parse */ + if (strncmp(buff, ".", 2) == 0) + break; + else if (auth != NULL && strncmp(buff, NAMESTR, strlen(NAMESTR)) == 0) + auth->username = xstrdup(buff + strlen(NAMESTR)); + else if (auth != NULL && strncmp(buff, PASSSTR, strlen(PASSSTR)) == 0) + auth->password = xstrdup(buff + strlen(PASSSTR)); + else if (res != NULL && strncmp(buff, CLIHOST, strlen(CLIHOST)) == 0) + res->clienthostname = xstrdup(buff + strlen(CLIHOST)); + else if (res != NULL && strncmp(buff, CLIIP, strlen(CLIIP)) == 0) + cip = xstrdup(buff + strlen(CLIIP)); + else if (res != NULL && strncmp(buff, CLIPORT, strlen(CLIPORT)) == 0) + cport = xstrdup(buff + strlen(CLIPORT)); + else if (res != NULL && strncmp(buff, LOCIP, strlen(LOCIP)) == 0) + sip = xstrdup(buff + strlen(LOCIP)); + else if (res != NULL && strncmp(buff, LOCPORT, strlen(LOCPORT)) == 0) + sport = xstrdup(buff + strlen(LOCPORT)); + else { + /**** We just ignore excess fields for now ****/ + + /* warn("libauth: unexpected data from nnrpd: \"%s\"", buff); */ + /* goto error; */ + } + } + + /* If some field is missing, free the rest and error out. */ + if (auth != NULL && (auth->username == NULL || auth->password == NULL)) { + warn("libauth: requested authenticator data not sent by nnrpd"); + goto error; + } + if (res != NULL && (res->clienthostname == NULL || cip == NULL || + cport == NULL || sip == NULL || sport == NULL)) { + warn("libauth: requested resolver data not sent by nnrpd"); + goto error; + } + + /* Generate sockaddrs from IP and port strings */ + if (res != NULL) { +#ifdef HAVE_INET6 + /* sockaddr_in6 may be overkill for PF_INET case, but oh well */ + res->client = xcalloc(1, sizeof(struct sockaddr_in6)); + res->local = xcalloc(1, sizeof(struct sockaddr_in6)); + + memset( &hints, 0, sizeof( hints ) ); + hints.ai_flags = AI_NUMERICHOST; + hints.ai_socktype = SOCK_STREAM; + + hints.ai_family = strchr( cip, ':' ) != NULL ? PF_INET6 : PF_INET; + if( getaddrinfo( cip, cport, &hints, &r ) != 0) + goto error; + if( r->ai_addrlen > sizeof(struct sockaddr_in6) ) + goto error; + memcpy( res->client, r->ai_addr, r->ai_addrlen ); + freeaddrinfo( r ); + + hints.ai_family = strchr( sip, ':' ) != NULL ? 
PF_INET6 : PF_INET; + if( getaddrinfo( sip, sport, &hints, &r ) != 0) + goto error; + if( r->ai_addrlen > sizeof(struct sockaddr_in6) ) + goto error; + memcpy( res->local, r->ai_addr, r->ai_addrlen ); + freeaddrinfo( r ); +#else + res->client = xcalloc(1, sizeof(struct sockaddr_in)); + res->local = xcalloc(1, sizeof(struct sockaddr_in)); + + cli_sin = (struct sockaddr_in *)(res->client); + loc_sin = (struct sockaddr_in *)(res->local); + cli_sin->sin_family = AF_INET; + if (!inet_aton(cip, &cli_sin->sin_addr)) + goto error; + cli_sin->sin_port = htons( atoi(cport) ); + + loc_sin->sin_family = AF_INET; + if (!inet_aton(sip, &loc_sin->sin_addr)) + goto error; + loc_sin->sin_port = htons( atoi(sport) ); + +# ifdef HAVE_SOCKADDR_LEN + cli_sin->sin_len = sizeof(struct sockaddr_in); + loc_sin->sin_len = sizeof(struct sockaddr_in); +# endif +#endif + + free(sip); + free(sport); + free(cip); + free(cport); + } + + return true; + +error: + if (auth != NULL && auth->username != NULL) free(auth->username); + if (auth != NULL && auth->password != NULL) free(auth->password); + if (res != NULL && res->clienthostname != NULL) free(res->clienthostname); + if (res != NULL && res->client != NULL) free(res->client); + if (res != NULL && res->local != NULL) free(res->local); + if (sip != NULL) free(sip); + if (sport != NULL) free(sport); + if (cip != NULL) free(cip); + if (cport != NULL) free(cport); + return false; +} + + +/* Wrappers to read information from nnrpd, returning an allocated struct on + success. */ + +struct res_info * +get_res_info(FILE *stream) { + struct res_info *res = xmalloc(sizeof(struct res_info)); + + if(get_connection_info(stream, res, NULL)) + return res; + + free(res); + return NULL; +} + + +struct auth_info * +get_auth_info(FILE *stream) { + struct auth_info *auth = xmalloc(sizeof(struct auth_info)); + + if(get_connection_info(stream, NULL, auth)) + return auth; + + free(auth); + return NULL; +} + +void +free_res_info(struct res_info *res) { + if(res == NULL) + return; + if(res->client != NULL) free(res->client); + if(res->local != NULL) free(res->local); + if(res->clienthostname != NULL) free(res->clienthostname); + free(res); +} + +void +free_auth_info(struct auth_info *auth) { + if(auth == NULL) + return; + if(auth->username != NULL) free(auth->username); + if(auth->password != NULL) free(auth->password); + free(auth); +} + diff --git a/authprogs/libauth.h b/authprogs/libauth.h new file mode 100644 index 0000000..faa6520 --- /dev/null +++ b/authprogs/libauth.h @@ -0,0 +1,39 @@ +/* +** +** Common headers for authenticators and resolvers. +** +*/ + +#include "config.h" +#include "portable/socket.h" + +/* Holds the resolver information from nnrpd. */ +struct res_info { + struct sockaddr *client; + struct sockaddr *local; + char *clienthostname; +}; + +/* Holds the authentication information from nnrpd. */ +struct auth_info { + char *username; + char *password; +}; + +/* + * Reads connection information from a file descriptor (normally stdin, when + * talking to nnrpd) and returns a new res_info or auth_info struct, or + * returns NULL on failure. Note that the fields will never be NULL; if the + * corresponding information is missing, it is an error (which will be + * logged and NULL will be returned). The client is responsible for freeing + * the struct and its fields; this can be done by calling the appropriate + * destruction function below. 
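A minimal authenticator built on this interface looks roughly like the sketch below. It is only an outline of the description above: the program name is a placeholder, the real tools also include config.h and clibrary.h first, and the actual credential check goes where indicated.

    #include <stdio.h>

    #include "inn/messages.h"
    #include "libauth.h"

    int
    main(void)
    {
        struct auth_info *auth;

        message_program_name = "auth_example";   /* placeholder name */
        auth = get_auth_info(stdin);
        if (auth == NULL)
            die("no authentication information from nnrpd");

        /* ... validate auth->username / auth->password here ... */

        printf("User:%s\n", auth->username);     /* success reply read by nnrpd */
        free_auth_info(auth);
        return 0;
    }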
+ */ + +extern struct auth_info *get_auth_info(FILE *); +extern struct res_info *get_res_info (FILE *); + +extern void free_auth_info(struct auth_info*); +extern void free_res_info (struct res_info*); + + diff --git a/authprogs/radius.c b/authprogs/radius.c new file mode 100644 index 0000000..e6da348 --- /dev/null +++ b/authprogs/radius.c @@ -0,0 +1,564 @@ +/* $Id: radius.c 7745 2008-04-06 10:18:54Z iulius $ +** +** Authenticate a user against a remote radius server. +*/ + +#include "config.h" +#include "clibrary.h" +#include "portable/time.h" +#include +#include +#include +#include +#include + +/* Needed on AIX 4.1 to get fd_set and friends. */ +#if HAVE_SYS_SELECT_H +# include +#endif + +#include "inn/innconf.h" +#include "inn/md5.h" +#include "inn/messages.h" +#include "libinn.h" +#include "nntp.h" +#include "paths.h" +#include "conffile.h" + +#include "libauth.h" + +#define RADIUS_LOCAL_PORT NNTP_PORT + +#define AUTH_VECTOR_LEN 16 + +typedef struct _auth_req { + unsigned char code; + unsigned char id; + unsigned short length; + unsigned char vector[AUTH_VECTOR_LEN]; + unsigned char data[NNTP_STRLEN*2]; + int datalen; +} auth_req; + +typedef struct _rad_config_t { + char *secret; /* pseudo encryption thingy secret that radius uses */ + + char *radhost; /* parameters for talking to the remote radius sever */ + int radport; + char *lochost; + int locport; + + char *prefix, *suffix; /* futz with the username, if necessary */ + int ignore_source; + + struct _rad_config_t *next; /* point to any additional servers */ +} rad_config_t; + +typedef struct _sending_t { + auth_req req; + int reqlen; + struct sockaddr_in sinr; + struct _sending_t *next; +} sending_t; + +#define RADlbrace 1 +#define RADrbrace 2 +#define RADserver 10 +#define RADhost 11 +#define RADsecret 12 +#define RADport 13 +#define RADlochost 14 +#define RADlocport 15 +#define RADprefix 16 +#define RADsuffix 17 +#define RADsource 18 + +static CONFTOKEN radtoks[] = { + { RADlbrace, "{" }, + { RADrbrace, "}" }, + { RADserver, "server" }, + { RADhost, "radhost:" }, + { RADsecret, "secret:" }, + { RADport, "radport:" }, + { RADlochost, "lochost:" }, + { RADlocport, "locport:" }, + { RADprefix, "prefix:" }, + { RADsuffix, "suffix:" }, + { RADsource, "ignore-source:" }, + { 0, 0 } +}; + +static rad_config_t *get_radconf(void) +{ + rad_config_t *new; + + new = xcalloc(1, sizeof(rad_config_t)); + new->next = NULL; + + return new; +} + +static int read_config(char *authfile, rad_config_t *radconf) +{ + int inbrace; + rad_config_t *radconfig=NULL; + CONFFILE *file; + CONFTOKEN *token; + char *server; + int type; + char *iter; + + if ((file = CONFfopen(authfile)) == NULL) + sysdie("cannot open config file %s", authfile); + + inbrace = 0; + while ((token = CONFgettoken(radtoks, file)) != NULL) { + if (!inbrace) { + if (token->type != RADserver) + die("expected server keyword on line %d", file->lineno); + if ((token = CONFgettoken(0, file)) == NULL) + die("expected server name on line %d", file->lineno); + server = xstrdup(token->name); + if ((token = CONFgettoken(radtoks, file)) == NULL + || token->type != RADlbrace) + die("expected { on line %d", file->lineno); + inbrace = 1; + + if (radconfig == NULL) + radconfig = radconf; + else { + radconfig->next = get_radconf(); + radconfig = radconfig->next; + } + } + else { + type = token->type; + if (type == RADrbrace) + inbrace = 0; + else { + if ((token = CONFgettoken(0, file)) == NULL) + die("keyword with no value on line %d", file->lineno); + iter = token->name; + + /* what are we setting? 
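The file read here is a sequence of server blocks built from the keywords in the token table above; radhost: and secret: are required, the remaining settings are optional, and each additional server block is chained onto the list. A minimal, entirely hypothetical example (server name, host, secret, and port are placeholders):

    server example {
        radhost: radius.example.com
        secret: notasecret
        radport: 1812
        ignore-source: false
    }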
*/ + switch(type) { + case RADsecret: + if (radconfig->secret) continue; + radconfig->secret = xstrdup(iter); + break; + case RADhost: + if (radconfig->radhost) continue; + radconfig->radhost = xstrdup(iter); + break; + case RADport: + if (radconfig->radport) continue; + radconfig->radport = atoi(iter); + break; + case RADlochost: + if (radconfig->lochost) continue; + radconfig->lochost = xstrdup(iter); + break; + case RADlocport: + if (radconfig->locport) continue; + radconfig->locport = atoi(iter); + break; + case RADprefix: + if (radconfig->prefix) continue; + radconfig->prefix = xstrdup(iter); + break; + case RADsuffix: + if (radconfig->suffix) continue; + radconfig->suffix = xstrdup(iter); + break; + case RADsource: + if (!strcasecmp(iter, "true")) + radconfig->ignore_source = 1; + else if (!strcasecmp(iter, "false")) + radconfig->ignore_source = 0; + else + die("expected true or false after ignore-source on line %d", + file->lineno); + break; + default: + die("unknown keyword on line %d", file->lineno); + } + } + } + } + + CONFfclose(file); + + if (!radconf->radhost) + die("no radius host specified"); + else if (!radconf->secret) + die("no shared secret with radius host specified"); + + return(0); +} + +#define PW_AUTH_UDP_PORT 1645 + +#define PW_AUTHENTICATION_REQUEST 1 +#define PW_AUTHENTICATION_ACK 2 +#define PW_AUTHENTICATION_REJECT 3 + +#define PW_USER_NAME 1 +#define PW_PASSWORD 2 + +#define PW_SERVICE_TYPE 6 +#define PW_SERVICE_AUTH_ONLY 8 + +#define RAD_NAS_IP_ADDRESS 4 /* IP address */ +#define RAD_NAS_PORT 5 /* Integer */ + +static void req_copyto (auth_req to, sending_t *from) +{ + to = from->req; +} + +static void req_copyfrom (sending_t *to, auth_req from) +{ + to->req = from; +} + +static int rad_auth(rad_config_t *radconfig, char *uname, char *pass) +{ + auth_req req; + int i, j, jlen, passstart; + unsigned char secbuf[128]; + char hostname[SMBUF]; + unsigned char digest[MD5_DIGESTSIZE]; + struct timeval seed; + struct sockaddr_in sinl; + int sock; + struct hostent *hent; + int passlen; + time_t now, end; + struct timeval tmout; + int got; + fd_set rdfds; + uint32_t nvalue; + socklen_t slen; + int authtries= 3; /* number of times to try reaching the radius server */ + rad_config_t *config; + sending_t *reqtop, *sreq, *new; + int done; + + /* set up the linked list */ + config = radconfig; + + if (config == NULL) { + warn("no configuration file"); + return(-2); + } else { + /* setting sreq to NULL guarantees reqtop will be properly set later */ + sreq = NULL; + reqtop = NULL; + } + + while (config != NULL){ + new = xmalloc(sizeof(sending_t)); + new->next = NULL; + + if (sreq == NULL){ + reqtop = new; + sreq = new; + } else { + sreq->next = new; + sreq = sreq->next; + } + req_copyto(req, sreq); + + /* first, build the sockaddrs */ + memset(&sinl, '\0', sizeof(sinl)); + memset(&sreq->sinr, '\0', sizeof(sreq->sinr)); + sinl.sin_family = AF_INET; + sreq->sinr.sin_family = AF_INET; + if (config->lochost == NULL) { + if (gethostname(hostname, sizeof(hostname)) != 0) { + syswarn("cannot get local hostname"); + return(-2); + } + config->lochost = xstrdup(hostname); + } + if (config->lochost) { + if (inet_aton(config->lochost, &sinl.sin_addr) != 1) { + if ((hent = gethostbyname(config->lochost)) == NULL) { + warn("cannot gethostbyname lochost %s", config->lochost); + return(-2); + } + memcpy(&sinl.sin_addr.s_addr, hent->h_addr, + sizeof(struct in_addr)); + } + } + if (inet_aton(config->radhost, &sreq->sinr.sin_addr) != 1) { + if ((hent = gethostbyname(config->radhost)) == NULL) 
{ + warn("cannot gethostbyname radhost %s", config->radhost); + return(-2); + } + memcpy(&sreq->sinr.sin_addr.s_addr, hent->h_addr_list[0], + sizeof(struct in_addr)); + } + + if (config->radport) + sreq->sinr.sin_port = htons(config->radport); + else + sreq->sinr.sin_port = htons(PW_AUTH_UDP_PORT); + + /* seed the random number generator for the auth vector */ + gettimeofday(&seed, 0); + srandom((unsigned) seed.tv_sec+seed.tv_usec); + /* build the visible part of the auth vector randomly */ + for (i = 0; i < AUTH_VECTOR_LEN; i++) + req.vector[i] = random() % 256; + strlcpy((char *) secbuf, config->secret, sizeof(secbuf)); + memcpy(secbuf+strlen(config->secret), req.vector, AUTH_VECTOR_LEN); + md5_hash(secbuf, strlen(config->secret)+AUTH_VECTOR_LEN, digest); + /* fill in the auth_req data */ + req.code = PW_AUTHENTICATION_REQUEST; + req.id = 0; + + /* bracket the username in the configured prefix/suffix */ + req.data[0] = PW_USER_NAME; + req.data[1] = 2; + req.data[2] = '\0'; + if (config->prefix) { + req.data[1] += strlen(config->prefix); + strlcat((char *) &req.data[2], config->prefix, sizeof(req.data) - 2); + } + req.data[1] += strlen(uname); + strlcat((char *)&req.data[2], uname, sizeof(req.data) - 2); + if (!strchr(uname, '@') && config->suffix) { + req.data[1] += strlen(config->suffix); + strlcat((char *)&req.data[2], config->suffix, sizeof(req.data) - 2); + } + req.datalen = req.data[1]; + + /* set the password */ + passstart = req.datalen; + req.data[req.datalen] = PW_PASSWORD; + /* Null pad the password */ + passlen = (strlen(pass) + 15) / 16; + passlen *= 16; + req.data[req.datalen+1] = passlen+2; + strlcpy((char *)&req.data[req.datalen+2], pass, + sizeof(req.data) - req.datalen - 2); + passlen -= strlen(pass); + while (passlen--) + req.data[req.datalen+passlen+2+strlen(pass)] = '\0'; + req.datalen += req.data[req.datalen+1]; + + /* Add NAS_PORT and NAS_IP_ADDRESS into request */ + if ((nvalue = config->locport) == 0) + nvalue = RADIUS_LOCAL_PORT; + req.data[req.datalen++] = RAD_NAS_PORT; + req.data[req.datalen++] = sizeof(nvalue) + 2; + nvalue = htonl(nvalue); + memcpy(req.data + req.datalen, &nvalue, sizeof(nvalue)); + req.datalen += sizeof(nvalue); + req.data[req.datalen++] = RAD_NAS_IP_ADDRESS; + req.data[req.datalen++] = sizeof(struct in_addr) + 2; + memcpy(req.data + req.datalen, &sinl.sin_addr.s_addr, + sizeof(struct in_addr)); + req.datalen += sizeof(struct in_addr); + + /* we're only doing authentication */ + req.data[req.datalen] = PW_SERVICE_TYPE; + req.data[req.datalen+1] = 6; + req.data[req.datalen+2] = (PW_SERVICE_AUTH_ONLY >> 24) & 0x000000ff; + req.data[req.datalen+3] = (PW_SERVICE_AUTH_ONLY >> 16) & 0x000000ff; + req.data[req.datalen+4] = (PW_SERVICE_AUTH_ONLY >> 8) & 0x000000ff; + req.data[req.datalen+5] = PW_SERVICE_AUTH_ONLY & 0x000000ff; + req.datalen += req.data[req.datalen+1]; + + /* filled in the data, now we know what the actual length is. 
*/ + req.length = 4+AUTH_VECTOR_LEN+req.datalen; + + /* "encrypt" the password */ + for (i = 0; i < req.data[passstart+1]-2; i += sizeof(HASH)) { + jlen = sizeof(HASH); + if (req.data[passstart+1]-(unsigned)i-2 < sizeof(HASH)) + jlen = req.data[passstart+1]-i-2; + for (j = 0; j < jlen; j++) + req.data[passstart+2+i+j] ^= digest[j]; + if (jlen == sizeof(HASH)) { + /* Recalculate the digest from the HASHed previous */ + strlcpy((char *) secbuf, config->secret, sizeof(secbuf)); + memcpy(secbuf+strlen(config->secret), &req.data[passstart+2+i], + sizeof(HASH)); + md5_hash(secbuf, strlen(config->secret)+sizeof(HASH), digest); + } + } + sreq->reqlen = req.length; + req.length = htons(req.length); + + req_copyfrom(sreq, req); + + /* Go to the next record in the list */ + config = config->next; + } + + /* YAYY! The auth_req is ready to go! Build the reply socket and send out + * the message. */ + + /* now, build the sockets */ + if ((sock = socket(AF_INET, SOCK_DGRAM, 0)) < 0) { + syswarn("cannot build reply socket"); + return(-1); + } + if (bind(sock, (struct sockaddr*) &sinl, sizeof(sinl)) < 0) { + syswarn("cannot bind reply socket"); + close(sock); + return(-1); + } + + for(done = 0; authtries > 0 && !done; authtries--) { + for (config = radconfig, sreq = reqtop; sreq != NULL && !done; + config = config->next, sreq = sreq->next){ + req_copyto(req, sreq); + + /* send out the packet and wait for reply. */ + if (sendto(sock, (char *)&req, sreq->reqlen, 0, + (struct sockaddr*) &sreq->sinr, + sizeof (struct sockaddr_in)) < 0) { + syswarn("cannot send auth_reg"); + close(sock); + return(-1); + } + + /* wait 5 seconds maximum for a radius reply. */ + now = time(0); + end = now+5; + tmout.tv_sec = 6; + tmout.tv_usec = 0; + FD_ZERO(&rdfds); + /* store the old vector to verify next checksum */ + memcpy(secbuf+sizeof(req.vector), req.vector, sizeof(req.vector)); + FD_SET(sock, &rdfds); + got = select(sock+1, &rdfds, 0, 0, &tmout); + if (got < 0) { + syswarn("cannot not select"); + break; + } else if (got == 0) { + /* timer ran out */ + now = time(0); + tmout.tv_sec = end - now + 1; + tmout.tv_usec = 0; + continue; + } + slen = sizeof(sinl); + if ((jlen = recvfrom(sock, (char *)&req, sizeof(req)-sizeof(int), 0, + (struct sockaddr*) &sinl, &slen)) < 0) { + syswarn("cannot recvfrom"); + break; + } + if (!config->ignore_source) { + if (sinl.sin_addr.s_addr != sreq->sinr.sin_addr.s_addr || + (sinl.sin_port != sreq->sinr.sin_port)) { + warn("received unexpected UDP packet from %s:%d", + inet_ntoa(sinl.sin_addr), ntohs(sinl.sin_port)); + continue; + } + } + sreq->reqlen = ntohs(req.length); + if (jlen < 4+AUTH_VECTOR_LEN || jlen != sreq->reqlen) { + warn("received badly-sized packet"); + continue; + } + /* verify the checksum */ + memcpy(((char*)&req)+sreq->reqlen, config->secret, strlen(config->secret)); + memcpy(secbuf, req.vector, sizeof(req.vector)); + memcpy(req.vector, secbuf+sizeof(req.vector), sizeof(req.vector)); + md5_hash((unsigned char *)&req, strlen(config->secret)+sreq->reqlen, + digest); + if (memcmp(digest, secbuf, sizeof(HASH)) != 0) { + warn("checksum didn't match"); + continue; + } + /* FINALLY! Got back a known-good packet. See if we're in. */ + close(sock); + return (req.code == PW_AUTHENTICATION_ACK) ? 
0 : -1; + done = 1; + req_copyfrom(sreq, req); + break; + } + } + if (authtries == 0) + warn("cannot talk to remote radius server %s:%d", + inet_ntoa(sreq->sinr.sin_addr), ntohs(sreq->sinr.sin_port)); + return(-2); +} + +#define RAD_HAVE_HOST 1 +#define RAD_HAVE_PORT 2 +#define RAD_HAVE_PREFIX 4 +#define RAD_HAVE_SUFFIX 8 +#define RAD_HAVE_LOCHOST 16 +#define RAD_HAVE_LOCPORT 32 + +int main(int argc, char *argv[]) +{ + int opt; + int havefile, haveother; + struct auth_info *authinfo; + rad_config_t radconfig; + int retval; + char *radius_config; + + message_program_name = "radius"; + + if (!innconf_read(NULL)) + exit(1); + + memset(&radconfig, '\0', sizeof(rad_config_t)); + haveother = havefile = 0; + + while ((opt = getopt(argc, argv, "f:h")) != -1) { + switch (opt) { + case 'f': + if (haveother) + die("-f flag after another flag"); + if (!havefile) { + /* override the standard config completely if the user + * specifies an alternate config file */ + memset(&radconfig, '\0', sizeof(rad_config_t)); + havefile = 1; + } + read_config(optarg, &radconfig); + break; + case 'h': + printf("Usage: radius [-f config]\n"); + exit(0); + } + } + if (argc != optind) + exit(2); + if (!havefile) { + radius_config = concatpath(innconf->pathetc, _PATH_RADIUS_CONFIG); + read_config(radius_config, &radconfig); + + free(radius_config); + } + + authinfo = get_auth_info(stdin); + if (authinfo == NULL) + die("failed getting auth info"); + if (authinfo->username[0] == '\0') + die("empty username"); + + /* got username and password, check that they're valid */ + + retval = rad_auth(&radconfig, authinfo->username, authinfo->password); + if (retval == -1) + die("user %s password doesn't match", authinfo->username); + else if (retval == -2) + /* couldn't talk to the radius server.. output logged above. */ + exit(1); + else if (retval != 0) + die("unexpected return code from authentication function: %d", + retval); + + /* radius password matches! */ + printf("User:%s\n", authinfo->username); + exit(0); +} diff --git a/authprogs/smbval/Makefile b/authprogs/smbval/Makefile new file mode 100644 index 0000000..6bb6ef1 --- /dev/null +++ b/authprogs/smbval/Makefile @@ -0,0 +1,51 @@ +## $Id: Makefile 5789 2002-09-29 23:34:26Z rra $ + +include ../../Makefile.global + +top = ../.. +CFLAGS = $(GCFLAGS) + +ALL = smbvalid.a + +SOURCES = rfcnb-io.c rfcnb-util.c session.c smbdes.c \ + smbencrypt.c smblib-util.c smblib.c valid.c + +OBJECTS = $(SOURCES:.c=.o) + +all: $(ALL) + +warnings: + $(MAKE) COPT='$(WARNINGS)' all + +smbvalid.a: $(OBJECTS) + ar rc $@ $(OBJECTS) + $(RANLIB) $@ + +clobber clean distclean: + rm -f *.o smbvalid.a + +depend: Makefile $(SOURCES) + $(MAKEDEPEND) '$(CFLAGS)' $(SOURCES) + +# DO NOT DELETE THIS LINE -- make depend depends on it. 
+rfcnb-io.o: rfcnb-io.c ../../include/config.h \ + ../../include/inn/defines.h ../../include/clibrary.h rfcnb-priv.h \ + rfcnb-error.h rfcnb-common.h byteorder.h rfcnb-util.h rfcnb-io.h +rfcnb-util.o: rfcnb-util.c ../../include/config.h \ + ../../include/inn/defines.h ../../include/clibrary.h rfcnb-priv.h \ + rfcnb-error.h rfcnb-common.h byteorder.h rfcnb-util.h rfcnb-io.h +session.o: session.c ../../include/config.h \ + ../../include/inn/defines.h ../../include/clibrary.h rfcnb-priv.h \ + rfcnb-error.h rfcnb-common.h byteorder.h rfcnb-util.h +smbdes.o: smbdes.c +smbencrypt.o: smbencrypt.c ../../include/config.h \ + ../../include/inn/defines.h ../../include/clibrary.h smblib-priv.h \ + smblib-common.h byteorder.h +smblib-util.o: smblib-util.c ../../include/config.h \ + ../../include/inn/defines.h ../../include/clibrary.h smblib-priv.h \ + smblib-common.h byteorder.h rfcnb.h rfcnb-error.h rfcnb-common.h +smblib.o: smblib.c ../../include/config.h ../../include/inn/defines.h \ + ../../include/clibrary.h smblib-priv.h smblib-common.h byteorder.h \ + rfcnb.h rfcnb-error.h rfcnb-common.h +valid.o: valid.c ../../include/config.h ../../include/inn/defines.h \ + smblib-priv.h smblib-common.h byteorder.h valid.h diff --git a/authprogs/smbval/byteorder.h b/authprogs/smbval/byteorder.h new file mode 100644 index 0000000..9bea2b2 --- /dev/null +++ b/authprogs/smbval/byteorder.h @@ -0,0 +1,70 @@ +/* + Unix SMB/Netbios implementation. + Version 1.9. + SMB Byte handling + Copyright (C) Andrew Tridgell 1992-1995 + + This program is free software; you can redistribute it and/or modify + it under the terms of the GNU General Public License as published by + the Free Software Foundation; either version 2 of the License, or + (at your option) any later version. + + This program is distributed in the hope that it will be useful, + but WITHOUT ANY WARRANTY; without even the implied warranty of + MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the + GNU General Public License for more details. + + You should have received a copy of the GNU General Public License + along with this program; if not, write to the Free Software + Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA. 
+*/ + +/* + This file implements macros for machine independent short and + int manipulation +*/ + +#undef CAREFUL_ALIGNMENT + +/* we know that the 386 can handle misalignment and has the "right" + byteorder */ +#ifdef __i386__ +#define CAREFUL_ALIGNMENT 0 +#endif + +#ifndef CAREFUL_ALIGNMENT +#define CAREFUL_ALIGNMENT 1 +#endif + +#define CVAL(buf,pos) (((unsigned char *)(buf))[pos]) +#define PVAL(buf,pos) ((unsigned)CVAL(buf,pos)) +#define SCVAL(buf,pos,val) (CVAL(buf,pos) = (val)) + + +#if CAREFUL_ALIGNMENT +#define SVAL(buf,pos) (PVAL(buf,pos)|PVAL(buf,(pos)+1)<<8) +#define IVAL(buf,pos) (SVAL(buf,pos)|SVAL(buf,(pos)+2)<<16) +#define SSVALX(buf,pos,val) (CVAL(buf,pos)=(val)&0xFF,CVAL(buf,pos+1)=(val)>>8) +#define SIVALX(buf,pos,val) (SSVALX(buf,pos,val&0xFFFF),SSVALX(buf,pos+2,val>>16)) +#define SVALS(buf,pos) ((int16)SVAL(buf,pos)) +#define IVALS(buf,pos) ((int32)IVAL(buf,pos)) +#define SSVAL(buf,pos,val) SSVALX((buf),(pos),((uint16)(val))) +#define SIVAL(buf,pos,val) SIVALX((buf),(pos),((uint32)(val))) +#define SSVALS(buf,pos,val) SSVALX((buf),(pos),((int16)(val))) +#define SIVALS(buf,pos,val) SIVALX((buf),(pos),((int32)(val))) +#else +/* this handles things for architectures like the 386 that can handle + alignment errors */ +/* + WARNING: This section is dependent on the length of int16 and int32 + being correct +*/ +#define SVAL(buf,pos) (*(uint16 *)((char *)(buf) + (pos))) +#define IVAL(buf,pos) (*(uint32 *)((char *)(buf) + (pos))) +#define SVALS(buf,pos) (*(int16 *)((char *)(buf) + (pos))) +#define IVALS(buf,pos) (*(int32 *)((char *)(buf) + (pos))) +#define SSVAL(buf,pos,val) SVAL(buf,pos)=((uint16)(val)) +#define SIVAL(buf,pos,val) IVAL(buf,pos)=((uint32)(val)) +#define SSVALS(buf,pos,val) SVALS(buf,pos)=((int16)(val)) +#define SIVALS(buf,pos,val) IVALS(buf,pos)=((int32)(val)) +#endif diff --git a/authprogs/smbval/rfcnb-common.h b/authprogs/smbval/rfcnb-common.h new file mode 100644 index 0000000..ba09d7c --- /dev/null +++ b/authprogs/smbval/rfcnb-common.h @@ -0,0 +1,36 @@ +/* UNIX RFCNB (RFC1001/RFC1002) NetBIOS implementation + + Version 1.0 + RFCNB Common Structures etc Defines + + Copyright (C) Richard Sharpe 1996 + +*/ + +/* + This program is free software; you can redistribute it and/or modify + it under the terms of the GNU General Public License as published by + the Free Software Foundation; either version 2 of the License, or + (at your option) any later version. + + This program is distributed in the hope that it will be useful, + but WITHOUT ANY WARRANTY; without even the implied warranty of + MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the + GNU General Public License for more details. + + You should have received a copy of the GNU General Public License + along with this program; if not, write to the Free Software + Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA. 
+*/ + +/* A data structure we need */ + +typedef struct RFCNB_Pkt { + + char * data; /* The data in this portion */ + int len; + struct RFCNB_Pkt *next; + +} RFCNB_Pkt; + +void RFCNB_Free_Pkt(struct RFCNB_Pkt *pkt); diff --git a/authprogs/smbval/rfcnb-error.h b/authprogs/smbval/rfcnb-error.h new file mode 100644 index 0000000..afa1328 --- /dev/null +++ b/authprogs/smbval/rfcnb-error.h @@ -0,0 +1,48 @@ +/* UNIX RFCNB (RFC1001/RFC1002) NetBIOS implementation + + Version 1.0 + RFCNB Error Response Defines + + Copyright (C) Richard Sharpe 1996 + +*/ + +/* + This program is free software; you can redistribute it and/or modify + it under the terms of the GNU General Public License as published by + the Free Software Foundation; either version 2 of the License, or + (at your option) any later version. + + This program is distributed in the hope that it will be useful, + but WITHOUT ANY WARRANTY; without even the implied warranty of + MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the + GNU General Public License for more details. + + You should have received a copy of the GNU General Public License + along with this program; if not, write to the Free Software + Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA. +*/ + +/* Error responses */ + +#define RFCNBE_Bad -1 /* Bad response */ +#define RFCNBE_OK 0 + +/* these should follow the spec ... is there one ?*/ + +#define RFCNBE_NoSpace 1 /* Could not allocate space for a struct */ +#define RFCNBE_BadName 2 /* Could not translate a name */ +#define RFCNBE_BadRead 3 /* Read sys call failed */ +#define RFCNBE_BadWrite 4 /* Write Sys call failed */ +#define RFCNBE_ProtErr 5 /* Protocol Error */ +#define RFCNBE_ConGone 6 /* Connection dropped */ +#define RFCNBE_BadHandle 7 /* Handle passed was bad */ +#define RFCNBE_BadSocket 8 /* Problems creating socket */ +#define RFCNBE_ConnectFailed 9 /* Connect failed */ +#define RFCNBE_CallRejNLOCN 10 /* Call rejected, not listening on CN */ +#define RFCNBE_CallRejNLFCN 11 /* Call rejected, not listening for CN */ +#define RFCNBE_CallRejCNNP 12 /* Call rejected, called name not present */ +#define RFCNBE_CallRejInfRes 13/* Call rejetced, name ok, no resources */ +#define RFCNBE_CallRejUnSpec 14/* Call rejected, unspecified error */ +#define RFCNBE_BadParam 15/* Bad parameters passed ... */ +#define RFCNBE_Timeout 16/* IO Timed out */ diff --git a/authprogs/smbval/rfcnb-io.c b/authprogs/smbval/rfcnb-io.c new file mode 100644 index 0000000..3f030fa --- /dev/null +++ b/authprogs/smbval/rfcnb-io.c @@ -0,0 +1,310 @@ +/* UNIX RFCNB (RFC1001/RFC1002) NEtBIOS implementation + + Version 1.0 + RFCNB IO Routines ... + + Copyright (C) Richard Sharpe 1996 + +*/ + +/* + This program is free software; you can redistribute it and/or modify + it under the terms of the GNU General Public License as published by + the Free Software Foundation; either version 2 of the License, or + (at your option) any later version. + + This program is distributed in the hope that it will be useful, + but WITHOUT ANY WARRANTY; without even the implied warranty of + MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the + GNU General Public License for more details. + + You should have received a copy of the GNU General Public License + along with this program; if not, write to the Free Software + Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA. 
+*/ + +#include "config.h" +#include "clibrary.h" +#include +#include + +#include "rfcnb-priv.h" +#include "rfcnb-util.h" +#include "rfcnb-io.h" + +int RFCNB_Timeout = 0; /* Timeout in seconds ... */ + +/* Discard the rest of an incoming packet as we do not have space for it + in the buffer we allocated or were passed ... */ + +int RFCNB_Discard_Rest(struct RFCNB_Con *con, int len) + +{ char temp[100]; /* Read into here */ + int rest, this_read, bytes_read; + + /* len is the amount we should read */ + + rest = len; + + while (rest > 0) { + + this_read = (rest > sizeof(temp)?sizeof(temp):rest); + + bytes_read = read(con -> fd, temp, this_read); + + if (bytes_read <= 0) { /* Error so return */ + + if (bytes_read < 0) + RFCNB_errno = RFCNBE_BadRead; + else + RFCNB_errno = RFCNBE_ConGone; + + RFCNB_saved_errno = errno; + return(RFCNBE_Bad); + + } + + rest = rest - bytes_read; + + } + + return(0); + +} + + +/* Send an RFCNB packet to the connection. + + We just send each of the blocks linked together ... + + If we can, try to send it as one iovec ... + +*/ + +int RFCNB_Put_Pkt(struct RFCNB_Con *con, struct RFCNB_Pkt *pkt, int len) + +{ int len_sent, tot_sent, this_len; + struct RFCNB_Pkt *pkt_ptr; + char *this_data; + int i; + struct iovec io_list[10]; /* We should never have more */ + /* If we do, this will blow up ...*/ + + /* Try to send the data ... We only send as many bytes as len claims */ + /* We should try to stuff it into an IOVEC and send as one write */ + + + pkt_ptr = pkt; + len_sent = tot_sent = 0; /* Nothing sent so far */ + i = 0; + + while ((pkt_ptr != NULL) & (i < 10)) { /* Watch that magic number! */ + + this_len = pkt_ptr -> len; + this_data = pkt_ptr -> data; + if ((tot_sent + this_len) > len) + this_len = len - tot_sent; /* Adjust so we don't send too much */ + + /* Now plug into the iovec ... */ + + io_list[i].iov_len = this_len; + io_list[i].iov_base = this_data; + i++; + + tot_sent += this_len; + + if (tot_sent == len) break; /* Let's not send too much */ + + pkt_ptr = pkt_ptr -> next; + + } + + /* Set up an alarm if timeouts are set ... */ + + if (RFCNB_Timeout > 0) + alarm(RFCNB_Timeout); + + if ((len_sent = writev(con -> fd, io_list, i)) < 0) { /* An error */ + + con -> rfc_errno = errno; + if (errno == EINTR) /* We were interrupted ... */ + RFCNB_errno = RFCNBE_Timeout; + else + RFCNB_errno = RFCNBE_BadWrite; + RFCNB_saved_errno = errno; + return(RFCNBE_Bad); + + } + + if (len_sent < tot_sent) { /* Less than we wanted */ + if (errno == EINTR) /* We were interrupted */ + RFCNB_errno = RFCNBE_Timeout; + else + RFCNB_errno = RFCNBE_BadWrite; + RFCNB_saved_errno = errno; + return(RFCNBE_Bad); + } + + if (RFCNB_Timeout > 0) + alarm(0); /* Reset that sucker */ + + return(len_sent); + +} + +/* Read an RFCNB packet off the connection. + + We read the first 4 bytes, that tells us the length, then read the + rest. We should implement a timeout, but we don't just yet + +*/ + + +int RFCNB_Get_Pkt(struct RFCNB_Con *con, struct RFCNB_Pkt *pkt, int len) + +{ int read_len, pkt_len; + char hdr[RFCNB_Pkt_Hdr_Len]; /* Local space for the header */ + struct RFCNB_Pkt *pkt_frag; + int more, this_time, offset, frag_len, this_len; + bool seen_keep_alive = true; + + /* Read that header straight into the buffer */ + + if (len < RFCNB_Pkt_Hdr_Len) { /* What a bozo */ + + RFCNB_errno = RFCNBE_BadParam; + return(RFCNBE_Bad); + + } + + /* We discard keep alives here ... 
*/ + + if (RFCNB_Timeout > 0) + alarm(RFCNB_Timeout); + + while (seen_keep_alive) { + + if ((read_len = read(con -> fd, hdr, sizeof(hdr))) < 0) { /* Problems */ + if (errno == EINTR) + RFCNB_errno = RFCNBE_Timeout; + else + RFCNB_errno = RFCNBE_BadRead; + RFCNB_saved_errno = errno; + return(RFCNBE_Bad); + + } + + /* Now we check out what we got */ + + if (read_len == 0) { /* Connection closed, send back eof? */ + + if (errno == EINTR) + RFCNB_errno = RFCNBE_Timeout; + else + RFCNB_errno = RFCNBE_ConGone; + RFCNB_saved_errno = errno; + return(RFCNBE_Bad); + + } + + if (RFCNB_Pkt_Type(hdr) != RFCNB_SESSION_KEEP_ALIVE) { + seen_keep_alive = false; + } + + } + + /* What if we got less than or equal to a hdr size in bytes? */ + + if (read_len < sizeof(hdr)) { /* We got a small packet */ + + /* Now we need to copy the hdr portion we got into the supplied packet */ + + memcpy(pkt -> data, hdr, read_len); /*Copy data */ + + return(read_len); + + } + + /* Now, if we got at least a hdr size, alloc space for rest, if we need it */ + + pkt_len = RFCNB_Pkt_Len(hdr); + + /* Now copy in the hdr */ + + memcpy(pkt -> data, hdr, sizeof(hdr)); + + /* Get the rest of the packet ... first figure out how big our buf is? */ + /* And make sure that we handle the fragments properly ... Sure should */ + /* use an iovec ... */ + + if (len < pkt_len) /* Only get as much as we have space for */ + more = len - RFCNB_Pkt_Hdr_Len; + else + more = pkt_len; + + this_time = 0; + + /* We read for each fragment ... */ + + if (pkt -> len == read_len){ /* If this frag was exact size */ + pkt_frag = pkt -> next; /* Stick next lot in next frag */ + offset = 0; /* then we start at 0 in next */ + } + else { + pkt_frag = pkt; /* Otherwise use rest of this frag */ + offset = RFCNB_Pkt_Hdr_Len; /* Otherwise skip the header */ + } + + frag_len = pkt_frag -> len; + + if (more <= frag_len) /* If len left to get less than frag space */ + this_len = more; /* Get the rest ... */ + else + this_len = frag_len - offset; + + while (more > 0) { + + if ((this_time = read(con -> fd, (pkt_frag -> data) + offset, this_len)) <= 0) { /* Problems */ + + if (errno == EINTR) { + + RFCNB_errno = RFCNB_Timeout; + + } + else { + if (this_time < 0) + RFCNB_errno = RFCNBE_BadRead; + else + RFCNB_errno = RFCNBE_ConGone; + } + + RFCNB_saved_errno = errno; + return(RFCNBE_Bad); + + } + + read_len = read_len + this_time; /* How much have we read ... */ + + /* Now set up the next part */ + + if (pkt_frag -> next == NULL) break; /* That's it here */ + + pkt_frag = pkt_frag -> next; + this_len = pkt_frag -> len; + offset = 0; + + more = more - this_time; + + } + + if (read_len < (pkt_len + sizeof(hdr))) { /* Discard the rest */ + + return(RFCNB_Discard_Rest(con, (pkt_len + sizeof(hdr)) - read_len)); + + } + + if (RFCNB_Timeout > 0) + alarm(0); /* Reset that sucker */ + + return(read_len + sizeof(RFCNB_Hdr)); +} diff --git a/authprogs/smbval/rfcnb-io.h b/authprogs/smbval/rfcnb-io.h new file mode 100644 index 0000000..753c7ae --- /dev/null +++ b/authprogs/smbval/rfcnb-io.h @@ -0,0 +1,28 @@ +/* UNIX RFCNB (RFC1001/RFC1002) NetBIOS implementation + + Version 1.0 + RFCNB IO Routines Defines + + Copyright (C) Richard Sharpe 1996 + +*/ + +/* + This program is free software; you can redistribute it and/or modify + it under the terms of the GNU General Public License as published by + the Free Software Foundation; either version 2 of the License, or + (at your option) any later version. 
+ + This program is distributed in the hope that it will be useful, + but WITHOUT ANY WARRANTY; without even the implied warranty of + MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the + GNU General Public License for more details. + + You should have received a copy of the GNU General Public License + along with this program; if not, write to the Free Software + Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA. +*/ + +int RFCNB_Put_Pkt(struct RFCNB_Con *con, struct RFCNB_Pkt *pkt, int len); + +int RFCNB_Get_Pkt(struct RFCNB_Con *con, struct RFCNB_Pkt *pkt, int len); diff --git a/authprogs/smbval/rfcnb-priv.h b/authprogs/smbval/rfcnb-priv.h new file mode 100644 index 0000000..a6b9da8 --- /dev/null +++ b/authprogs/smbval/rfcnb-priv.h @@ -0,0 +1,115 @@ +/* UNIX RFCNB (RFC1001/RFC1002) NetBIOS implementation + + Version 1.0 + RFCNB Defines + + Copyright (C) Richard Sharpe 1996 + +*/ + +/* + This program is free software; you can redistribute it and/or modify + it under the terms of the GNU General Public License as published by + the Free Software Foundation; either version 2 of the License, or + (at your option) any later version. + + This program is distributed in the hope that it will be useful, + but WITHOUT ANY WARRANTY; without even the implied warranty of + MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the + GNU General Public License for more details. + + You should have received a copy of the GNU General Public License + along with this program; if not, write to the Free Software + Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA. +*/ + +/* Defines we need */ + +typedef unsigned short uint16; + +#define GLOBAL extern + +#include + +#include "rfcnb-error.h" +#include "rfcnb-common.h" +#include "byteorder.h" + +#define RFCNB_Default_Port 139 + +#define RFCNB_MAX_STATS 1 + +/* Protocol defines we need */ + +#define RFCNB_SESSION_MESSAGE 0 +#define RFCNB_SESSION_REQUEST 0x81 +#define RFCNB_SESSION_ACK 0x82 +#define RFCNB_SESSION_REJ 0x83 +#define RFCNB_SESSION_RETARGET 0x84 +#define RFCNB_SESSION_KEEP_ALIVE 0x85 + +/* Structures */ + +typedef struct redirect_addr * redirect_ptr; + +struct redirect_addr { + + struct in_addr ip_addr; + int port; + redirect_ptr next; + +}; + +typedef struct RFCNB_Con { + + int fd; /* File descripter for TCP/IP connection */ + int rfc_errno; /* last error */ + int timeout; /* How many milli-secs before IO times out */ + int redirects; /* How many times we were redirected */ + struct redirect_addr *redirect_list; /* First is first address */ + struct redirect_addr *last_addr; + +} RFCNB_Con; + +typedef char RFCNB_Hdr[4]; /* The header is 4 bytes long with */ + /* char[0] as the type, char[1] the */ + /* flags, and char[2..3] the length */ + +/* Macros to extract things from the header. 
These are for portability + between architecture types where we are worried about byte order */ + +#define RFCNB_Pkt_Hdr_Len 4 +#define RFCNB_Pkt_Sess_Len 72 +#define RFCNB_Pkt_Retarg_Len 10 +#define RFCNB_Pkt_Nack_Len 5 +#define RFCNB_Pkt_Type_Offset 0 +#define RFCNB_Pkt_Flags_Offset 1 +#define RFCNB_Pkt_Len_Offset 2 /* Length is 2 bytes plus a flag bit */ +#define RFCNB_Pkt_N1Len_Offset 4 +#define RFCNB_Pkt_Called_Offset 5 +#define RFCNB_Pkt_N2Len_Offset 38 +#define RFCNB_Pkt_Calling_Offset 39 +#define RFCNB_Pkt_Error_Offset 4 +#define RFCNB_Pkt_IP_Offset 4 +#define RFCNB_Pkt_Port_Offset 8 + +/* The next macro isolates the length of a packet, including the bit in the + flags */ + +#define RFCNB_Pkt_Len(p) (PVAL(p, 3) | (PVAL(p, 2) << 8) | \ + ((PVAL(p, RFCNB_Pkt_Flags_Offset) & 0x01) << 16)) + +#define RFCNB_Put_Pkt_Len(p, v) ((p)[1] = (((v) >> 16) & 1)); \ + ((p)[2] = (((v) >> 8) & 0xFF)); \ + ((p)[3] = ((v) & 0xFF)); + +#define RFCNB_Pkt_Type(p) (CVAL(p, RFCNB_Pkt_Type_Offset)) + +/* Static variables */ + +/* Only declare this if not defined */ + +#ifndef RFCNB_ERRNO +extern int RFCNB_errno; +extern int RFCNB_saved_errno; /* Save this from point of error */ +#endif diff --git a/authprogs/smbval/rfcnb-util.c b/authprogs/smbval/rfcnb-util.c new file mode 100644 index 0000000..9e57e4e --- /dev/null +++ b/authprogs/smbval/rfcnb-util.c @@ -0,0 +1,331 @@ +/* UNIX RFCNB (RFC1001/RFC1002) NetBIOS implementation + + Version 1.0 + RFCNB Utility Routines ... + + Copyright (C) Richard Sharpe 1996 + +*/ + +/* + This program is free software; you can redistribute it and/or modify + it under the terms of the GNU General Public License as published by + the Free Software Foundation; either version 2 of the License, or + (at your option) any later version. + + This program is distributed in the hope that it will be useful, + but WITHOUT ANY WARRANTY; without even the implied warranty of + MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the + GNU General Public License for more details. + + You should have received a copy of the GNU General Public License + along with this program; if not, write to the Free Software + Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA. 
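Returning to the header macros defined in rfcnb-priv.h above (built on the CVAL/PVAL byte accessors from byteorder.h): a worked example, assuming assert.h and those headers are available, is

    RFCNB_Hdr hdr = { 0x00, 0x00, 0x01, 0x2C };

    assert(RFCNB_Pkt_Type(hdr) == RFCNB_SESSION_MESSAGE); /* byte 0 is the type       */
    assert(RFCNB_Pkt_Len(hdr) == 300);                    /* 0x2C | (0x01 << 8) = 0x12C */

    RFCNB_Put_Pkt_Len(hdr, 300);                          /* stores 0x00 0x01 0x2C back
                                                             into bytes 1 through 3    */

that is, the 17-bit payload length is spread over the two length bytes plus the low bit of the flags byte.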
+*/ + +#include "config.h" +#include "clibrary.h" +#include "portable/socket.h" +#include +#include + +#include "rfcnb-priv.h" +#include "rfcnb-util.h" +#include "rfcnb-io.h" + +#ifndef INADDR_NONE +# define INADDR_NONE -1 +#endif + +extern void (*Prot_Print_Routine)(); /* Pointer to protocol print routine */ + +/* Convert name and pad to 16 chars as needed */ +/* Name 1 is a C string with null termination, name 2 may not be */ +/* If SysName is true, then put a <00> on end, else space> */ + +void RFCNB_CvtPad_Name(char *name1, char *name2) + +{ char c, c1, c2; + int i, len; + + len = strlen(name1); + + for (i = 0; i < 16; i++) { + + if (i >= len) { + + c1 = 'C'; c2 = 'A'; /* CA is a space */ + + } else { + + c = name1[i]; + c1 = (char)((int)c/16 + (int)'A'); + c2 = (char)((int)c%16 + (int)'A'); + } + + name2[i*2] = c1; + name2[i*2+1] = c2; + + } + + name2[32] = 0; /* Put in the nll ...*/ + +} + +/* Get a packet of size n */ + +struct RFCNB_Pkt *RFCNB_Alloc_Pkt(int n) + +{ RFCNB_Pkt *pkt; + + if ((pkt = (struct RFCNB_Pkt *)malloc(sizeof(struct RFCNB_Pkt))) == NULL) { + + RFCNB_errno = RFCNBE_NoSpace; + RFCNB_saved_errno = errno; + return(NULL); + + } + + pkt -> next = NULL; + pkt -> len = n; + + if (n == 0) return(pkt); + + if ((pkt -> data = (char *)malloc(n)) == NULL) { + + RFCNB_errno = RFCNBE_NoSpace; + RFCNB_saved_errno = errno; + free(pkt); + return(NULL); + + } + + return(pkt); + +} + +/* Free up a packet */ + +void RFCNB_Free_Pkt(struct RFCNB_Pkt *pkt) + +{ struct RFCNB_Pkt *pkt_next; char *data_ptr; + + while (pkt != NULL) { + + pkt_next = pkt -> next; + + data_ptr = pkt -> data; + + if (data_ptr != NULL) + free(data_ptr); + + free(pkt); + + pkt = pkt_next; + + } + +} + +/* Resolve a name into an address */ + +int RFCNB_Name_To_IP(char *host, struct in_addr *Dest_IP) + +{ int addr; /* Assumes IP4, 32 bit network addresses */ + struct hostent *hp; + + /* Use inet_addr to try to convert the address */ + + if ((addr = inet_addr(host)) == INADDR_NONE) { /* Oh well, a good try :-) */ + + /* Now try a name look up with gethostbyname */ + + if ((hp = gethostbyname(host)) == NULL) { /* Not in DNS */ + + /* Try NetBIOS name lookup, how the hell do we do that? */ + + RFCNB_errno = RFCNBE_BadName; /* Is this right? */ + RFCNB_saved_errno = errno; + return(RFCNBE_Bad); + + } + else { /* We got a name */ + + memcpy((void *)Dest_IP, (void *)hp -> h_addr_list[0], sizeof(struct in_addr)); + + } + } + else { /* It was an IP address */ + + memcpy((void *)Dest_IP, (void *)&addr, sizeof(struct in_addr)); + + } + + return 0; + +} + +/* Disconnect the TCP connection to the server */ + +int RFCNB_Close(int socket) + +{ + + close(socket); + + /* If we want to do error recovery, here is where we put it */ + + return 0; + +} + +/* Connect to the server specified in the IP address. + Not sure how to handle socket options etc. 
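For the name-encoding routine RFCNB_CvtPad_Name() above: each byte of the caller's name is split into two four-bit halves, each half is added to 'A', and names shorter than 16 bytes are padded with encoded spaces ("CA"). Using a made-up called name "INND":

    char wire[33];

    RFCNB_CvtPad_Name("INND", wire);
    /* wire now holds "EJEOEOEECACACACACACACACACACACACA":
       'I' (0x49) -> "EJ", 'N' (0x4E) -> "EO", 'D' (0x44) -> "EE",
       followed by twelve "CA" (encoded space) pads and a trailing NUL. */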
*/ + +int RFCNB_IP_Connect(struct in_addr Dest_IP, int port) + +{ struct sockaddr_in Socket; + int fd; + + /* Create a socket */ + + if ((fd = socket(PF_INET, SOCK_STREAM, 0)) < 0) { /* Handle the error */ + + RFCNB_errno = RFCNBE_BadSocket; + RFCNB_saved_errno = errno; + return(RFCNBE_Bad); + } + + memset(&Socket, 0, sizeof(Socket)); + memcpy((char *)&Socket.sin_addr, (char *)&Dest_IP, sizeof(Dest_IP)); + + Socket.sin_port = htons(port); + Socket.sin_family = PF_INET; + + /* Now connect to the destination */ + + if (connect(fd, (struct sockaddr *)&Socket, sizeof(Socket)) < 0) { /* Error */ + + close(fd); + RFCNB_errno = RFCNBE_ConnectFailed; + RFCNB_saved_errno = errno; + return(RFCNBE_Bad); + } + + return(fd); + +} + +/* handle the details of establishing the RFCNB session with remote + end + +*/ + +int RFCNB_Session_Req(struct RFCNB_Con *con, + char *Called_Name, + char *Calling_Name, + bool *redirect, + struct in_addr *Dest_IP, + int * port) + +{ char *sess_pkt; + + /* Response packet should be no more than 9 bytes, make 16 jic */ + + char resp[16]; + int len; + struct RFCNB_Pkt *pkt, res_pkt; + + /* We build and send the session request, then read the response */ + + pkt = RFCNB_Alloc_Pkt(RFCNB_Pkt_Sess_Len); + + if (pkt == NULL) { + + return(RFCNBE_Bad); /* Leave the error that RFCNB_Alloc_Pkt gives) */ + + } + + sess_pkt = pkt -> data; /* Get pointer to packet proper */ + + sess_pkt[RFCNB_Pkt_Type_Offset] = RFCNB_SESSION_REQUEST; + RFCNB_Put_Pkt_Len(sess_pkt, RFCNB_Pkt_Sess_Len-RFCNB_Pkt_Hdr_Len); + sess_pkt[RFCNB_Pkt_N1Len_Offset] = 32; + sess_pkt[RFCNB_Pkt_N2Len_Offset] = 32; + + RFCNB_CvtPad_Name(Called_Name, (sess_pkt + RFCNB_Pkt_Called_Offset)); + RFCNB_CvtPad_Name(Calling_Name, (sess_pkt + RFCNB_Pkt_Calling_Offset)); + + /* Now send the packet */ + + if ((len = RFCNB_Put_Pkt(con, pkt, RFCNB_Pkt_Sess_Len)) < 0) { + + return(RFCNBE_Bad); /* Should be able to write that lot ... */ + + } + + res_pkt.data = resp; + res_pkt.len = sizeof(resp); + res_pkt.next = NULL; + + if ((len = RFCNB_Get_Pkt(con, &res_pkt, sizeof(resp))) < 0) { + + return(RFCNBE_Bad); + + } + + /* Now analyze the packet ... */ + + switch (RFCNB_Pkt_Type(resp)) { + + case RFCNB_SESSION_REJ: /* Didnt like us ... too bad */ + + /* Why did we get rejected ? */ + + switch (CVAL(resp,RFCNB_Pkt_Error_Offset)) { + + case 0x80: + RFCNB_errno = RFCNBE_CallRejNLOCN; + break; + case 0x81: + RFCNB_errno = RFCNBE_CallRejNLFCN; + break; + case 0x82: + RFCNB_errno = RFCNBE_CallRejCNNP; + break; + case 0x83: + RFCNB_errno = RFCNBE_CallRejInfRes; + break; + case 0x8F: + RFCNB_errno = RFCNBE_CallRejUnSpec; + break; + default: + RFCNB_errno = RFCNBE_ProtErr; + break; + } + + return(RFCNBE_Bad); + break; + + case RFCNB_SESSION_ACK: /* Got what we wanted ... 
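/* Editor's illustration: the reject codes handled in the switch above are
   the RFC 1002 NEGATIVE SESSION RESPONSE error codes.  A small lookup such
   as this (the function name and strings are invented for the sketch) can
   turn them into log-friendly text alongside the RFCNBE_CallRej* values. */

static const char *rfcnb_reject_reason(unsigned char code)
{
    switch (code) {
    case 0x80: return "not listening on called name";
    case 0x81: return "not listening for calling name";
    case 0x82: return "called name not present";
    case 0x83: return "called name present, but insufficient resources";
    case 0x8F: return "unspecified error";
    default:   return "unknown reject code";
    }
}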
*/ + + return(0); + break; + + case RFCNB_SESSION_RETARGET: /* Go elsewhere */ + + *redirect = true; /* Copy port and ip addr */ + + memcpy(Dest_IP, (resp + RFCNB_Pkt_IP_Offset), sizeof(struct in_addr)); + *port = SVAL(resp, RFCNB_Pkt_Port_Offset); + + return(0); + break; + + default: /* A protocol error */ + + RFCNB_errno = RFCNBE_ProtErr; + return(RFCNBE_Bad); + break; + } +} diff --git a/authprogs/smbval/rfcnb-util.h b/authprogs/smbval/rfcnb-util.h new file mode 100644 index 0000000..1af7e5e --- /dev/null +++ b/authprogs/smbval/rfcnb-util.h @@ -0,0 +1,42 @@ +/* UNIX RFCNB (RFC1001/RFC1002) NetBIOS implementation + + Version 1.0 + RFCNB Utility Defines + + Copyright (C) Richard Sharpe 1996 + +*/ + +/* + This program is free software; you can redistribute it and/or modify + it under the terms of the GNU General Public License as published by + the Free Software Foundation; either version 2 of the License, or + (at your option) any later version. + + This program is distributed in the hope that it will be useful, + but WITHOUT ANY WARRANTY; without even the implied warranty of + MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the + GNU General Public License for more details. + + You should have received a copy of the GNU General Public License + along with this program; if not, write to the Free Software + Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA. +*/ + +void RFCNB_CvtPad_Name(char *name1, char *name2); + +struct RFCNB_Pkt *RFCNB_Alloc_Pkt(int n); + +int RFCNB_Name_To_IP(char *host, struct in_addr *Dest_IP); + +int RFCNB_Close(int socket); + +int RFCNB_IP_Connect(struct in_addr Dest_IP, int port); + +int RFCNB_Session_Req(struct RFCNB_Con *con, + char *Called_Name, + char *Calling_Name, + bool *redirect, + struct in_addr *Dest_IP, + int * port); + diff --git a/authprogs/smbval/rfcnb.h b/authprogs/smbval/rfcnb.h new file mode 100644 index 0000000..8c2ea1c --- /dev/null +++ b/authprogs/smbval/rfcnb.h @@ -0,0 +1,48 @@ +/* UNIX RFCNB (RFC1001/RFC1002) NetBIOS implementation + + Version 1.0 + RFCNB Defines + + Copyright (C) Richard Sharpe 1996 + +*/ + +/* + This program is free software; you can redistribute it and/or modify + it under the terms of the GNU General Public License as published by + the Free Software Foundation; either version 2 of the License, or + (at your option) any later version. + + This program is distributed in the hope that it will be useful, + but WITHOUT ANY WARRANTY; without even the implied warranty of + MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the + GNU General Public License for more details. + + You should have received a copy of the GNU General Public License + along with this program; if not, write to the Free Software + Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA. 
+*/ + +/* Error responses */ + +#include "rfcnb-error.h" +#include "rfcnb-common.h" + +/* Defines we need */ + +#define RFCNB_Default_Port 139 + +/* Definition of routines we define */ + +void *RFCNB_Call(char *Called_Name, char *Calling_Name, char *Called_Address, + int port); + +int RFCNB_Send(void *Con_Handle, struct RFCNB_Pkt *Data, int Length); + +int RFCNB_Recv(void *Con_Handle, struct RFCNB_Pkt *Data, int Length); + +int RFCNB_Hangup(void *con_Handle); + +void *RFCNB_Listen(); + +struct RFCNB_Pkt *RFCNB_Alloc_Pkt(int n); diff --git a/authprogs/smbval/session.c b/authprogs/smbval/session.c new file mode 100644 index 0000000..ec35bcd --- /dev/null +++ b/authprogs/smbval/session.c @@ -0,0 +1,304 @@ +/* UNIX RFCNB (RFC1001/RFC1002) NetBIOS implementation + + Version 1.0 + Session Routines ... + + Copyright (C) Richard Sharpe 1996 + +*/ + +/* + This program is free software; you can redistribute it and/or modify + it under the terms of the GNU General Public License as published by + the Free Software Foundation; either version 2 of the License, or + (at your option) any later version. + + This program is distributed in the hope that it will be useful, + but WITHOUT ANY WARRANTY; without even the implied warranty of + MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the + GNU General Public License for more details. + + You should have received a copy of the GNU General Public License + along with this program; if not, write to the Free Software + Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA. +*/ + +#include "config.h" +#include "clibrary.h" +#include +#include +#include +#include + +int RFCNB_errno = 0; +int RFCNB_saved_errno = 0; +#define RFCNB_ERRNO + +#include "rfcnb-priv.h" +#include "rfcnb-io.h" +#include "rfcnb-util.h" + +int RFCNB_Stats[RFCNB_MAX_STATS]; + +void (*Prot_Print_Routine)() = NULL; /* Pointer to print routine */ + +/* Set up a session with a remote name. We are passed Called_Name as a + string which we convert to a NetBIOS name, ie space terminated, up to + 16 characters only if we need to. If Called_Address is not empty, then + we use it to connect to the remote end, but put in Called_Name ... Called + Address can be a DNS based name, or a TCP/IP address ... +*/ + +void *RFCNB_Call(char *Called_Name, char *Calling_Name, char *Called_Address, + int port) + +{ struct RFCNB_Con *con; + struct in_addr Dest_IP; + int Client; + bool redirect; struct redirect_addr *redir_addr; + char *Service_Address; + + /* Now, we really should look up the port in /etc/services ... 
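/* Editor's illustration (not part of the committed code): how a caller
   drives the transport layer declared in rfcnb.h just above -- call the
   server by NetBIOS name, send one fragment, read the reply, hang up.
   The names "SERVER"/"CLIENT" and the 64-byte sizes are arbitrary; struct
   RFCNB_Pkt (data/len/next) and the RFCNB_Free_Pkt() declaration are
   assumed to come from the rfcnb headers not shown in this hunk. */

#include "rfcnb.h"

static int rfcnb_exchange(char *host)
{
    char called[] = "SERVER", calling[] = "CLIENT";
    void *con;
    struct RFCNB_Pkt *req, *resp;
    int rc = -1;

    con = RFCNB_Call(called, calling, host, 0);   /* 0 selects port 139 */
    if (con == NULL)
        return -1;

    req = RFCNB_Alloc_Pkt(64);
    resp = RFCNB_Alloc_Pkt(64);

    if (req != NULL && resp != NULL) {
        /* ... fill req->data with a request here ... */
        if (RFCNB_Send(con, req, req->len) >= 0
            && RFCNB_Recv(con, resp, resp->len) >= 0)
            rc = 0;                               /* reply is in resp->data */
    }

    if (req != NULL)
        RFCNB_Free_Pkt(req);
    if (resp != NULL)
        RFCNB_Free_Pkt(resp);
    RFCNB_Hangup(con);
    return rc;
}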
*/ + + if (port == 0) port = RFCNB_Default_Port; + + /* Create a connection structure first */ + + if ((con = (struct RFCNB_Con *)malloc(sizeof(struct RFCNB_Con))) == NULL) { /* Error in size */ + + RFCNB_errno = RFCNBE_NoSpace; + RFCNB_saved_errno = errno; + return(NULL); + + } + + con -> fd = -0; /* no descriptor yet */ + con -> rfc_errno = 0; /* no error yet */ + con -> timeout = 0; /* no timeout */ + con -> redirects = 0; + con -> redirect_list = NULL; /* Fix bug still in version 0.50 */ + + /* Resolve that name into an IP address */ + + Service_Address = Called_Name; + if (strcmp(Called_Address, "") != 0) { /* If the Called Address = "" */ + Service_Address = Called_Address; + } + + if ((errno = RFCNB_Name_To_IP(Service_Address, &Dest_IP)) < 0) { /* Error */ + + /* No need to modify RFCNB_errno as it was done by RFCNB_Name_To_IP */ + + return(NULL); + + } + + /* Now connect to the remote end */ + + redirect = true; /* Fudge this one so we go once through */ + + while (redirect) { /* Connect and get session info etc */ + + redirect = false; /* Assume all OK */ + + /* Build the redirect info. First one is first addr called */ + /* And tack it onto the list of addresses we called */ + + if ((redir_addr = (struct redirect_addr *)malloc(sizeof(struct redirect_addr))) == NULL) { /* Could not get space */ + + RFCNB_errno = RFCNBE_NoSpace; + RFCNB_saved_errno = errno; + return(NULL); + + } + + memcpy((char *)&(redir_addr -> ip_addr), (char *)&Dest_IP, sizeof(Dest_IP)); + redir_addr -> port = port; + redir_addr -> next = NULL; + + if (con -> redirect_list == NULL) { /* Stick on head */ + + con -> redirect_list = con -> last_addr = redir_addr; + + } else { + + con -> last_addr -> next = redir_addr; + con -> last_addr = redir_addr; + + } + + /* Now, make that connection */ + + if ((Client = RFCNB_IP_Connect(Dest_IP, port)) < 0) { /* Error */ + + /* No need to modify RFCNB_errno as it was done by RFCNB_IP_Connect */ + + return(NULL); + + } + + con -> fd = Client; + + /* Now send and handle the RFCNB session request */ + /* If we get a redirect, we will comeback with redirect true + and a new IP address in DEST_IP */ + + if ((errno = RFCNB_Session_Req(con, + Called_Name, + Calling_Name, + &redirect, &Dest_IP, &port)) < 0) { + + /* No need to modify RFCNB_errno as it was done by RFCNB_Session.. */ + + return(NULL); + + } + + if (redirect) { + + /* We have to close the connection, and then try again */ + + (con -> redirects)++; + + RFCNB_Close(con -> fd); /* Close it */ + + } + } + + return(con); + +} + +/* We send a packet to the other end ... for the moment, we treat the + data as a series of pointers to blocks of data ... we should check the + length ... */ + +int RFCNB_Send(struct RFCNB_Con *Con_Handle, struct RFCNB_Pkt *udata, int Length) + +{ struct RFCNB_Pkt *pkt; char *hdr; + int len; + + /* Plug in the header and send the data */ + + pkt = RFCNB_Alloc_Pkt(RFCNB_Pkt_Hdr_Len); + + if (pkt == NULL) { + + RFCNB_errno = RFCNBE_NoSpace; + RFCNB_saved_errno = errno; + return(RFCNBE_Bad); + + } + + pkt -> next = udata; /* The user data we want to send */ + + hdr = pkt -> data; + + /* Following crap is for portability across multiple UNIX machines */ + + *(hdr + RFCNB_Pkt_Type_Offset) = RFCNB_SESSION_MESSAGE; + RFCNB_Put_Pkt_Len(hdr, Length); + +#ifdef RFCNB_DEBUG + + fprintf(stderr, "Sending packet: "); + +#endif + + if ((len = RFCNB_Put_Pkt(Con_Handle, pkt, Length + RFCNB_Pkt_Hdr_Len)) < 0) { + + /* No need to change RFCNB_errno as it was done by put_pkt ... 
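/* Editor's illustration: RFCNB_Send() above never copies the caller's data;
   it allocates a 4-byte fragment for the RFC 1002 header and chains the
   user packet behind it.  A caller can use the same trick to keep an SMB
   header and its payload in separate buffers.  The helper name is invented;
   RFCNB_Free_Pkt() is assumed to be declared by the rfcnb headers not shown
   in this hunk. */

#include <string.h>
#include "rfcnb.h"

static struct RFCNB_Pkt *chain_two_fragments(const char *hdr, int hdr_len,
                                             const char *body, int body_len)
{
    struct RFCNB_Pkt *first, *second;

    first = RFCNB_Alloc_Pkt(hdr_len);
    second = RFCNB_Alloc_Pkt(body_len);

    if (first == NULL || second == NULL) {
        if (first != NULL)
            RFCNB_Free_Pkt(first);
        if (second != NULL)
            RFCNB_Free_Pkt(second);
        return NULL;
    }

    memcpy(first->data, hdr, hdr_len);
    memcpy(second->data, body, body_len);
    first->next = second;        /* RFCNB_Put_Pkt()/RFCNB_Send() walk this list */

    return first;                /* send with Length = hdr_len + body_len */
}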
*/ + + return(RFCNBE_Bad); /* Should be able to write that lot ... */ + + } + + /* Now we have sent that lot, let's get rid of the RFCNB Header and return */ + + pkt -> next = NULL; + + RFCNB_Free_Pkt(pkt); + + return(len); + +} + +/* We pick up a message from the internet ... We have to worry about + non-message packets ... */ + +int RFCNB_Recv(void *con_Handle, struct RFCNB_Pkt *Data, int Length) + +{ struct RFCNB_Pkt *pkt; + int ret_len; + + if (con_Handle == NULL){ + + RFCNB_errno = RFCNBE_BadHandle; + RFCNB_saved_errno = errno; + return(RFCNBE_Bad); + + } + + /* Now get a packet from below. We allocate a header first */ + + /* Plug in the header and send the data */ + + pkt = RFCNB_Alloc_Pkt(RFCNB_Pkt_Hdr_Len); + + if (pkt == NULL) { + + RFCNB_errno = RFCNBE_NoSpace; + RFCNB_saved_errno = errno; + return(RFCNBE_Bad); + + } + + pkt -> next = Data; /* Plug in the data portion */ + + if ((ret_len = RFCNB_Get_Pkt(con_Handle, pkt, Length + RFCNB_Pkt_Hdr_Len)) < 0) { + +#ifdef RFCNB_DEBUG + fprintf(stderr, "Bad packet return in RFCNB_Recv... \n"); +#endif + + return(RFCNBE_Bad); + + } + + /* We should check that we go a message and not a keep alive */ + + pkt -> next = NULL; + + RFCNB_Free_Pkt(pkt); + + return(ret_len); + +} + +/* We just disconnect from the other end, as there is nothing in the RFCNB */ +/* protocol that specifies any exchange as far as I can see */ + +int RFCNB_Hangup(struct RFCNB_Con *con_Handle) + +{ + + if (con_Handle != NULL) { + RFCNB_Close(con_Handle -> fd); /* Could this fail? */ + free(con_Handle); + } + + return 0; + + +} + +/* Set TCP_NODELAY on the socket */ + +int RFCNB_Set_Sock_NoDelay(struct RFCNB_Con *con_Handle, bool yn) + +{ + + return(setsockopt(con_Handle -> fd, IPPROTO_TCP, TCP_NODELAY, + (char *)&yn, sizeof(yn))); + +} diff --git a/authprogs/smbval/smbdes.c b/authprogs/smbval/smbdes.c new file mode 100644 index 0000000..e4f8280 --- /dev/null +++ b/authprogs/smbval/smbdes.c @@ -0,0 +1,337 @@ +/* + Unix SMB/Netbios implementation. + Version 1.9. + + a partial implementation of DES designed for use in the + SMB authentication protocol + + Copyright (C) Andrew Tridgell 1997 + + This program is free software; you can redistribute it and/or modify + it under the terms of the GNU General Public License as published by + the Free Software Foundation; either version 2 of the License, or + (at your option) any later version. + + This program is distributed in the hope that it will be useful, + but WITHOUT ANY WARRANTY; without even the implied warranty of + MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the + GNU General Public License for more details. + + You should have received a copy of the GNU General Public License + along with this program; if not, write to the Free Software + Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA. +*/ + + +/* NOTES: + + This code makes no attempt to be fast! In fact, it is a very + slow implementation + + This code is NOT a complete DES implementation. It implements only + the minimum necessary for SMB authentication, as used by all SMB + products (including every copy of Microsoft Windows95 ever sold) + + In particular, it can only do a unchained forward DES pass. This + means it is not possible to use this code for encryption/decryption + of data, instead it is only useful as a "hash" algorithm. + + There is no entry point into this code that allows normal DES operation. + + I believe this means that this code does not come under ITAR + regulations but this is NOT a legal opinion. 
If you are concerned + about the applicability of ITAR regulations to this code then you + should confirm it for yourself (and maybe let me know if you come + up with a different answer to the one above) +*/ + + + +static int perm1[56] = {57, 49, 41, 33, 25, 17, 9, + 1, 58, 50, 42, 34, 26, 18, + 10, 2, 59, 51, 43, 35, 27, + 19, 11, 3, 60, 52, 44, 36, + 63, 55, 47, 39, 31, 23, 15, + 7, 62, 54, 46, 38, 30, 22, + 14, 6, 61, 53, 45, 37, 29, + 21, 13, 5, 28, 20, 12, 4}; + +static int perm2[48] = {14, 17, 11, 24, 1, 5, + 3, 28, 15, 6, 21, 10, + 23, 19, 12, 4, 26, 8, + 16, 7, 27, 20, 13, 2, + 41, 52, 31, 37, 47, 55, + 30, 40, 51, 45, 33, 48, + 44, 49, 39, 56, 34, 53, + 46, 42, 50, 36, 29, 32}; + +static int perm3[64] = {58, 50, 42, 34, 26, 18, 10, 2, + 60, 52, 44, 36, 28, 20, 12, 4, + 62, 54, 46, 38, 30, 22, 14, 6, + 64, 56, 48, 40, 32, 24, 16, 8, + 57, 49, 41, 33, 25, 17, 9, 1, + 59, 51, 43, 35, 27, 19, 11, 3, + 61, 53, 45, 37, 29, 21, 13, 5, + 63, 55, 47, 39, 31, 23, 15, 7}; + +static int perm4[48] = { 32, 1, 2, 3, 4, 5, + 4, 5, 6, 7, 8, 9, + 8, 9, 10, 11, 12, 13, + 12, 13, 14, 15, 16, 17, + 16, 17, 18, 19, 20, 21, + 20, 21, 22, 23, 24, 25, + 24, 25, 26, 27, 28, 29, + 28, 29, 30, 31, 32, 1}; + +static int perm5[32] = { 16, 7, 20, 21, + 29, 12, 28, 17, + 1, 15, 23, 26, + 5, 18, 31, 10, + 2, 8, 24, 14, + 32, 27, 3, 9, + 19, 13, 30, 6, + 22, 11, 4, 25}; + + +static int perm6[64] ={ 40, 8, 48, 16, 56, 24, 64, 32, + 39, 7, 47, 15, 55, 23, 63, 31, + 38, 6, 46, 14, 54, 22, 62, 30, + 37, 5, 45, 13, 53, 21, 61, 29, + 36, 4, 44, 12, 52, 20, 60, 28, + 35, 3, 43, 11, 51, 19, 59, 27, + 34, 2, 42, 10, 50, 18, 58, 26, + 33, 1, 41, 9, 49, 17, 57, 25}; + + +static int sc[16] = {1, 1, 2, 2, 2, 2, 2, 2, 1, 2, 2, 2, 2, 2, 2, 1}; + +static int sbox[8][4][16] = { + {{14, 4, 13, 1, 2, 15, 11, 8, 3, 10, 6, 12, 5, 9, 0, 7}, + {0, 15, 7, 4, 14, 2, 13, 1, 10, 6, 12, 11, 9, 5, 3, 8}, + {4, 1, 14, 8, 13, 6, 2, 11, 15, 12, 9, 7, 3, 10, 5, 0}, + {15, 12, 8, 2, 4, 9, 1, 7, 5, 11, 3, 14, 10, 0, 6, 13}}, + + {{15, 1, 8, 14, 6, 11, 3, 4, 9, 7, 2, 13, 12, 0, 5, 10}, + {3, 13, 4, 7, 15, 2, 8, 14, 12, 0, 1, 10, 6, 9, 11, 5}, + {0, 14, 7, 11, 10, 4, 13, 1, 5, 8, 12, 6, 9, 3, 2, 15}, + {13, 8, 10, 1, 3, 15, 4, 2, 11, 6, 7, 12, 0, 5, 14, 9}}, + + {{10, 0, 9, 14, 6, 3, 15, 5, 1, 13, 12, 7, 11, 4, 2, 8}, + {13, 7, 0, 9, 3, 4, 6, 10, 2, 8, 5, 14, 12, 11, 15, 1}, + {13, 6, 4, 9, 8, 15, 3, 0, 11, 1, 2, 12, 5, 10, 14, 7}, + {1, 10, 13, 0, 6, 9, 8, 7, 4, 15, 14, 3, 11, 5, 2, 12}}, + + {{7, 13, 14, 3, 0, 6, 9, 10, 1, 2, 8, 5, 11, 12, 4, 15}, + {13, 8, 11, 5, 6, 15, 0, 3, 4, 7, 2, 12, 1, 10, 14, 9}, + {10, 6, 9, 0, 12, 11, 7, 13, 15, 1, 3, 14, 5, 2, 8, 4}, + {3, 15, 0, 6, 10, 1, 13, 8, 9, 4, 5, 11, 12, 7, 2, 14}}, + + {{2, 12, 4, 1, 7, 10, 11, 6, 8, 5, 3, 15, 13, 0, 14, 9}, + {14, 11, 2, 12, 4, 7, 13, 1, 5, 0, 15, 10, 3, 9, 8, 6}, + {4, 2, 1, 11, 10, 13, 7, 8, 15, 9, 12, 5, 6, 3, 0, 14}, + {11, 8, 12, 7, 1, 14, 2, 13, 6, 15, 0, 9, 10, 4, 5, 3}}, + + {{12, 1, 10, 15, 9, 2, 6, 8, 0, 13, 3, 4, 14, 7, 5, 11}, + {10, 15, 4, 2, 7, 12, 9, 5, 6, 1, 13, 14, 0, 11, 3, 8}, + {9, 14, 15, 5, 2, 8, 12, 3, 7, 0, 4, 10, 1, 13, 11, 6}, + {4, 3, 2, 12, 9, 5, 15, 10, 11, 14, 1, 7, 6, 0, 8, 13}}, + + {{4, 11, 2, 14, 15, 0, 8, 13, 3, 12, 9, 7, 5, 10, 6, 1}, + {13, 0, 11, 7, 4, 9, 1, 10, 14, 3, 5, 12, 2, 15, 8, 6}, + {1, 4, 11, 13, 12, 3, 7, 14, 10, 15, 6, 8, 0, 5, 9, 2}, + {6, 11, 13, 8, 1, 4, 10, 7, 9, 5, 0, 15, 14, 2, 3, 12}}, + + {{13, 2, 8, 4, 6, 15, 11, 1, 10, 9, 3, 14, 5, 0, 12, 7}, + {1, 15, 13, 8, 10, 3, 7, 4, 12, 5, 6, 11, 0, 14, 9, 2}, + {7, 11, 4, 1, 9, 
12, 14, 2, 0, 6, 10, 13, 15, 3, 5, 8}, + {2, 1, 14, 7, 4, 10, 8, 13, 15, 12, 9, 0, 3, 5, 6, 11}}}; + +static void permute(char *out, char *in, int *p, int n) +{ + int i; + for (i=0;i>1; + key[1] = ((str[0]&0x01)<<6) | (str[1]>>2); + key[2] = ((str[1]&0x03)<<5) | (str[2]>>3); + key[3] = ((str[2]&0x07)<<4) | (str[3]>>4); + key[4] = ((str[3]&0x0F)<<3) | (str[4]>>5); + key[5] = ((str[4]&0x1F)<<2) | (str[5]>>6); + key[6] = ((str[5]&0x3F)<<1) | (str[6]>>7); + key[7] = str[6]&0x7F; + for (i=0;i<8;i++) { + key[i] = (key[i]<<1); + } +} + + +static void smbhash(unsigned char *out, unsigned char *in, unsigned char *key) +{ + int i; + char outb[64]; + char inb[64]; + char keyb[64]; + unsigned char key2[8]; + + str_to_key(key, key2); + + for (i=0;i<64;i++) { + inb[i] = (in[i/8] & (1<<(7-(i%8)))) ? 1 : 0; + keyb[i] = (key2[i/8] & (1<<(7-(i%8)))) ? 1 : 0; + outb[i] = 0; + } + + dohash(outb, inb, keyb); + + for (i=0;i<8;i++) { + out[i] = 0; + } + + for (i=0;i<64;i++) { + if (outb[i]) + out[i/8] |= (1<<(7-(i%8))); + } +} + +void E_P16(unsigned char *p14,unsigned char *p16) +{ + unsigned char sp8[8] = {0x4b, 0x47, 0x53, 0x21, 0x40, 0x23, 0x24, 0x25}; + smbhash(p16, sp8, p14); + smbhash(p16+8, sp8, p14+7); +} + +void E_P24(unsigned char *p21, unsigned char *c8, unsigned char *p24) +{ + smbhash(p24, c8, p21); + smbhash(p24+8, c8, p21+7); + smbhash(p24+16, c8, p21+14); +} + +void cred_hash1(unsigned char *out,unsigned char *in,unsigned char *key) +{ + unsigned char buf[8]; + + smbhash(buf, in, key); + smbhash(out, buf, key+9); +} + +void cred_hash2(unsigned char *out,unsigned char *in,unsigned char *key) +{ + unsigned char buf[8]; + static unsigned char key2[8]; + + smbhash(buf, in, key); + key2[0] = key[7]; + smbhash(out, buf, key2); +} + diff --git a/authprogs/smbval/smbencrypt.c b/authprogs/smbval/smbencrypt.c new file mode 100644 index 0000000..4cae973 --- /dev/null +++ b/authprogs/smbval/smbencrypt.c @@ -0,0 +1,60 @@ +/* + Unix SMB/Netbios implementation. + Version 1.9. + SMB parameters and setup + Copyright (C) Andrew Tridgell 1992-1997 + Modified by Jeremy Allison 1995. + + This program is free software; you can redistribute it and/or modify + it under the terms of the GNU General Public License as published by + the Free Software Foundation; either version 2 of the License, or + (at your option) any later version. + + This program is distributed in the hope that it will be useful, + but WITHOUT ANY WARRANTY; without even the implied warranty of + MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the + GNU General Public License for more details. + + You should have received a copy of the GNU General Public License + along with this program; if not, write to the Free Software + Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA. 
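/* Editor's illustration (standalone, not in the committed files): the
   str_to_key() step used by smbhash() above spreads the 56 bits of a
   7-byte string over eight key bytes, seven bits per byte, leaving bit 0
   of every byte free for the DES parity bit.  The loop below performs the
   same bit-slicing as the unrolled assignments in smbdes.c, just
   generalised. */

#include <stdio.h>

static void str_to_key_sketch(const unsigned char str[7], unsigned char key[8])
{
    int i;

    key[0] = str[0] >> 1;
    for (i = 1; i < 7; i++)
        key[i] = (unsigned char)(((str[i - 1] & ((1 << i) - 1)) << (7 - i))
                                 | (str[i] >> (i + 1)));
    key[7] = str[6] & 0x7F;

    for (i = 0; i < 8; i++)
        key[i] = (unsigned char)(key[i] << 1);
}

int main(void)
{
    const unsigned char half[7] = { 'S', 'E', 'C', 'R', 'E', 'T', '1' };
    unsigned char key[8];
    int i;

    str_to_key_sketch(half, key);
    for (i = 0; i < 8; i++)
        printf("%02x ", key[i]);
    printf("\n");
    return 0;
}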
+*/ + +#include "config.h" +#include "clibrary.h" +#include + +#include "smblib-priv.h" + +typedef unsigned char uchar; + +void strupper(char *s); + +/* + This implements the X/Open SMB password encryption + It takes a password, a 8 byte "crypt key" and puts 24 bytes of + encrypted password into p24 */ +void SMBencrypt(uchar *passwd, uchar *c8, uchar *p24) +{ + uchar p14[15], p21[21]; + + memset(p21,'\0',21); + memset(p14,'\0',14); + strlcpy((char *) p14, (char *) passwd, sizeof(p14)); + + strupper((char *)p14); + E_P16(p14, p21); + E_P24(p21, c8, p24); +} + +void strupper(char *s) +{ + while (*s) + { + { + if (CTYPE(islower, *s)) + *s = toupper(*s); + s++; + } + } +} diff --git a/authprogs/smbval/smblib-common.h b/authprogs/smbval/smblib-common.h new file mode 100644 index 0000000..b8441e0 --- /dev/null +++ b/authprogs/smbval/smblib-common.h @@ -0,0 +1,69 @@ +/* UNIX SMBlib NetBIOS implementation + + Version 1.0 + SMBlib Common Defines + + Copyright (C) Richard Sharpe 1996 + +*/ + +/* + This program is free software; you can redistribute it and/or modify + it under the terms of the GNU General Public License as published by + the Free Software Foundation; either version 2 of the License, or + (at your option) any later version. + + This program is distributed in the hope that it will be useful, + but WITHOUT ANY WARRANTY; without even the implied warranty of + MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the + GNU General Public License for more details. + + You should have received a copy of the GNU General Public License + along with this program; if not, write to the Free Software + Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA. +*/ + +/* Error CLASS codes and etc ... */ + +#define SMBC_SUCCESS 0 +#define SMBC_ERRDOS 0x01 +#define SMBC_ERRSRV 0x02 +#define SMBC_ERRHRD 0x03 +#define SMBC_ERRCMD 0xFF + +/* Define the protocol types ... */ + +#define SMB_P_Unknown -1 /* Hmmm, is this smart? */ +#define SMB_P_Core 0 +#define SMB_P_CorePlus 1 +#define SMB_P_DOSLanMan1 2 +#define SMB_P_LanMan1 3 +#define SMB_P_DOSLanMan2 4 +#define SMB_P_LanMan2 5 +#define SMB_P_DOSLanMan2_1 6 +#define SMB_P_LanMan2_1 7 +#define SMB_P_NT1 8 + +/* SMBlib return codes */ +/* We want something that indicates whether or not the return code was a */ +/* remote error, a local error in SMBlib or returned from lower layer ... */ +/* Wonder if this will work ... */ +/* SMBlibE_Remote = 1 indicates remote error */ +/* SMBlibE_ values < 0 indicate local error with more info available */ +/* SMBlibE_ values >1 indicate local from SMBlib code errors? 
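/* Editor's illustration: how the pieces above fit together for the LanMan
   challenge/response.  SMBencrypt() uppercases and truncates the password
   to 14 bytes, DES-hashes the constant "KGS!@#$%" under its two 7-byte
   halves (E_P16), then encrypts the server's 8-byte challenge under the
   zero-padded 21-byte hash (E_P24), yielding the 24-byte response sent in
   the session setup.  The prototype matches the definition in smbencrypt.c
   above; the wrapper and its buffers are invented for the sketch. */

#include <string.h>

void SMBencrypt(unsigned char *passwd, unsigned char *c8, unsigned char *p24);

static void lm_response(const char *password,
                        const unsigned char challenge[8],
                        unsigned char response[24])
{
    unsigned char pass[15];
    unsigned char c8[8];

    memset(pass, 0, sizeof(pass));
    strncpy((char *) pass, password, 14);     /* SMBencrypt() truncates anyway */
    memcpy(c8, challenge, 8);

    SMBencrypt(pass, c8, response);
}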
*/ + +#define SMBlibE_Success 0 +#define SMBlibE_Remote 1 /* Remote error, get more info from con */ +#define SMBlibE_BAD -1 +#define SMBlibE_LowerLayer 2 /* Lower layer error */ +#define SMBlibE_NotImpl 3 /* Function not yet implemented */ +#define SMBlibE_ProtLow 4 /* Protocol negotiated does not support req */ +#define SMBlibE_NoSpace 5 /* No space to allocate a structure */ +#define SMBlibE_BadParam 6 /* Bad parameters */ +#define SMBlibE_NegNoProt 7 /* None of our protocols was liked */ +#define SMBlibE_SendFailed 8 /* Sending an SMB failed */ +#define SMBlibE_RecvFailed 9 /* Receiving an SMB failed */ +#define SMBlibE_GuestOnly 10 /* Logged in as guest */ +#define SMBlibE_CallFailed 11 /* Call remote end failed */ +#define SMBlibE_ProtUnknown 12 /* Protocol unknown */ +#define SMBlibE_NoSuchMsg 13 /* Keep this up to date */ diff --git a/authprogs/smbval/smblib-priv.h b/authprogs/smbval/smblib-priv.h new file mode 100644 index 0000000..155b66b --- /dev/null +++ b/authprogs/smbval/smblib-priv.h @@ -0,0 +1,249 @@ +/* UNIX SMBlib NetBIOS implementation + + Version 1.0 + SMBlib private Defines + + Copyright (C) Richard Sharpe 1996 + +*/ + +/* + This program is free software; you can redistribute it and/or modify + it under the terms of the GNU General Public License as published by + the Free Software Foundation; either version 2 of the License, or + (at your option) any later version. + + This program is distributed in the hope that it will be useful, + but WITHOUT ANY WARRANTY; without even the implied warranty of + MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the + GNU General Public License for more details. + + You should have received a copy of the GNU General Public License + along with this program; if not, write to the Free Software + Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA. +*/ + +#include "smblib-common.h" +#include +#include + +typedef unsigned short uint16; +typedef unsigned int uint32; + +#include "byteorder.h" /* Hmmm ... hot good */ + +#define SMB_DEF_IDF 0x424D53FF /* "\377SMB" */ + +/* The protocol commands and constants we need */ +#define SMBnegprot 0x72 /* negotiate protocol */ +#define SMBsesssetupX 0x73 /* Session Set Up & X (including User Logon) */ +#define SMBdialectID 0x02 /* a dialect id */ + +typedef unsigned short WORD; +typedef unsigned short UWORD; +typedef unsigned int ULONG; +typedef unsigned char BYTE; +typedef unsigned char UCHAR; + +/* Some macros to allow access to actual packet data so that we */ +/* can change the underlying representation of packets. */ +/* */ +/* The current formats vying for attention are a fragment */ +/* approach where the SMB header is a fragment linked to the */ +/* data portion with the transport protocol (rfcnb or whatever) */ +/* being linked on the front. */ +/* */ +/* The other approach is where the whole packet is one array */ +/* of bytes with space allowed on the front for the packet */ +/* headers. */ + +#define SMB_Hdr(p) (char *)(p -> data) + +/* SMB Hdr def for File Sharing Protocol? From MS and Intel, */ +/* Intel PN 138446 Doc Version 2.0, Nov 7, 1988. This def also */ +/* applies to LANMAN1.0 as well as the Core Protocol */ +/* The spec states that wct and bcc must be present, even if 0 */ + +/* We define these as offsets into a char SMB[] array for the */ +/* sake of portability */ + +/* NOTE!. Some of the lenght defines, SMB__len do not include */ +/* the data that follows in the SMB packet, so the code will have to */ +/* take that into account. 
*/ + +#define SMB_hdr_idf_offset 0 /* 0xFF,'SMB' 0-3 */ +#define SMB_hdr_com_offset 4 /* BYTE 4 */ +#define SMB_hdr_rcls_offset 5 /* BYTE 5 */ +#define SMB_hdr_reh_offset 6 /* BYTE 6 */ +#define SMB_hdr_err_offset 7 /* WORD 7 */ +#define SMB_hdr_reb_offset 9 /* BYTE 9 */ +#define SMB_hdr_flg_offset 9 /* same as reb ...*/ +#define SMB_hdr_res_offset 10 /* 7 WORDs 10 */ +#define SMB_hdr_res0_offset 10 /* WORD 10 */ +#define SMB_hdr_flg2_offset 10 /* WORD */ +#define SMB_hdr_res1_offset 12 /* WORD 12 */ +#define SMB_hdr_res2_offset 14 +#define SMB_hdr_res3_offset 16 +#define SMB_hdr_res4_offset 18 +#define SMB_hdr_res5_offset 20 +#define SMB_hdr_res6_offset 22 +#define SMB_hdr_tid_offset 24 +#define SMB_hdr_pid_offset 26 +#define SMB_hdr_uid_offset 28 +#define SMB_hdr_mid_offset 30 +#define SMB_hdr_wct_offset 32 + +#define SMB_hdr_len 33 /* 33 byte header? */ + +#define SMB_hdr_axc_offset 33 /* AndX Command */ +#define SMB_hdr_axr_offset 34 /* AndX Reserved */ +#define SMB_hdr_axo_offset 35 /* Offset from start to WCT of AndX cmd */ + +/* Format of the Negotiate Protocol SMB */ + +#define SMB_negp_bcc_offset 33 +#define SMB_negp_buf_offset 35 /* Where the buffer starts */ +#define SMB_negp_len 35 /* plus the data */ + +/* Format of the Negotiate Response SMB, for CoreProtocol, LM1.2 and */ +/* NT LM 0.12. wct will be 1 for CoreProtocol, 13 for LM 1.2, and 17 */ +/* for NT LM 0.12 */ + +#define SMB_negrCP_idx_offset 33 /* Response to the neg req */ +#define SMB_negrCP_bcc_offset 35 +#define SMB_negrLM_idx_offset 33 /* dialect index */ +#define SMB_negrLM_sec_offset 35 /* Security mode */ +#define SMB_sec_user_mask 0x01 /* 0 = share, 1 = user */ +#define SMB_sec_encrypt_mask 0x02 /* pick out encrypt */ +#define SMB_negrLM_mbs_offset 37 /* max buffer size */ +#define SMB_negrLM_mmc_offset 39 /* max mpx count */ +#define SMB_negrLM_mnv_offset 41 /* max number of VCs */ +#define SMB_negrLM_rm_offset 43 /* raw mode support bit vec*/ +#define SMB_negrLM_sk_offset 45 /* session key, 32 bits */ +#define SMB_negrLM_st_offset 49 /* Current server time */ +#define SMB_negrLM_sd_offset 51 /* Current server date */ +#define SMB_negrLM_stz_offset 53 /* Server Time Zone */ +#define SMB_negrLM_ekl_offset 55 /* encryption key length */ +#define SMB_negrLM_res_offset 57 /* reserved */ +#define SMB_negrLM_bcc_offset 59 /* bcc */ +#define SMB_negrLM_len 61 /* 61 bytes ? 
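/* Editor's illustration (standalone): the offsets above index into a flat
   little-endian byte array, so a header field is written by storing bytes
   low-to-high.  The committed code does this with the SSVAL()/SIVAL()
   macros from byteorder.h; the helpers below are plain-C stand-ins so the
   layout is visible.  Offset and constant values are copied from
   smblib-priv.h above. */

#include <stdio.h>
#include <string.h>

#define IDF_OFFSET 0                /* SMB_hdr_idf_offset */
#define COM_OFFSET 4                /* SMB_hdr_com_offset */
#define TID_OFFSET 24               /* SMB_hdr_tid_offset */
#define PID_OFFSET 26               /* SMB_hdr_pid_offset */
#define DEF_IDF    0x424D53FFUL     /* SMB_DEF_IDF, "\377SMB" once stored */

static void put_le16(unsigned char *b, int off, unsigned int v)
{
    b[off] = (unsigned char)(v & 0xFF);
    b[off + 1] = (unsigned char)((v >> 8) & 0xFF);
}

static void put_le32(unsigned char *b, int off, unsigned long v)
{
    put_le16(b, off, (unsigned int)(v & 0xFFFF));
    put_le16(b, off + 2, (unsigned int)((v >> 16) & 0xFFFF));
}

int main(void)
{
    unsigned char hdr[33];          /* SMB_hdr_len */

    memset(hdr, 0, sizeof(hdr));
    put_le32(hdr, IDF_OFFSET, DEF_IDF);     /* 0xFF 'S' 'M' 'B' */
    hdr[COM_OFFSET] = 0x72;                 /* SMBnegprot */
    put_le16(hdr, TID_OFFSET, 0);
    put_le16(hdr, PID_OFFSET, 1234);

    printf("idf: %02x %c%c%c  com: %02x\n",
           hdr[0], hdr[1], hdr[2], hdr[3], hdr[4]);
    return 0;
}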
*/ +#define SMB_negrLM_buf_offset 61 /* Where the fun begins */ + +#define SMB_negrNTLM_idx_offset 33 /* Selected protocol */ +#define SMB_negrNTLM_sec_offset 35 /* Security more */ +#define SMB_negrNTLM_mmc_offset 36 /* Different format above */ +#define SMB_negrNTLM_mnv_offset 38 /* Max VCs */ +#define SMB_negrNTLM_mbs_offset 40 /* MBS now a long */ +#define SMB_negrNTLM_mrs_offset 44 /* Max raw size */ +#define SMB_negrNTLM_sk_offset 48 /* Session Key */ +#define SMB_negrNTLM_cap_offset 52 /* Capabilities */ +#define SMB_negrNTLM_stl_offset 56 /* Server time low */ +#define SMB_negrNTLM_sth_offset 60 /* Server time high */ +#define SMB_negrNTLM_stz_offset 64 /* Server time zone */ +#define SMB_negrNTLM_ekl_offset 66 /* Encrypt key len */ +#define SMB_negrNTLM_bcc_offset 67 /* Bcc */ +#define SMB_negrNTLM_len 69 +#define SMB_negrNTLM_buf_offset 69 + +/* Offsets for Delete file */ + +#define SMB_delet_sat_offset 33 /* search attribites */ +#define SMB_delet_bcc_offset 35 /* bcc */ +#define SMB_delet_buf_offset 37 +#define SMB_delet_len 37 + +/* Offsets for SESSION_SETUP_ANDX for both LM and NT LM protocols */ + +#define SMB_ssetpLM_mbs_offset 37 /* Max buffer Size, allow for AndX */ +#define SMB_ssetpLM_mmc_offset 39 /* max multiplex count */ +#define SMB_ssetpLM_vcn_offset 41 /* VC number if new VC */ +#define SMB_ssetpLM_snk_offset 43 /* Session Key */ +#define SMB_ssetpLM_pwl_offset 47 /* password length */ +#define SMB_ssetpLM_res_offset 49 /* reserved */ +#define SMB_ssetpLM_bcc_offset 53 /* bcc */ +#define SMB_ssetpLM_len 55 /* before data ... */ +#define SMB_ssetpLM_buf_offset 55 + +#define SMB_ssetpNTLM_mbs_offset 37 /* Max Buffer Size for NT LM 0.12 */ + /* and above */ +#define SMB_ssetpNTLM_mmc_offset 39 /* Max Multiplex count */ +#define SMB_ssetpNTLM_vcn_offset 41 /* VC Number */ +#define SMB_ssetpNTLM_snk_offset 43 /* Session key */ +#define SMB_ssetpNTLM_cipl_offset 47 /* Case Insensitive PW Len */ +#define SMB_ssetpNTLM_cspl_offset 49 /* Unicode pw len */ +#define SMB_ssetpNTLM_res_offset 51 /* reserved */ +#define SMB_ssetpNTLM_cap_offset 55 /* server capabilities */ +#define SMB_ssetpNTLM_bcc_offset 59 /* bcc */ +#define SMB_ssetpNTLM_len 61 /* before data */ +#define SMB_ssetpNTLM_buf_offset 61 + +#define SMB_ssetpr_axo_offset 35 /* Offset of next response ... */ +#define SMB_ssetpr_act_offset 37 /* action, bit 0 = 1 => guest */ +#define SMB_ssetpr_bcc_offset 39 /* bcc */ +#define SMB_ssetpr_buf_offset 41 /* Native OS etc */ + +/* The following two arrays need to be in step! */ +/* We must make it possible for callers to specify these ... */ + +extern const char *SMB_Prots[]; +extern int SMB_Types[]; + +typedef struct SMB_Connect_Def * SMB_Handle_Type; + +struct SMB_Connect_Def { + + SMB_Handle_Type Next_Con, Prev_Con; /* Next and previous conn */ + int protocol; /* What is the protocol */ + int prot_IDX; /* And what is the index */ + void *Trans_Connect; /* The connection */ + + /* All these strings should be malloc'd */ + + char service[80], username[80], password[80], desthost[80], sock_options[80]; + char address[80], myname[80]; + + int gid; /* Group ID, do we need it? */ + int mid; /* Multiplex ID? We might need one per con */ + int pid; /* Process ID */ + + int uid; /* Authenticated user id. */ + + /* It is pretty clear that we need to bust some of */ + /* these out into a per TCon record, as there may */ + /* be multiple TCon's per server, etc ... later */ + + int port; /* port to use in case not default, this is a TCPism! 
*/ + + int max_xmit; /* Max xmit permitted by server */ + int Security; /* 0 = share, 1 = user */ + int Raw_Support; /* bit 0 = 1 = Read Raw supported, 1 = 1 Write raw */ + bool encrypt_passwords; /* false = don't */ + int MaxMPX, MaxVC, MaxRaw; + unsigned int SessionKey, Capabilities; + int SvrTZ; /* Server Time Zone */ + int Encrypt_Key_Len; + char Encrypt_Key[80], Domain[80], PDomain[80], OSName[80], LMType[40]; + char Svr_OS[80], Svr_LMType[80], Svr_PDom[80]; + +}; + +#define SMBLIB_DEFAULT_OSNAME "UNIX of some type" +#define SMBLIB_DEFAULT_LMTYPE "SMBlib LM2.1 minus a bit" +#define SMBLIB_MAX_XMIT 65535 + +/* global Variables for the library */ + +#ifndef SMBLIB_ERRNO +extern int SMBlib_errno; +extern int SMBlib_SMB_Error; /* last Error */ +#endif + +/* From smbdes.c. */ +void E_P16(unsigned char *, unsigned char *); +void E_P24(unsigned char *, unsigned char *, unsigned char *); + +/* From smblib-util.c. */ +void SMB_Get_My_Name(char *name, int len); + +/* From smbencrypt.c. */ +void SMBencrypt(unsigned char *passwd, unsigned char *, unsigned char *); diff --git a/authprogs/smbval/smblib-util.c b/authprogs/smbval/smblib-util.c new file mode 100644 index 0000000..27f5619 --- /dev/null +++ b/authprogs/smbval/smblib-util.c @@ -0,0 +1,332 @@ +/* UNIX SMBlib NetBIOS implementation + + Version 1.0 + SMBlib Utility Routines + + Copyright (C) Richard Sharpe 1996 + +*/ + +/* + This program is free software; you can redistribute it and/or modify + it under the terms of the GNU General Public License as published by + the Free Software Foundation; either version 2 of the License, or + (at your option) any later version. + + This program is distributed in the hope that it will be useful, + but WITHOUT ANY WARRANTY; without even the implied warranty of + MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the + GNU General Public License for more details. + + You should have received a copy of the GNU General Public License + along with this program; if not, write to the Free Software + Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA. +*/ + +#include "config.h" +#include "clibrary.h" + +#include "smblib-priv.h" +#include "rfcnb.h" + +/* The following two arrays need to be in step! */ +/* We must make it possible for callers to specify these ... */ + +const char *SMB_Prots[] = {"PC NETWORK PROGRAM 1.0", + "MICROSOFT NETWORKS 1.03", + "MICROSOFT NETWORKS 3.0", + "DOS LANMAN1.0", + "LANMAN1.0", + "DOS LM1.2X002", + "LM1.2X002", + "DOS LANMAN2.1", + "LANMAN2.1", + "Samba", + "NT LM 0.12", + "NT LANMAN 1.0", + NULL}; + +int SMB_Types[] = {SMB_P_Core, + SMB_P_CorePlus, + SMB_P_DOSLanMan1, + SMB_P_DOSLanMan1, + SMB_P_LanMan1, + SMB_P_DOSLanMan2, + SMB_P_LanMan2, + SMB_P_LanMan2_1, + SMB_P_LanMan2_1, + SMB_P_NT1, + SMB_P_NT1, + SMB_P_NT1, + -1}; + +/* Figure out what protocol was accepted, given the list of dialect strings */ +/* We offered, and the index back from the server. We allow for a user */ +/* supplied list, and assume that it is a subset of our list */ + +int SMB_Figure_Protocol(const char *dialects[], int prot_index) + +{ int i; + + if (dialects == SMB_Prots) { /* The jobs is easy, just index into table */ + + return(SMB_Types[prot_index]); + } + else { /* Search through SMB_Prots looking for a match */ + + for (i = 0; SMB_Prots[i] != NULL; i++) { + + if (strcmp(dialects[prot_index], SMB_Prots[i]) == 0) { /* A match */ + + return(SMB_Types[i]); + + } + + } + + /* If we got here, then we are in trouble, because the protocol was not */ + /* One we understand ... 
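/* Editor's illustration: why SMB_Prots[] and SMB_Types[] above must stay
   "in step".  A caller may negotiate with its own subset of the dialect
   strings; SMB_Figure_Protocol() maps the server's reply index back through
   the full table to a protocol level.  The subset below is an arbitrary
   example and the prototype is copied from the definition in this file. */

#include <stdio.h>
#include "smblib-common.h"      /* SMB_P_* protocol levels */

int SMB_Figure_Protocol(const char *dialects[], int prot_index);

int main(void)
{
    const char *mine[] = { "PC NETWORK PROGRAM 1.0", "LANMAN1.0",
                           "NT LM 0.12", NULL };

    /* A server answering the negprot with index 2 picked "NT LM 0.12",
       which maps to SMB_P_NT1 in the full table. */
    printf("%s\n", SMB_Figure_Protocol(mine, 2) == SMB_P_NT1 ? "NT1" : "other");
    return 0;
}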
*/ + + return(SMB_P_Unknown); + + } + +} + + +/* Negotiate the protocol we will use from the list passed in Prots */ +/* we return the index of the accepted protocol in NegProt, -1 indicates */ +/* none acceptible, and our return value is 0 if ok, <0 if problems */ + +int SMB_Negotiate(SMB_Handle_Type Con_Handle, const char *Prots[]) +{ + struct RFCNB_Pkt *pkt; + int prots_len, i, pkt_len, prot, alloc_len; + char *p; + + /* Figure out how long the prot list will be and allocate space for it */ + + prots_len = 0; + + for (i = 0; Prots[i] != NULL; i++) { + + prots_len = prots_len + strlen(Prots[i]) + 2; /* Account for null etc */ + + } + + /* The -1 accounts for the one byte smb_buf we have because some systems */ + /* don't like char msg_buf[] */ + + pkt_len = SMB_negp_len + prots_len; + + /* Make sure that the pkt len is long enough for the max response ... */ + /* Which is a problem, because the encryption key len eec may be long */ + + if (pkt_len < (SMB_hdr_wct_offset + (19 * 2) + 40)) { + + alloc_len = SMB_hdr_wct_offset + (19 * 2) + 40; + + } + else { + + alloc_len = pkt_len; + + } + + pkt = (struct RFCNB_Pkt *)RFCNB_Alloc_Pkt(alloc_len); + + if (pkt == NULL) { + + SMBlib_errno = SMBlibE_NoSpace; + return(SMBlibE_BAD); + + } + + /* Now plug in the bits we need */ + + memset(SMB_Hdr(pkt), 0, SMB_negp_len); + SIVAL(SMB_Hdr(pkt), SMB_hdr_idf_offset, SMB_DEF_IDF); /* Plunk in IDF */ + *(SMB_Hdr(pkt) + SMB_hdr_com_offset) = SMBnegprot; + SSVAL(SMB_Hdr(pkt), SMB_hdr_pid_offset, Con_Handle -> pid); + SSVAL(SMB_Hdr(pkt), SMB_hdr_tid_offset, 0); + SSVAL(SMB_Hdr(pkt), SMB_hdr_mid_offset, Con_Handle -> mid); + SSVAL(SMB_Hdr(pkt), SMB_hdr_uid_offset, Con_Handle -> uid); + *(SMB_Hdr(pkt) + SMB_hdr_wct_offset) = 0; + + SSVAL(SMB_Hdr(pkt), SMB_negp_bcc_offset, prots_len); + + /* Now copy the prot strings in with the right stuff */ + + p = (char *)(SMB_Hdr(pkt) + SMB_negp_buf_offset); + + for (i = 0; Prots[i] != NULL; i++) { + + *p = SMBdialectID; + strcpy(p + 1, Prots[i]); + p = p + strlen(Prots[i]) + 2; /* Adjust len of p for null plus dialectID */ + + } + + /* Now send the packet and sit back ... */ + + if (RFCNB_Send(Con_Handle -> Trans_Connect, pkt, pkt_len) < 0){ + + +#ifdef DEBUG + fprintf(stderr, "Error sending negotiate protocol\n"); +#endif + + RFCNB_Free_Pkt(pkt); + SMBlib_errno = -SMBlibE_SendFailed; /* Failed, check lower layer errno */ + return(SMBlibE_BAD); + + } + + /* Now get the response ... */ + + if (RFCNB_Recv(Con_Handle -> Trans_Connect, pkt, alloc_len) < 0) { + +#ifdef DEBUG + fprintf(stderr, "Error receiving response to negotiate\n"); +#endif + + RFCNB_Free_Pkt(pkt); + SMBlib_errno = -SMBlibE_RecvFailed; /* Failed, check lower layer errno */ + return(SMBlibE_BAD); + + } + + if (CVAL(SMB_Hdr(pkt), SMB_hdr_rcls_offset) != SMBC_SUCCESS) { /* Process error */ + +#ifdef DEBUG + fprintf(stderr, "SMB_Negotiate failed with errorclass = %i, Error Code = %i\n", + CVAL(SMB_Hdr(pkt), SMB_hdr_rcls_offset), + SVAL(SMB_Hdr(pkt), SMB_hdr_err_offset)); +#endif + + SMBlib_SMB_Error = IVAL(SMB_Hdr(pkt), SMB_hdr_rcls_offset); + RFCNB_Free_Pkt(pkt); + SMBlib_errno = SMBlibE_Remote; + return(SMBlibE_BAD); + + } + + if (SVAL(SMB_Hdr(pkt), SMB_negrCP_idx_offset) == 0xFFFF) { + +#ifdef DEBUG + fprintf(stderr, "None of our protocols was accepted ... "); +#endif + + RFCNB_Free_Pkt(pkt); + SMBlib_errno = SMBlibE_NegNoProt; + return(SMBlibE_BAD); + + } + + /* Now, unpack the info from the response, if any and evaluate the proto */ + /* selected. We must make sure it is one we like ... 
*/ + + Con_Handle -> prot_IDX = prot = SVAL(SMB_Hdr(pkt), SMB_negrCP_idx_offset); + Con_Handle -> protocol = SMB_Figure_Protocol(Prots, prot); + + if (Con_Handle -> protocol == SMB_P_Unknown) { /* No good ... */ + + RFCNB_Free_Pkt(pkt); + SMBlib_errno = SMBlibE_ProtUnknown; + return(SMBlibE_BAD); + + } + + switch (CVAL(SMB_Hdr(pkt), SMB_hdr_wct_offset)) { + + case 0x01: /* No more info ... */ + + break; + + case 13: /* Up to and including LanMan 2.1 */ + + Con_Handle -> Security = SVAL(SMB_Hdr(pkt), SMB_negrLM_sec_offset); + Con_Handle -> encrypt_passwords = ((Con_Handle -> Security & SMB_sec_encrypt_mask) != 0x00); + Con_Handle -> Security = Con_Handle -> Security & SMB_sec_user_mask; + + Con_Handle -> max_xmit = SVAL(SMB_Hdr(pkt), SMB_negrLM_mbs_offset); + Con_Handle -> MaxMPX = SVAL(SMB_Hdr(pkt), SMB_negrLM_mmc_offset); + Con_Handle -> MaxVC = SVAL(SMB_Hdr(pkt), SMB_negrLM_mnv_offset); + Con_Handle -> Raw_Support = SVAL(SMB_Hdr(pkt), SMB_negrLM_rm_offset); + Con_Handle -> SessionKey = IVAL(SMB_Hdr(pkt), SMB_negrLM_sk_offset); + Con_Handle -> SvrTZ = SVAL(SMB_Hdr(pkt), SMB_negrLM_stz_offset); + Con_Handle -> Encrypt_Key_Len = SVAL(SMB_Hdr(pkt), SMB_negrLM_ekl_offset); + + p = (SMB_Hdr(pkt) + SMB_negrLM_buf_offset); + fprintf(stderr, "%p", (char *)(SMB_Hdr(pkt) + SMB_negrLM_buf_offset)); + memcpy(Con_Handle->Encrypt_Key, p, 8); + + p = (SMB_Hdr(pkt) + SMB_negrLM_buf_offset + Con_Handle -> Encrypt_Key_Len); + + strncpy(p, Con_Handle -> Svr_PDom, sizeof(Con_Handle -> Svr_PDom) - 1); + + break; + + case 17: /* NT LM 0.12 and LN LM 1.0 */ + + Con_Handle -> Security = SVAL(SMB_Hdr(pkt), SMB_negrNTLM_sec_offset); + Con_Handle -> encrypt_passwords = ((Con_Handle -> Security & SMB_sec_encrypt_mask) != 0x00); + Con_Handle -> Security = Con_Handle -> Security & SMB_sec_user_mask; + + Con_Handle -> max_xmit = IVAL(SMB_Hdr(pkt), SMB_negrNTLM_mbs_offset); + Con_Handle -> MaxMPX = SVAL(SMB_Hdr(pkt), SMB_negrNTLM_mmc_offset); + Con_Handle -> MaxVC = SVAL(SMB_Hdr(pkt), SMB_negrNTLM_mnv_offset); + Con_Handle -> MaxRaw = IVAL(SMB_Hdr(pkt), SMB_negrNTLM_mrs_offset); + Con_Handle -> SessionKey = IVAL(SMB_Hdr(pkt), SMB_negrNTLM_sk_offset); + Con_Handle -> SvrTZ = SVAL(SMB_Hdr(pkt), SMB_negrNTLM_stz_offset); + Con_Handle -> Encrypt_Key_Len = CVAL(SMB_Hdr(pkt), SMB_negrNTLM_ekl_offset); + + p = (SMB_Hdr(pkt) + SMB_negrNTLM_buf_offset ); + memcpy(Con_Handle -> Encrypt_Key, p, 8); + p = (SMB_Hdr(pkt) + SMB_negrNTLM_buf_offset + Con_Handle -> Encrypt_Key_Len); + + strncpy(p, Con_Handle -> Svr_PDom, sizeof(Con_Handle -> Svr_PDom) - 1); + + break; + + default: + +#ifdef DEBUG + fprintf(stderr, "Unknown NegProt response format ... Ignored\n"); + fprintf(stderr, " wct = %i\n", CVAL(SMB_Hdr(pkt), SMB_hdr_wct_offset)); +#endif + + break; + } + +#ifdef DEBUG + fprintf(stderr, "Protocol selected is: %i:%s\n", prot, Prots[prot]); +#endif + + RFCNB_Free_Pkt(pkt); + return(0); + +} + +/* Get our hostname */ + +void SMB_Get_My_Name(char *name, int len) + +{ + if (gethostname(name, len) < 0) { /* Error getting name */ + + strncpy(name, "unknown", len); + + /* Should check the error */ + +#ifdef DEBUG + fprintf(stderr, "gethostname in SMB_Get_My_Name returned error:"); + perror(""); +#endif + + } + + /* only keep the portion up to the first "." 
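/* Editor's illustration: the comment above describes trimming the value
   returned by gethostname() at the first dot, so that only the unqualified
   host name is used as the NetBIOS calling name.  A minimal way to do that,
   should it be wanted here (helper name invented): */

#include <string.h>

static void trim_at_first_dot(char *name)
{
    char *dot = strchr(name, '.');

    if (dot != NULL)
        *dot = '\0';
}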
*/ + + +} diff --git a/authprogs/smbval/smblib.c b/authprogs/smbval/smblib.c new file mode 100644 index 0000000..06c8509 --- /dev/null +++ b/authprogs/smbval/smblib.c @@ -0,0 +1,379 @@ +/* UNIX SMBlib NetBIOS implementation + + Version 1.0 + SMBlib Routines + + Copyright (C) Richard Sharpe 1996 + +*/ + +/* + This program is free software; you can redistribute it and/or modify + it under the terms of the GNU General Public License as published by + the Free Software Foundation; either version 2 of the License, or + (at your option) any later version. + + This program is distributed in the hope that it will be useful, + but WITHOUT ANY WARRANTY; without even the implied warranty of + MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the + GNU General Public License for more details. + + You should have received a copy of the GNU General Public License + along with this program; if not, write to the Free Software + Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA. +*/ + +#include "config.h" +#include "clibrary.h" +#include +#include + +int SMBlib_errno; +int SMBlib_SMB_Error; +#define SMBLIB_ERRNO +typedef unsigned char uchar; +#include "smblib-priv.h" + +#include "rfcnb.h" + +/* Initialize the SMBlib package */ + +int SMB_Init() + +{ + signal(SIGPIPE, SIG_IGN); /* Ignore these ... */ + + return 0; + +} + +int SMB_Term() + +{ + + return 0; + +} + +/* SMB_Connect_Server: Connect to a server, but don't negotiate protocol */ +/* or anything else ... */ + +SMB_Handle_Type SMB_Connect_Server(SMB_Handle_Type Con_Handle, + char *server, char *NTdomain) + +{ SMB_Handle_Type con; + char called[80], calling[80], *address; + int i; + + /* Get a connection structure if one does not exist */ + + con = Con_Handle; + + if (Con_Handle == NULL) { + + if ((con = (struct SMB_Connect_Def *)malloc(sizeof(struct SMB_Connect_Def))) == NULL) { + + + SMBlib_errno = SMBlibE_NoSpace; + return NULL; + } + + } + + /* Init some things ... */ + + strlcpy(con->service, "", sizeof(con->service)); + strlcpy(con->username, "", sizeof(con->username)); + strlcpy(con->password, "", sizeof(con->password)); + strlcpy(con->sock_options, "", sizeof(con->sock_options)); + strlcpy(con->address, "", sizeof(con->address)); + strlcpy(con->desthost, server, sizeof(con->desthost)); + strlcpy(con->PDomain, NTdomain, sizeof(con->PDomain)); + strlcpy(con->OSName, SMBLIB_DEFAULT_OSNAME, sizeof(con->OSName)); + strlcpy(con->LMType, SMBLIB_DEFAULT_LMTYPE, sizeof(con->LMType)); + + SMB_Get_My_Name(con -> myname, sizeof(con -> myname)); + + con -> port = 0; /* No port selected */ + + /* Get some things we need for the SMB Header */ + + con -> pid = getpid(); + con -> mid = con -> pid; /* This will do for now ... */ + con -> uid = 0; /* Until we have done a logon, no uid ... */ + con -> gid = getgid(); + + /* Now connect to the remote end, but first upper case the name of the + service we are going to call, sine some servers want it in uppercase */ + + for (i=0; i < strlen(server); i++) + called[i] = toupper(server[i]); + + called[strlen(server)] = 0; /* Make it a string */ + + for (i=0; i < strlen(con -> myname); i++) + calling[i] = toupper(con -> myname[i]); + + calling[strlen(con -> myname)] = 0; /* Make it a string */ + + if (strcmp(con -> address, "") == 0) + address = con -> desthost; + else + address = con -> address; + + con -> Trans_Connect = RFCNB_Call(called, + calling, + address, /* Protocol specific */ + con -> port); + + /* Did we get one? 
*/ + + if (con -> Trans_Connect == NULL) { + + if (Con_Handle == NULL) { + Con_Handle = NULL; + free(con); + } + SMBlib_errno = -SMBlibE_CallFailed; + return NULL; + + } + + return(con); + +} + +/* Logon to the server. That is, do a session setup if we can. We do not do */ +/* Unicode yet! */ + +int SMB_Logon_Server(SMB_Handle_Type Con_Handle, char *UserName, + char *PassWord) + +{ struct RFCNB_Pkt *pkt; + int param_len, pkt_len, pass_len; + char *p, pword[128]; + + /* First we need a packet etc ... but we need to know what protocol has */ + /* been negotiated to figure out if we can do it and what SMB format to */ + /* use ... */ + + if (Con_Handle -> protocol < SMB_P_LanMan1) { + + SMBlib_errno = SMBlibE_ProtLow; + return(SMBlibE_BAD); + + } + + strlcpy(pword, PassWord, sizeof(pword)); + if (Con_Handle -> encrypt_passwords) + { + pass_len=24; + SMBencrypt((uchar *) PassWord, (uchar *)Con_Handle -> Encrypt_Key,(uchar *)pword); + } + else + pass_len=strlen(pword); + + + /* Now build the correct structure */ + + if (Con_Handle -> protocol < SMB_P_NT1) { + + param_len = strlen(UserName) + 1 + pass_len + 1 + + strlen(Con_Handle -> PDomain) + 1 + + strlen(Con_Handle -> OSName) + 1; + + pkt_len = SMB_ssetpLM_len + param_len; + + pkt = (struct RFCNB_Pkt *)RFCNB_Alloc_Pkt(pkt_len); + + if (pkt == NULL) { + + SMBlib_errno = SMBlibE_NoSpace; + return(SMBlibE_BAD); /* Should handle the error */ + + } + + memset(SMB_Hdr(pkt), 0, SMB_ssetpLM_len); + SIVAL(SMB_Hdr(pkt), SMB_hdr_idf_offset, SMB_DEF_IDF); /* Plunk in IDF */ + *(SMB_Hdr(pkt) + SMB_hdr_com_offset) = SMBsesssetupX; + SSVAL(SMB_Hdr(pkt), SMB_hdr_pid_offset, Con_Handle -> pid); + SSVAL(SMB_Hdr(pkt), SMB_hdr_tid_offset, 0); + SSVAL(SMB_Hdr(pkt), SMB_hdr_mid_offset, Con_Handle -> mid); + SSVAL(SMB_Hdr(pkt), SMB_hdr_uid_offset, Con_Handle -> uid); + *(SMB_Hdr(pkt) + SMB_hdr_wct_offset) = 10; + *(SMB_Hdr(pkt) + SMB_hdr_axc_offset) = 0xFF; /* No extra command */ + SSVAL(SMB_Hdr(pkt), SMB_hdr_axo_offset, 0); + + SSVAL(SMB_Hdr(pkt), SMB_ssetpLM_mbs_offset, SMBLIB_MAX_XMIT); + SSVAL(SMB_Hdr(pkt), SMB_ssetpLM_mmc_offset, 2); + SSVAL(SMB_Hdr(pkt), SMB_ssetpLM_vcn_offset, Con_Handle -> pid); + SIVAL(SMB_Hdr(pkt), SMB_ssetpLM_snk_offset, 0); + SSVAL(SMB_Hdr(pkt), SMB_ssetpLM_pwl_offset, pass_len + 1); + SIVAL(SMB_Hdr(pkt), SMB_ssetpLM_res_offset, 0); + SSVAL(SMB_Hdr(pkt), SMB_ssetpLM_bcc_offset, param_len); + + /* Now copy the param strings in with the right stuff */ + + p = (char *)(SMB_Hdr(pkt) + SMB_ssetpLM_buf_offset); + + /* Copy in password, then the rest. Password has a null at end */ + + memcpy(p, pword, pass_len); + + p = p + pass_len + 1; + + strcpy(p, UserName); + p = p + strlen(UserName); + *p = 0; + + p = p + 1; + + strcpy(p, Con_Handle -> PDomain); + p = p + strlen(Con_Handle -> PDomain); + *p = 0; + p = p + 1; + + strcpy(p, Con_Handle -> OSName); + p = p + strlen(Con_Handle -> OSName); + *p = 0; + + } + else { + + /* We don't admit to UNICODE support ... 
*/ + + param_len = strlen(UserName) + 1 + pass_len + + strlen(Con_Handle -> PDomain) + 1 + + strlen(Con_Handle -> OSName) + 1 + + strlen(Con_Handle -> LMType) + 1; + + pkt_len = SMB_ssetpNTLM_len + param_len; + + pkt = (struct RFCNB_Pkt *)RFCNB_Alloc_Pkt(pkt_len); + + if (pkt == NULL) { + + SMBlib_errno = SMBlibE_NoSpace; + return(-1); /* Should handle the error */ + + } + + memset(SMB_Hdr(pkt), 0, SMB_ssetpNTLM_len); + SIVAL(SMB_Hdr(pkt), SMB_hdr_idf_offset, SMB_DEF_IDF); /* Plunk in IDF */ + *(SMB_Hdr(pkt) + SMB_hdr_com_offset) = SMBsesssetupX; + SSVAL(SMB_Hdr(pkt), SMB_hdr_pid_offset, Con_Handle -> pid); + SSVAL(SMB_Hdr(pkt), SMB_hdr_tid_offset, 0); + SSVAL(SMB_Hdr(pkt), SMB_hdr_mid_offset, Con_Handle -> mid); + SSVAL(SMB_Hdr(pkt), SMB_hdr_uid_offset, Con_Handle -> uid); + *(SMB_Hdr(pkt) + SMB_hdr_wct_offset) = 13; + *(SMB_Hdr(pkt) + SMB_hdr_axc_offset) = 0xFF; /* No extra command */ + SSVAL(SMB_Hdr(pkt), SMB_hdr_axo_offset, 0); + + SSVAL(SMB_Hdr(pkt), SMB_ssetpNTLM_mbs_offset, SMBLIB_MAX_XMIT); + SSVAL(SMB_Hdr(pkt), SMB_ssetpNTLM_mmc_offset, 0); + SSVAL(SMB_Hdr(pkt), SMB_ssetpNTLM_vcn_offset, 0); + SIVAL(SMB_Hdr(pkt), SMB_ssetpNTLM_snk_offset, 0); + SSVAL(SMB_Hdr(pkt), SMB_ssetpNTLM_cipl_offset, pass_len); + SSVAL(SMB_Hdr(pkt), SMB_ssetpNTLM_cspl_offset, 0); + SIVAL(SMB_Hdr(pkt), SMB_ssetpNTLM_res_offset, 0); + SIVAL(SMB_Hdr(pkt), SMB_ssetpNTLM_cap_offset, 0); + SSVAL(SMB_Hdr(pkt), SMB_ssetpNTLM_bcc_offset, param_len); + + /* Now copy the param strings in with the right stuff */ + + p = (char *)(SMB_Hdr(pkt) + SMB_ssetpNTLM_buf_offset); + + /* Copy in password, then the rest. Password has no null at end */ + + memcpy(p, pword, pass_len); + + p = p + pass_len; + + strcpy(p, UserName); + p = p + strlen(UserName); + *p = 0; + + p = p + 1; + + strcpy(p, Con_Handle -> PDomain); + p = p + strlen(Con_Handle -> PDomain); + *p = 0; + p = p + 1; + + strcpy(p, Con_Handle -> OSName); + p = p + strlen(Con_Handle -> OSName); + *p = 0; + p = p + 1; + + strcpy(p, Con_Handle -> LMType); + p = p + strlen(Con_Handle -> LMType); + *p = 0; + + } + + /* Now send it and get a response */ + + if (RFCNB_Send(Con_Handle -> Trans_Connect, pkt, pkt_len) < 0){ + + RFCNB_Free_Pkt(pkt); + SMBlib_errno = SMBlibE_SendFailed; + return(SMBlibE_BAD); + + } + + /* Now get the response ... */ + + if (RFCNB_Recv(Con_Handle -> Trans_Connect, pkt, pkt_len) < 0) { + + RFCNB_Free_Pkt(pkt); + SMBlib_errno = SMBlibE_RecvFailed; + return(SMBlibE_BAD); + + } + + /* Check out the response type ... */ + + if (CVAL(SMB_Hdr(pkt), SMB_hdr_rcls_offset) != SMBC_SUCCESS) { /* Process error */ + + SMBlib_SMB_Error = IVAL(SMB_Hdr(pkt), SMB_hdr_rcls_offset); + RFCNB_Free_Pkt(pkt); + SMBlib_errno = SMBlibE_Remote; + return(SMBlibE_BAD); + + } +/** @@@ mdz: check for guest login { **/ + if (SVAL(SMB_Hdr(pkt), SMB_ssetpr_act_offset) & 0x1) + { + /* do we allow guest login? NO! */ + return(SMBlibE_BAD); + + } + /** @@@ mdz: } **/ + + + /* Now pick up the UID for future reference ... */ + + Con_Handle -> uid = SVAL(SMB_Hdr(pkt), SMB_hdr_uid_offset); + RFCNB_Free_Pkt(pkt); + + return(0); + +} + + +/* Disconnect from the server, and disconnect all tree connects */ + +int SMB_Discon(SMB_Handle_Type Con_Handle, bool KeepHandle) + +{ + + /* We just disconnect the connection for now ... 
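/* Editor's illustration: the check above tests bit 0 of the "action" word
   in the SESSION SETUP response (SMB_ssetpr_act_offset), which the server
   sets when it only granted guest access.  A named helper makes the intent
   clearer; the offset value 37 is copied from smblib-priv.h and the
   little-endian read mirrors the SVAL() macro. */

static int smb_logged_in_as_guest(const unsigned char *smb_hdr)
{
    unsigned int action = smb_hdr[37] | ((unsigned int)smb_hdr[38] << 8);

    return (action & 0x1) != 0;
}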
*/ + + RFCNB_Hangup(Con_Handle -> Trans_Connect); + + if (!KeepHandle) + free(Con_Handle); + + return(0); + +} diff --git a/authprogs/smbval/smblib.h b/authprogs/smbval/smblib.h new file mode 100644 index 0000000..a93255f --- /dev/null +++ b/authprogs/smbval/smblib.h @@ -0,0 +1,50 @@ +/* UNIX SMBlib NetBIOS implementation + + Version 1.0 + SMBlib Defines + + Copyright (C) Richard Sharpe 1996 + +*/ + +/* + This program is free software; you can redistribute it and/or modify + it under the terms of the GNU General Public License as published by + the Free Software Foundation; either version 2 of the License, or + (at your option) any later version. + + This program is distributed in the hope that it will be useful, + but WITHOUT ANY WARRANTY; without even the implied warranty of + MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the + GNU General Public License for more details. + + You should have received a copy of the GNU General Public License + along with this program; if not, write to the Free Software + Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA. +*/ + +#include "smblib-common.h" + +/* Just define all the entry points */ + +/* Initialize the library. */ + +int SMB_Init(void); + +/* Connect to a server, but do not do a tree con etc ... */ + +void *SMB_Connect_Server(void *Con, char *server, char *NTdomain); + +/* Negotiate a protocol */ + +int SMB_Negotiate(void *Con_Handle, char *Prots[]); + +/* Disconnect from server. Has flag to specify whether or not we keep the */ +/* handle. */ + +int SMB_Discon(void *Con, bool KeepHandle); + +/* Log on to a server. */ + +int SMB_Logon_Server(SMB_Handle_Type Con_Handle, char *UserName, + char *PassWord); diff --git a/authprogs/smbval/valid.c b/authprogs/smbval/valid.c new file mode 100644 index 0000000..425b14f --- /dev/null +++ b/authprogs/smbval/valid.c @@ -0,0 +1,48 @@ +#include +#include +#include +#include "config.h" +#include "smblib-priv.h" +#include "smblib.h" +#include "valid.h" + +int Valid_User(char *USERNAME,char *PASSWORD,char *SERVER,char *BACKUP, char *DOMAIN) +{ + char *SMB_Prots[] = {"PC NETWORK PROGRAM 1.0", + "MICROSOFT NETWORKS 1.03", + "MICROSOFT NETWORKS 3.0", + "LANMAN1.0", + "LM1.2X002", + "Samba", + "NT LM 0.12", + "NT LANMAN 1.0", + NULL}; + SMB_Handle_Type con; + + SMB_Init(); + con = SMB_Connect_Server(NULL, SERVER, DOMAIN); + if (con == NULL) { /* Error ... 
*/ + con = SMB_Connect_Server(NULL, BACKUP, DOMAIN); + if (con == NULL) { + return(NTV_SERVER_ERROR); + } + } + if (SMB_Negotiate(con, SMB_Prots) < 0) { /* An error */ + SMB_Discon(con,0); + return(NTV_PROTOCOL_ERROR); + } + /* Test for a server in share level mode do not authenticate against it */ + if (con -> Security == 0) + { + SMB_Discon(con,0); + return(NTV_PROTOCOL_ERROR); + } + + if (SMB_Logon_Server(con, USERNAME, PASSWORD) < 0) { + SMB_Discon(con,0); + return(NTV_LOGON_ERROR); + } + + SMB_Discon(con,0); + return(NTV_NO_ERROR); +} diff --git a/authprogs/smbval/valid.h b/authprogs/smbval/valid.h new file mode 100644 index 0000000..00d068b --- /dev/null +++ b/authprogs/smbval/valid.h @@ -0,0 +1,12 @@ +#ifndef _VALID_H_ +#define _VALID_H_ +/* SMB User verification function */ + +#define NTV_NO_ERROR 0 +#define NTV_SERVER_ERROR 1 +#define NTV_PROTOCOL_ERROR 2 +#define NTV_LOGON_ERROR 3 + +int Valid_User(char *USERNAME,char *PASSWORD,char *SERVER, char *BACKUP, char *DOMAIN); + +#endif diff --git a/backends/Makefile b/backends/Makefile new file mode 100644 index 0000000..f419e3f --- /dev/null +++ b/backends/Makefile @@ -0,0 +1,183 @@ +## $Id: Makefile 7734 2008-04-06 09:25:56Z iulius $ + +include ../Makefile.global + +top = .. +CFLAGS = $(GCFLAGS) + +ALL = actmerge actsync actsyncd archive batcher buffchan \ + cvtbatch filechan inndf innxmit innxbatch mod-active \ + news2mail ninpaths nntpget nntpsend overchan send-ihave \ + send-nntp send-uucp sendinpaths sendxbatches shlock \ + shrinkfile + +MAN = ../doc/man/send-uucp.8 + +SOURCES = actsync.c archive.c batcher.c buffchan.c cvtbatch.c \ + filechan.c inndf.c innxbatch.c innxmit.c map.c ninpaths.c \ + nntpget.c overchan.c shlock.c shrinkfile.c + +all: $(ALL) + +man: $(MAN) + +warnings: + $(MAKE) COPT='$(WARNINGS)' all + +install: all + for F in actmerge actsyncd news2mail nntpsend send-ihave send-nntp \ + send-uucp sendinpaths sendxbatches ; do \ + $(CP_XPUB) $$F $D$(PATHBIN)/$$F ; \ + done + $(CP_XPRI) mod-active $D$(PATHBIN)/mod-active + $(LI_XPRI) overchan $D$(PATHBIN)/overchan + for F in actsync archive batcher buffchan cvtbatch filechan inndf \ + innxbatch innxmit ninpaths nntpget shlock shrinkfile ; do \ + $(LI_XPUB) $$F $D$(PATHBIN)/$$F ; \ + done + +clean: + rm -f *.o $(ALL) + rm -rf .libs + +clobber distclean: clean + rm -f tags + +tags ctags: $(SOURCES) + $(CTAGS) $(SOURCES) + +profiled: + $(MAKEPROFILING) all + +## Compilation rules. + +BOTH = $(LIBSTORAGE) $(LIBHIST) $(LIBSTORAGE) $(LIBINN) + +LINK = $(LIBLD) $(LDFLAGS) -o $@ +INNLIBS = $(LIBINN) $(LIBS) +STORELIBS = $(BOTH) $(EXTSTORAGELIBS) $(LIBS) + +FIX = $(FIXSCRIPT) + +$(FIXSCRIPT): + @echo Run configure before running make. See INSTALL for details. 
+ @exit 1 + +actsync: actsync.o $(LIBINN) ; $(LINK) actsync.o $(INNLIBS) +archive: archive.o $(BOTH) ; $(LINK) archive.o $(STORELIBS) +batcher: batcher.o $(BOTH) ; $(LINK) batcher.o $(STORELIBS) +cvtbatch: cvtbatch.o $(BOTH) ; $(LINK) cvtbatch.o $(STORELIBS) +inndf: inndf.o $(BOTH) ; $(LINK) inndf.o $(STORELIBS) +innxbatch: innxbatch.o $(LIBINN) ; $(LINK) innxbatch.o $(INNLIBS) +innxmit: innxmit.o $(BOTH) ; $(LINK) innxmit.o $(STORELIBS) +ninpaths: ninpaths.o ; $(LINK) ninpaths.o +nntpget: nntpget.o $(BOTH) ; $(LINK) nntpget.o $(STORELIBS) +overchan: overchan.o $(BOTH) ; $(LINK) overchan.o $(STORELIBS) +shlock: shlock.o $(LIBINN) ; $(LINK) shlock.o $(INNLIBS) +shrinkfile: shrinkfile.o $(LIBINN) ; $(LINK) shrinkfile.o $(INNLIBS) + +buffchan: buffchan.o map.o $(LIBINN) + $(LINK) buffchan.o map.o $(LIBINN) $(LIBS) + +filechan: filechan.o map.o $(LIBINN) + $(LINK) filechan.o map.o $(LIBINN) $(LIBS) + +actmerge: actmerge.in $(FIX) ; $(FIX) actmerge.in +actsyncd: actsyncd.in $(FIX) ; $(FIX) actsyncd.in +mod-active: mod-active.in $(FIX) ; $(FIX) mod-active.in +news2mail: news2mail.in $(FIX) ; $(FIX) news2mail.in +nntpsend: nntpsend.in $(FIX) ; $(FIX) nntpsend.in +send-ihave: send-ihave.in $(FIX) ; $(FIX) send-ihave.in +send-nntp: send-nntp.in $(FIX) ; $(FIX) send-nntp.in +send-uucp: send-uucp.in $(FIX) ; $(FIX) send-uucp.in +sendinpaths: sendinpaths.in $(FIX) ; $(FIX) sendinpaths.in +sendxbatches: sendxbatches.in $(FIX) ; $(FIX) sendxbatches.in + +$(LIBINN): ; (cd ../lib ; $(MAKE)) +$(LIBSTORAGE): ; (cd ../storage ; $(MAKE)) +$(LIBHIST): ; (cd ../history ; $(MAKE)) + +../doc/man/send-uucp.8: send-uucp + $(POD2MAN) -s 8 $? > $@ + + +## Dependencies. Default list, below, is probably good enough. + +depend: Makefile $(SOURCES) + $(MAKEDEPEND) '$(CFLAGS)' $(SOURCES) + +# DO NOT DELETE THIS LINE -- make depend depends on it. 
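+# (The dependency lines below this marker are normally regenerated by
+# `make depend', which runs $(MAKEDEPEND) with the current $(CFLAGS);
+# hand edits made below are therefore likely to be overwritten.)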
+actsync.o: actsync.c ../include/config.h ../include/inn/defines.h \ + ../include/inn/system.h ../include/clibrary.h ../include/config.h \ + ../include/portable/wait.h ../include/config.h ../include/inn/innconf.h \ + ../include/inn/defines.h ../include/inn/messages.h ../include/inn/qio.h \ + ../include/libinn.h ../include/paths.h +archive.o: archive.c ../include/config.h ../include/inn/defines.h \ + ../include/inn/system.h ../include/clibrary.h ../include/config.h \ + ../include/inn/innconf.h ../include/inn/defines.h \ + ../include/inn/messages.h ../include/inn/wire.h ../include/libinn.h \ + ../include/paths.h ../include/storage.h +batcher.o: batcher.c ../include/config.h ../include/inn/defines.h \ + ../include/inn/system.h ../include/clibrary.h ../include/config.h \ + ../include/inn/innconf.h ../include/inn/defines.h \ + ../include/inn/messages.h ../include/inn/timer.h ../include/libinn.h \ + ../include/paths.h ../include/storage.h +buffchan.o: buffchan.c ../include/config.h ../include/inn/defines.h \ + ../include/inn/system.h ../include/clibrary.h ../include/config.h \ + ../include/inn/innconf.h ../include/inn/defines.h \ + ../include/inn/messages.h ../include/inn/qio.h ../include/libinn.h \ + ../include/paths.h map.h +cvtbatch.o: cvtbatch.c ../include/config.h ../include/inn/defines.h \ + ../include/inn/system.h ../include/clibrary.h ../include/config.h \ + ../include/inn/innconf.h ../include/inn/defines.h \ + ../include/inn/messages.h ../include/inn/qio.h ../include/inn/wire.h \ + ../include/libinn.h ../include/paths.h ../include/storage.h +filechan.o: filechan.c ../include/config.h ../include/inn/defines.h \ + ../include/inn/system.h ../include/clibrary.h ../include/config.h \ + ../include/inn/innconf.h ../include/inn/defines.h \ + ../include/inn/messages.h ../include/libinn.h ../include/paths.h map.h +inndf.o: inndf.c ../include/config.h ../include/inn/defines.h \ + ../include/inn/system.h ../include/clibrary.h ../include/config.h \ + ../include/inn/innconf.h ../include/inn/defines.h \ + ../include/inn/messages.h ../include/inn/qio.h ../include/libinn.h \ + ../include/ov.h ../include/storage.h ../include/inn/history.h \ + ../include/paths.h +innxbatch.o: innxbatch.c ../include/config.h ../include/inn/defines.h \ + ../include/inn/system.h ../include/clibrary.h ../include/config.h \ + ../include/portable/socket.h ../include/config.h \ + ../include/portable/time.h ../include/inn/innconf.h \ + ../include/inn/defines.h ../include/inn/messages.h \ + ../include/inn/timer.h ../include/libinn.h ../include/nntp.h +innxmit.o: innxmit.c ../include/config.h ../include/inn/defines.h \ + ../include/inn/system.h ../include/clibrary.h ../include/config.h \ + ../include/portable/socket.h ../include/config.h \ + ../include/portable/time.h ../include/inn/history.h \ + ../include/inn/defines.h ../include/inn/innconf.h \ + ../include/inn/messages.h ../include/inn/qio.h ../include/inn/timer.h \ + ../include/inn/wire.h ../include/libinn.h ../include/nntp.h \ + ../include/paths.h ../include/storage.h +map.o: map.c ../include/config.h ../include/inn/defines.h \ + ../include/inn/system.h ../include/clibrary.h ../include/config.h \ + ../include/libinn.h ../include/paths.h map.h +ninpaths.o: ninpaths.c ../include/config.h ../include/inn/defines.h \ + ../include/inn/system.h ../include/clibrary.h ../include/config.h +nntpget.o: nntpget.c ../include/config.h ../include/inn/defines.h \ + ../include/inn/system.h ../include/clibrary.h ../include/config.h \ + ../include/portable/socket.h ../include/config.h \ + 
../include/portable/time.h ../include/inn/history.h \ + ../include/inn/defines.h ../include/inn/innconf.h \ + ../include/inn/messages.h ../include/libinn.h ../include/nntp.h \ + ../include/paths.h +overchan.o: overchan.c ../include/config.h ../include/inn/defines.h \ + ../include/inn/system.h ../include/clibrary.h ../include/config.h \ + ../include/portable/time.h ../include/config.h ../include/inn/innconf.h \ + ../include/inn/defines.h ../include/inn/messages.h ../include/inn/qio.h \ + ../include/libinn.h ../include/ov.h ../include/storage.h \ + ../include/inn/history.h ../include/paths.h +shlock.o: shlock.c ../include/config.h ../include/inn/defines.h \ + ../include/inn/system.h ../include/clibrary.h ../include/config.h \ + ../include/inn/messages.h ../include/inn/defines.h +shrinkfile.o: shrinkfile.c ../include/config.h ../include/inn/defines.h \ + ../include/inn/system.h ../include/clibrary.h ../include/config.h \ + ../include/inn/innconf.h ../include/inn/defines.h \ + ../include/inn/messages.h ../include/libinn.h diff --git a/backends/actmerge.in b/backends/actmerge.in new file mode 100644 index 0000000..237a332 --- /dev/null +++ b/backends/actmerge.in @@ -0,0 +1,216 @@ +#! /bin/sh +# fixscript will replace this line with code to load innshellvars + +# @(#) $Id: actmerge.in 2674 1999-11-15 06:28:29Z rra $ +# @(#) Under RCS control in /usr/local/news/src/inn/local/RCS/actmerge.sh,v +# +# actmerge - merge two active files +# +# usage: +# actmerge [-s] ign1 ign2 host1 host2 +# +# -s - write status on stderr even if no fatal error +# ign1 - ignore file for host1 +# ign2 - ignore file for host2 +# host1 - 1st active file or host +# host2 - 2nd active file or host +# +# The merge of two active files are sent to stdout. The status is +# written to stderr. + +# By: Landon Curt Noll chongo@toad.com (chongo was here /\../\) +# +# Copyright (c) Landon Curt Noll, 1996. +# All rights reserved. +# +# Permission to use and modify is hereby granted so long as this +# notice remains. Use at your own risk. No warranty is implied. + +# preset vars +# + +# Our lock file +LOCK=${LOCKS}/LOCK.actmerge +# where actsync is located +ACTSYNC=${PATHBIN}/actsync +# exit value of actsync if unable to get an active file +NOSYNC=127 +# args used by actsync a fetch of an active file +FETCH="-b 0 -d 0 -g 0 -o aK -p 0 -q 12 -s 0 -t 0 -v 2" +# args used to merge two active files +MERGE="-b 0 -d 0 -g 0 -m -o aK -p 0 -q 12 -s 0 -t 0 -v 3" +# unless -q +QUIET=true + +# parse args +# +if [ $# -gt 1 ]; then + if [ X"-s" = X"$1" ]; then + QUIET= + shift + fi +fi +if [ $# -ne 4 ]; then + echo "usage: $0 ign1 ign2 host1 host2" 1>&2 + exit 1 +fi +ign1="$1" +if [ ! -s "$ign1" ]; then + echo "$0: host1 ignore file not found or empty: $ign1" 1>&2 + exit 2 +fi +ign2="$2" +if [ ! -s "$ign2" ]; then + echo "$0: host2 ignore file not found or empty: $ign2" 1>&2 + exit 3 +fi +host1="$3" +host2="$4" + + +# Lock out others +# +trap 'rm -f ${LOCK}; exit 1' 0 1 2 3 15 +shlock -p $$ -f ${LOCK} || { + echo "$0: Locked by `cat ${LOCK}`" 1>&2 + exit 4 +} + +# setup +# +tmp="$TMPDIR/.merge$$" +act1="$TMPDIR/.act1$$" +act2="$TMPDIR/.act2$$" +trap "rm -f $tmp ${LOCK} $act1 $act2; exit" 0 1 2 3 15 +rm -f "$tmp" +touch "$tmp" +chmod 0600 "$tmp" +rm -f "$act1" +touch "$act1" +chmod 0600 "$act1" +rm -f "$act2" +touch "$act2" +chmod 0600 "$act2" + +# try to fetch the first active file +# +echo "=-= fetching $host1" >>$tmp +eval "$ACTSYNC -i $ign1 $FETCH /dev/null $host1 > $act1 2>>$tmp" +status=$? 
+if [ "$status" -ne 0 ]; then + + # We failed on our first try, so we will knock 3 times after + # waiting 5 minutes. + # + for loop in 1 2 3; do + + # wait 5 minutes + sleep 300 + + # try #1 + eval "$ACTSYNC -i $ign1 $FETCH /dev/null $host1 > $act1 2>>$tmp" + status=$? + if [ "$status" -eq "$NOSYNC" ]; then + break; + fi + + # try #2 + eval "$ACTSYNC -i $ign1 $FETCH /dev/null $host1 > $act1 2>>$tmp" + status=$? + if [ "$status" -eq "$NOSYNC" ]; then + break; + fi + + # try #3 + eval "$ACTSYNC -i $ign1 $FETCH /dev/null $host1 > $act1 2>>$tmp" + status=$? + if [ "$status" -eq "$NOSYNC" ]; then + break; + fi + done + + # give up + # + if [ "$status" -ne 0 ]; then + echo "=-= `date` merge $host1 $host2 exit $status" 1>&2 + sed -e 's/^/ /' < "$tmp" 1>&2 + exit "$status" + fi +fi +if [ ! -s "$act1" ]; then + echo "$0: host1 active file not found or empty: $act1" 1>&2 + exit 5 +fi + +# try to fetch the second active file +# +echo "=-= fetching $host2" >>$tmp +eval "$ACTSYNC -i $ign2 $FETCH /dev/null $host2 > $act2 2>>$tmp" +status=$? +if [ "$status" -ne 0 ]; then + + # We failed on our first try, so we will knock 3 times after + # waiting 5 minutes. + # + for loop in 1 2 3; do + + # wait 5 minutes + sleep 300 + + # try #1 + eval "$ACTSYNC -i $ign2 $FETCH /dev/null $host2 > $act2 2>>$tmp" + status=$? + if [ "$status" -eq "$NOSYNC" ]; then + break; + fi + + # try #2 + eval "$ACTSYNC -i $ign2 $FETCH /dev/null $host2 > $act2 2>>$tmp" + status=$? + if [ "$status" -eq "$NOSYNC" ]; then + break; + fi + + # try #3 + eval "$ACTSYNC -i $ign2 $FETCH /dev/null $host2 > $act2 2>>$tmp" + status=$? + if [ "$status" -eq "$NOSYNC" ]; then + break; + fi + done + + # give up + # + if [ "$status" -ne 0 ]; then + echo "=-= `date` merge $host1 $host2 exit $status" 1>&2 + sed -e 's/^/ /' < "$tmp" 1>&2 + exit "$status" + fi +fi +if [ ! -s "$act2" ]; then + echo "$0: host2 active file not found or empty: $act2" 1>&2 + exit 6 +fi + +# merge the 2 active files to stdout +# +echo "=-= merging $host1 and $host2" >>$tmp +eval "$ACTSYNC $MERGE $act1 $act2" 2>>$tmp +status=$?
+if [ "$status" -ne 0 ]; then + echo "=-= `date` merge $host1 $host2 exit $status" 1>&2 + sed -e 's/^/ /' < "$tmp" 1>&2 + exit "$status" +fi + +# if not -q, send status to stderr +# +if [ -z "$QUIET" ]; then + echo "=-= `date` merge $host1 $host2 successful" 1>&2 + sed -e 's/^/ /' < "$tmp" 1>&2 +fi + +# all done +# +rm -f "${LOCK}" +exit 0 diff --git a/backends/actsync.c b/backends/actsync.c new file mode 100644 index 0000000..41c35e0 --- /dev/null +++ b/backends/actsync.c @@ -0,0 +1,2766 @@ +/* @(#) $Id: actsync.c 6372 2003-05-31 19:48:28Z rra $ */ +/* @(#) Under RCS control in /usr/local/news/src/inn/local/RCS/actsync.c,v */ +/* + * actsync - sync or merge two active files + * + * usage: + * actsync [-b hostid][-d hostid][-g max][-i ignore_file][-I][-k][-l hostid] + * [-m][-n name][-o fmt][-p %][-q hostid][-s size] + * [-t hostid][-T][-v verbose_lvl][-z sec] + * [host1] host2 + * + * -A use authentication to server + * -b hostid ignore *.bork.bork.bork groups from: (def: -b 0) + * 0 from neither host + * 1 from host1 + * 2 from host2 + * 12 from host1 and host2 + * 21 from host1 and host2 + * -d hostid ignore groups with all numeric components (def: -d 0) + * -g max ignore group >max levels (0=dont ignore) (def: -g 0) + * -i ignore_file file with list/types of groups to ignore (def: no file) + * -I hostid ignore_file applies only to hostid (def: -I 12) + * -k keep host1 groups with errors (def: remove) + * -l hostid flag =group problems as errors (def: -l 12) + * -m merge, keep group not on host2 (def: sync) + * -n name name given to ctlinnd newgroup commands (def: actsync) + * -o fmt type of output: (def: -o c) + * a output groups in active format + * a1 like 'a', but output ignored non-err host1 grps + * ak like 'a', keep host2 hi/low values on new groups + * aK like 'a', use host2 hi/low values always + * c output in ctlinnd change commands + * x no output, safely exec ctlinnd commands + * xi no output, safely exec commands interactively + * -p % min % host1 lines unchanged allowed (def: -p 96) + * -q hostid silence errors from a host (see -b) (def: -q 0) + * -s size ignore names longer than size (0=no lim) (def: -s 0) + * -t hostid ignore bad top level groups from:(see -b) (def: -t 2) + * -T no new hierarchies (def: allow) + * -v verbose_lvl verbosity level (def: -v 0) + * 0 no debug or status reports + * 1 summary if work done + * 2 summary & actions (if exec output) only if done + * 3 summary & actions (if exec output) + * 4 debug output plus all -v 3 messages + * -z sec sleep sec seconds per exec if -o x (def: -z 4) + * host1 host to be changed (def: local server) + * host2 reference host used in merge + */ +/* + * By: Landon Curt Noll chongo@toad.com (chongo was here /\../\) + * + * Copyright (c) Landon Curt Noll, 1996. + * All rights reserved. + * + * Permission to use and modify is hereby granted so long as this + * notice remains. Use at your own risk. No warranty is implied. 
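+ *
+ * For example (the host name below is purely illustrative), the local
+ * server's active file can be brought in sync with a reference host by
+ * safely executing ctlinnd commands:
+ *
+ *	actsync -o x -v 1 news.example.com
+ *
+ * or the same comparison can be done without changing anything, writing
+ * the ctlinnd change commands to stdout instead:
+ *
+ *	actsync -o c -v 1 news.example.com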
+ */ + +#include "config.h" +#include "clibrary.h" +#include "portable/wait.h" +#include +#include +#include +#include +#include +#include +#include + +#include "inn/innconf.h" +#include "inn/messages.h" +#include "inn/qio.h" +#include "libinn.h" +#include "paths.h" + +static const char usage[] = "\ +Usage: actsync [-A][-b hostid][-d hostid][-i ignore_file][-I hostid][-k]\n\ + [-l hostid][-m][-n name][-o fmt][-p min_%_unchg][-q hostid]\n\ + [-s size][-t hostid][-T][-v verbose_lvl][-z sec]\n\ + [host1] host2\n\ +\n\ + -A use authentication to server\n\ + -b hostid ignore *.bork.bork.bork groups from: (def: -b 0)\n\ + 0 from neither host\n\ + 1 from host1\n\ + 2 from host2\n\ + 12 from host1 and host2\n\ + 21 from host1 and host2\n\ + -d hostid ignore grps with all numeric components (def: -d 0)\n\ + -g max ignore group >max levels (0=don't) (def: -g 0)\n\ + -i file file with groups to ignore (def: no file)\n\ + -I hostid ignore_file applies only to hostid (def: -I 12)\n\ + -k keep host1 groups with errors (def: remove)\n\ + -l hostid flag =group problems as errors (def: -l 12)\n\ + -m merge, keep group not on host2 (def: sync)\n\ + -n name name given to ctlinnd newgroup cmds (def: actsync)\n\ + -o fmt type of output: (def: -o c)\n\ + a output groups in active format\n\ + a1 like 'a', but output ignored non-err host1 grps\n\ + ak like 'a', keep host2 hi/low values on new groups\n\ + aK like 'a', use host2 hi/low values always\n\ + c output in ctlinnd change commands\n\ + x no output, safely exec ctlinnd commands\n\ + xi no output, safely exec commands interactively\n\ + -p % min % host1 lines unchanged allowed (def: -p 96)\n\ + -q hostid silence errors from a host (see -b) (def: -q 0)\n\ + -s size ignore names > than size (0=no lim) (def: -s 0)\n\ + -t hostid ignore bad top level grps from: (see -b)(def: -t 2)\n\ + -T no new hierarchies (def: allow)\n\ + -v level verbosity level (def: -v 0)\n\ + 0 no debug or status reports\n\ + 1 summary if work done\n\ + 2 summary & actions (if exec output) only if done\n\ + 3 summary & actions (if exec output)\n\ + 4 debug output plus all -v 3 messages\n\ + -z sec sleep sec seconds per exec if -o x (def: -z 4)\n\ +\n\ + host1 host to be changed (def: local server)\n\ + host2 reference host used in merge\n"; + + +/* + * pat - internal ignore/check pattern + * + * A pattern, derived from an ignore file, will determine if a group + * is will be checked if it is on both hosts or ignored altogether. + * + * The type related to the 4th field of an active file. Types may + * currently be one of [ymjnx=]. If '=' is one of the types, an + * optional equivalence pattern may be given in the 'epat' element. + * + * For example, to ignore "foo.bar.*", if it is junked or equated to + * a group of the form "alt.*.foo.bar.*": + * + * x.pat = "foo.bar.*"; + * x.type = "j="; + * x.epat = "alt.*.foo.bar.*"; + * x.ignore = 1; + * + * To further check "foo.bar.mod" if it is moderated: + * + * x.pat = "foo.bar.mod"; + * x.type = "m"; + * x.epat = NULL; + * x.ignore = 0; + * + * The 'i' value means ignore, 'c' value means 'compare'. The last pattern + * that matches a group determines the fate of the group. By default all + * groups are included. 
+ */ +struct pat { + char *pat; /* newsgroup pattern */ + int type_match; /* 1 => match only if group type matches */ + int y_type; /* 1 => match if a 'y' type group */ + int m_type; /* 1 => match if a 'm' type group */ + int n_type; /* 1 => match if a 'n' type group */ + int j_type; /* 1 => match if a 'j' type group */ + int x_type; /* 1 => match if a 'x' type group */ + int eq_type; /* 1 => match if a 'eq' type group */ + char *epat; /* =pattern to match, if non-NULL and = is in type */ + int ignore; /* 0 => check matching group, 1 => ignore it */ +}; + +/* internal representation of an active line */ +struct grp { + int ignore; /* ignore reason, 0 => not ignore (see below) */ + int hostid; /* HOSTID this group is from */ + int linenum; /* >0 => active line number, <=0 => not a line */ + int output; /* 1 => output to produce the merged active file */ + int remove; /* 1 => remove this group */ + char *name; /* newsgroup name */ + char *hi; /* high article string */ + char *low; /* low article string */ + char *type; /* newsgroup type string */ + char *outhi; /* output high article string */ + char *outlow; /* output low article string */ + char *outtype; /* output newsgroup type string */ +}; + +/* structure used in the process of looking for =group type problems */ +struct eqgrp { + int skip; /* 1 => skip this entry */ + struct grp *g; /* =group that is being examined */ + char *eq; /* current equivalence name */ +}; + +/* + * These ignore reasons are listed in order severity; from mild to severe. + */ +#define NOT_IGNORED 0x0000 /* newsgroup has not been ignored */ +#define CHECK_IGNORE 0x0001 /* ignore file ignores this entry */ +#define CHECK_TYPE 0x0002 /* group type is ignored */ +#define CHECK_BORK 0x0004 /* group is a *.bork.bork.bork group */ +#define CHECK_HIER 0x0008 /* -T && new group's hierarchy does not exist */ +#define ERROR_LONGLOOP 0x0010 /* =name refers to long =grp chain or cycle */ +#define ERROR_EQLOOP 0x0020 /* =name refers to itself in some way */ +#define ERROR_NONEQ 0x0040 /* =name does not refer to a valid group */ +#define ERROR_DUP 0x0080 /* newsgroup is a duplicate of another */ +#define ERROR_EQNAME 0x0100 /* =name is a bad group name */ +#define ERROR_BADTYPE 0x0200 /* newsgroup type is invalid */ +#define ERROR_BADNAME 0x0400 /* newsgroup name is invalid */ +#define ERROR_FORMAT 0x0800 /* entry line is malformed */ + +#define IS_IGNORE(ign) ((ign) & (CHECK_IGNORE|CHECK_TYPE|CHECK_BORK|CHECK_HIER)) +#define IS_ERROR(ign) ((ign) & ~(CHECK_IGNORE|CHECK_TYPE|CHECK_BORK|CHECK_HIER)) + +#define NOHOST 0 /* neither host1 nor host2 */ +#define HOSTID1 1 /* entry from the first host */ +#define HOSTID2 2 /* entry from the second host */ + +#define CHUNK 5000 /* number of elements to alloc at a time */ + +#define TYPES "ymjnx=" /* group types (1st char of 4th active fld) */ +#define TYPECNT (sizeof(TYPES)-1) + +#define DEF_HI "0000000000" /* default hi string value for new groups */ +#define DEF_LOW "0000000001" /* default low string value for new groups */ +#define WATER_LEN 10 /* string length of hi/low water mark */ + +#define DEF_NAME "actsync" /* default name to use for ctlinnd newgroup */ + +#define MIN_UNCHG (double)96.0 /* min % of host1 lines unchanged allowed */ + +#define DEV_NULL "/dev/null" /* path to the bit bucket */ +#define CTLINND_NAME "ctlinnd" /* basename of ctlinnd command */ +#define CTLINND_TIME_OUT "-t30" /* seconds to wait before timeout */ + +#define READ_SIDE 0 /* read side of a pipe */ +#define WRITE_SIDE 1 /* write side of a pipe */ + 
+#define EQ_LOOP 16 /* give up if =eq loop/chain is this long */ +#define NOT_REACHED 127 /* exit value if unable to get active files */ + +#define NEWGRP_EMPTY 0 /* no new group dir was found */ +#define NEWGRP_NOCHG 1 /* new group dir found but no hi/low change */ +#define NEWGRP_CHG 2 /* new group dir found but no hi/low change */ + +/* -b macros */ +#define BORK_CHECK(hostid) \ + ((hostid == HOSTID1 && bork_host1_flag) || \ + (hostid == HOSTID2 && bork_host2_flag)) + +/* -d macros */ +#define NUM_CHECK(hostid) \ + ((hostid == HOSTID1 && num_host1_flag) || \ + (hostid == HOSTID2 && num_host2_flag)) + +/* -t macros */ +#define TOP_CHECK(hostid) \ + ((hostid == HOSTID1 && t_host1_flag) || \ + (hostid == HOSTID2 && t_host2_flag)) + +/* -o output types */ +#define OUTPUT_ACTIVE 1 /* output in active file format */ +#define OUTPUT_CTLINND 2 /* output in ctlinnd change commands */ +#define OUTPUT_EXEC 3 /* no output, safely exec commands */ +#define OUTPUT_IEXEC 4 /* no output, exec commands interactively */ + +/* -q macros */ +#define QUIET(hostid) \ + ((hostid == HOSTID1 && quiet_host1) || (hostid == HOSTID2 && quiet_host2)) + +/* -v verbosity level */ +#define VER_MIN 0 /* minimum -v level */ +#define VER_NONE 0 /* no -v output */ +#define VER_SUMM_IF_WORK 1 /* output summary if actions were performed */ +#define VER_REPT_IF_WORK 2 /* output summary & actions only if performed */ +#define VER_REPORT 3 /* output summary & actions performed */ +#define VER_FULL 4 /* output all summary, actins and debug */ +#define VER_MAX 4 /* maximum -v level */ +#define D_IF_SUMM (v_flag >= VER_SUMM_IF_WORK) /* true => give summary always */ +#define D_REPORT (v_flag >= VER_REPT_IF_WORK) /* true => give reports */ +#define D_BUG (v_flag == VER_FULL) /* true => debug processing */ +#define D_SUMMARY (v_flag >= VER_REPORT) /* true => give summary always */ + +/* flag and arg related defaults */ +int bork_host1_flag = 0; /* 1 => -b 1 or -b 12 or -b 21 given */ +int bork_host2_flag = 0; /* 1 => -b 2 or -b 12 or -b 21 given */ +int num_host1_flag = 0; /* 1 => -d 1 or -d 12 or -d 21 given */ +int num_host2_flag = 0; /* 1 => -d 2 or -d 12 or -d 21 given */ +char *ign_file = NULL; /* default ignore file */ +int ign_host1_flag = 1; /* 1 => -i ign_file applies to host1 */ +int ign_host2_flag = 1; /* 1 => -i ign_file applies to host2 */ +int g_flag = 0; /* ignore grps deeper than > g_flag, 0=>dont */ +int k_flag = 0; /* 1 => -k given */ +int l_host1_flag = HOSTID1; /* HOSTID1 => host1 =group error detection */ +int l_host2_flag = HOSTID2; /* HOSTID2 => host2 =group error detection */ +int m_flag = 0; /* 1 => merge active files, don't sync */ +const char *new_name = DEF_NAME; /* ctlinnd newgroup name */ +int o_flag = OUTPUT_CTLINND; /* default output type */ +double p_flag = MIN_UNCHG; /* min % host1 lines allowed to be unchanged */ +int host1_errs = 0; /* errors found in host1 active file */ +int host2_errs = 0; /* errors found in host2 active file */ +int quiet_host1 = 0; /* 1 => -q 1 or -q 12 or -q 21 given */ +int quiet_host2 = 0; /* 1 => -q 2 or -q 12 or -q 21 given */ +int s_flag = 0; /* max group size (length), 0 => do not check */ +int t_host1_flag = 0; /* 1 => -t 1 or -t 12 or -t 21 given */ +int t_host2_flag = 1; /* 1 => -t 2 or -d 12 or -t 21 given */ +int no_new_hier = 0; /* 1 => -T; no new hierarchies */ +int host2_hilow_newgrp = 0; /* 1 => use host2 hi/low on new groups */ +int host2_hilow_all = 0; /* 1 => use host2 hi/low on all groups */ +int host1_ign_print = 0; /* 1 => print host1 ignored groups too 
*/ +int v_flag = 0; /* default verbosity level */ +int z_flag = 4; /* sleep z_flag sec per exec if -o x */ +int A_flag = 0; + +/* forward declarations */ +static struct grp *get_active(); /* get an active file from a remote host */ +static int bad_grpname(); /* test if string is a valid group name */ +static struct pat *get_ignore(); /* read in an ignore file */ +static void ignore(); /* ignore newsgroups given an ignore list */ +static int merge_cmp(); /* qsort compare for active file merge */ +static void merge_grps(); /* merge groups from active files */ +static int active_cmp(); /* qsort compare for active file output */ +static void output_grps(); /* output the merged groups */ +static void process_args(); /* process command line arguments */ +static void error_mark(); /* mark for removal, error grps from host */ +static int eq_merge_cmp(); /* qsort compare for =type grp processing */ +static int mark_eq_probs(); /* mark =type problems from a host */ +static int exec_cmd(); /* exec a ctlinnd command */ +static int new_top_hier(); /* see if we have a new top level */ + +int +main(argc, argv) + int argc; /* arg count */ + char *argv[]; /* the args */ +{ + struct grp *grp; /* struct grp array for host1 & host2 */ + struct pat *ignor; /* ignore list from ignore file */ + int grplen; /* length of host1/host2 group array */ + int iglen; /* length of ignore list */ + char *host1; /* host to change */ + char *host2; /* comparison host */ + + /* First thing, set up our identity. */ + message_program_name = "actsync"; + + /* Read in default info from inn.conf. */ + if (!innconf_read(NULL)) + exit(1); + process_args(argc, argv, &host1, &host2); + + /* obtain the active files */ + grp = get_active(host1, HOSTID1, &grplen, NULL, &host1_errs); + grp = get_active(host2, HOSTID2, &grplen, grp, &host2_errs); + + /* ignore groups from both active files, if -i */ + if (ign_file != NULL) { + + /* read in the ignore file */ + ignor = get_ignore(ign_file, &iglen); + + /* ignore groups */ + ignore(grp, grplen, ignor, iglen); + } + + /* compare groups from both hosts */ + merge_grps(grp, grplen, host1, host2); + + /* mark for removal, error groups from host1 if -e */ + if (! 
k_flag) { + + /* mark error groups for removal */ + error_mark(grp, grplen, HOSTID1); + } + + /* output result of merge */ + output_grps(grp, grplen); + + /* all done */ + exit(0); +} + +/* + * process_args - process the command line arguments + * + * given: + * argc arg count + * argv the args + * host1 name of first host (may be 2nd if -R) + * host2 name of second host2 *may be 1st if -R) + */ +static void +process_args(argc, argv, host1, host2) + int argc; /* arg count */ + char *argv[]; /* the arg array */ + char **host1; /* where to place name of host1 */ + char **host2; /* where to place name of host2 */ +{ + char *def_serv = NULL; /* name of default server */ + int i; + + /* parse args */ + while ((i = getopt(argc,argv,"Ab:d:g:i:I:kl:mn:o:p:q:s:t:Tv:z:")) != EOF) { + switch (i) { + case 'A': + A_flag = 1; + break; + case 'b': /* -b {0|1|2|12|21} */ + switch (atoi(optarg)) { + case 0: + bork_host1_flag = 0; + bork_host2_flag = 0; + break; + case 1: + bork_host1_flag = 1; + break; + case 2: + bork_host2_flag = 1; + break; + case 12: + case 21: + bork_host1_flag = 1; + bork_host2_flag = 1; + break; + default: + warn("-b option must be 0, 1, 2, 12, or 21"); + die("%s", usage); + } + break; + case 'd': /* -d {0|1|2|12|21} */ + switch (atoi(optarg)) { + case 0: + num_host1_flag = 0; + num_host2_flag = 0; + break; + case 1: + num_host1_flag = 1; + break; + case 2: + num_host2_flag = 1; + break; + case 12: + case 21: + num_host1_flag = 1; + num_host2_flag = 1; + break; + default: + warn("-d option must be 0, 1, 2, 12, or 21"); + die("%s", usage); + } + break; + case 'g': /* -g max */ + g_flag = atoi(optarg); + break; + case 'i': /* -i ignore_file */ + ign_file = optarg; + break; + case 'I': /* -I {0|1|2|12|21} */ + switch (atoi(optarg)) { + case 0: + ign_host1_flag = 0; + ign_host2_flag = 0; + break; + case 1: + ign_host1_flag = 1; + ign_host2_flag = 0; + break; + case 2: + ign_host1_flag = 0; + ign_host2_flag = 1; + break; + case 12: + case 21: + ign_host1_flag = 1; + ign_host2_flag = 1; + break; + default: + warn("-I option must be 0, 1, 2, 12, or 21"); + die("%s", usage); + } + break; + case 'k': /* -k */ + k_flag = 1; + break; + case 'l': /* -l {0|1|2|12|21} */ + switch (atoi(optarg)) { + case 0: + l_host1_flag = NOHOST; + l_host2_flag = NOHOST; + break; + case 1: + l_host1_flag = HOSTID1; + l_host2_flag = NOHOST; + break; + case 2: + l_host1_flag = NOHOST; + l_host2_flag = HOSTID2; + break; + case 12: + case 21: + l_host1_flag = HOSTID1; + l_host2_flag = HOSTID2; + break; + default: + warn("-l option must be 0, 1, 2, 12, or 21"); + die("%s", usage); + } + break; + case 'm': /* -m */ + m_flag = 1; + break; + case 'n': /* -n name */ + new_name = optarg; + break; + case 'o': /* -o out_type */ + switch (optarg[0]) { + case 'a': + o_flag = OUTPUT_ACTIVE; + switch (optarg[1]) { + case '1': + switch(optarg[2]) { + case 'K': /* -o a1K */ + host1_ign_print = 1; + host2_hilow_all = 1; + host2_hilow_newgrp = 1; + break; + case 'k': /* -o a1k */ + host1_ign_print = 1; + host2_hilow_newgrp = 1; + break; + default: /* -o a1 */ + host1_ign_print = 1; + break; + } + break; + case 'K': + switch(optarg[2]) { + case '1': /* -o aK1 */ + host1_ign_print = 1; + host2_hilow_all = 1; + host2_hilow_newgrp = 1; + break; + default: /* -o aK */ + host2_hilow_all = 1; + host2_hilow_newgrp = 1; + break; + }; + break; + case 'k': + switch(optarg[2]) { + case '1': /* -o ak1 */ + host1_ign_print = 1; + host2_hilow_newgrp = 1; + break; + default: /* -o ak */ + host2_hilow_newgrp = 1; + break; + }; + break; + case '\0': 
/* -o a */ + break; + default: + warn("-o type must be a, a1, ak, aK, ak1, or aK1"); + die("%s", usage); + } + break; + case 'c': + o_flag = OUTPUT_CTLINND; + break; + case 'x': + if (optarg[1] == 'i') { + o_flag = OUTPUT_IEXEC; + } else { + o_flag = OUTPUT_EXEC; + } + break; + default: + warn("-o type must be a, a1, ak, aK, ak1, aK1, c, x, or xi"); + die("%s", usage); + } + break; + case 'p': /* -p %_min_host1_change */ + /* parse % into [0,100] */ + p_flag = atof(optarg); + if (p_flag > (double)100.0) { + p_flag = (double)100.0; + } else if (p_flag < (double)0.0) { + p_flag = (double)0.0; + } + break; + case 'q': /* -q {0|1|2|12|21} */ + switch (atoi(optarg)) { + case 0: + quiet_host1 = 0; + quiet_host2 = 0; + break; + case 1: + quiet_host1 = 1; + break; + case 2: + quiet_host2 = 1; + break; + case 12: + case 21: + quiet_host1 = 1; + quiet_host2 = 1; + break; + default: + warn("-q option must be 0, 1, 2, 12, or 21"); + die("%s", usage); + } + break; + case 's': /* -s size */ + s_flag = atoi(optarg); + break; + case 't': /* -t {0|1|2|12|21} */ + switch (atoi(optarg)) { + case 0: + t_host1_flag = NOHOST; + t_host2_flag = NOHOST; + break; + case 1: + t_host1_flag = HOSTID1; + t_host2_flag = NOHOST; + break; + case 2: + t_host1_flag = NOHOST; + t_host2_flag = HOSTID2; + break; + case 12: + case 21: + t_host1_flag = HOSTID1; + t_host2_flag = HOSTID2; + break; + default: + warn("-t option must be 0, 1, 2, 12, or 21"); + die("%s", usage); + } + break; + case 'T': /* -T */ + no_new_hier = 1; + break; + case 'v': /* -v verbose_lvl */ + v_flag = atoi(optarg); + if (v_flag < VER_MIN || v_flag > VER_MAX) { + warn("-v level must be >= %d and <= %d", VER_MIN, VER_MAX); + die("%s", usage); + } + break; + case 'z': /* -z sec */ + z_flag = atoi(optarg); + break; + default: + warn("unknown flag"); + die("%s", usage); + } + } + + /* process the remaining args */ + argc -= optind; + argv += optind; + *host1 = NULL; + switch (argc) { + case 1: + /* assume host1 is the local server */ + *host2 = argv[0]; + break; + case 2: + *host1 = argv[0]; + *host2 = argv[1]; + break; + default: + warn("expected 1 or 2 host args, found %d", argc); + die("%s", usage); + } + + /* determine default host name if needed */ + if (*host1 == NULL || strcmp(*host1, "-") == 0) { + def_serv = innconf->server; + *host1 = def_serv; + } + if (*host2 == NULL || strcmp(*host2, "-") == 0) { + def_serv = innconf->server; + *host2 = def_serv; + } + if (*host1 == NULL || *host2 == NULL) + die("unable to determine default server name"); + if (D_BUG && def_serv != NULL) + warn("STATUS: using default server: %s", def_serv); + + /* processing done */ + return; +} + +/* + * get_active - get an active file from a host + * + * given: + * host host to contact or file to read, NULL => local server + * hostid HOST_ID of host + * len pointer to length of grp return array + * grp existing host array to add, or NULL + * errs count of lines that were found to have some error + * + * returns; + * Pointer to an array of grp structures describing each active entry. + * Does not return on fatal error. + * + * If host starts with a '/' or '.', then it is assumed to be a local file. + * In that case, the local file is opened and read. 
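+ *
+ * For instance (all names here are illustrative only), "./active" or
+ * "/var/spool/news/active" would be read as local files, while
+ * "news.example.com" or "news.example.com:119" would be contacted as an
+ * NNTP server; an optional ":port" suffix overrides the default NNTP
+ * port.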
+ */ +static struct grp * +get_active(host, hostid, len, grp, errs) + char *host; /* the host to contact */ + int hostid; /* HOST_ID of host */ + int *len; /* length of returned grp array in elements */ + struct grp* grp; /* existing group array or NULL */ + int *errs; /* line error count */ +{ + FILE *active; /* stream for fetched active data */ + FILE *FromServer; /* stream from server */ + FILE *ToServer; /* stream to server */ + QIOSTATE *qp; /* QIO active state */ + char buff[8192+1]; /* QIO buffer */ + char *line; /* the line just read */ + struct grp *ret; /* array of groups to return */ + struct grp *cur; /* current grp entry being formed */ + int max; /* max length of ret */ + int cnt; /* number of entries read */ + int ucnt; /* number of entries to be used */ + int namelen; /* length of newsgroup name */ + int is_file; /* 1 => host is actually a filename */ + int num_check; /* true => check for all numeric components */ + char *rhost; + int rport; + char *p; + int i; + + /* firewall */ + if (len == NULL) + die("internal error #1: len is NULL"); + if (errs == NULL) + die("internal error #2: errs in NULL"); + if (D_BUG) + warn("STATUS: obtaining active file from %s", host); + + /* setup return array if needed */ + if (grp == NULL) { + ret = xmalloc(CHUNK * sizeof(struct grp)); + max = CHUNK; + *len = 0; + + /* or prep to use the existing array */ + } else { + ret = grp; + max = ((*len + CHUNK-1)/CHUNK)*CHUNK; + } + + /* check for host being a filename */ + if (host != NULL && (host[0] == '/' || host[0] == '.')) { + + /* note that host is actually a file */ + is_file = 1; + + /* setup to read the local file quickly */ + if ((qp = QIOopen(host)) == NULL) + sysdie("cannot open active file"); + + /* case: host is a hostname or NULL (default server) */ + } else { + + /* note that host is actually a hostname or NULL */ + is_file = 0; + + /* prepare remote host variables */ + if ((p = strchr(host, ':')) != NULL) { + rport = atoi(p + 1); + *p = '\0'; + rhost = xstrdup(host); + *p = ':'; + } else { + rhost = xstrdup(host); + rport = NNTP_PORT; + } + + /* open a connection to the server */ + buff[0] = '\0'; + if (NNTPconnect(rhost, rport, &FromServer, &ToServer, buff) < 0) + die("cannot connect to server: %s", + buff[0] ? 
buff : strerror(errno)); + + if (A_flag && NNTPsendpassword(rhost, FromServer, ToServer) < 0) + die("cannot authenticate to server"); + + free(rhost); + + /* get the active data from the server */ + active = CAlistopen(FromServer, ToServer, NULL); + if (active == NULL) + sysdie("cannot retrieve data"); + + /* setup to read the retrieved data quickly */ + if ((qp = QIOfdopen((int)fileno(active))) == NULL) + sysdie("cannot read temp file"); + } + + /* scan server's output, processing appropriate lines */ + num_check = NUM_CHECK(hostid); + for (cnt=0, ucnt=0; (line = QIOread(qp)) != NULL; ++(*len), ++cnt) { + + /* expand return array if needed */ + if (*len >= max) { + max += CHUNK; + ret = xrealloc(ret, sizeof(struct grp) * max); + } + + /* setup the next return element */ + cur = &ret[*len]; + cur->ignore = NOT_IGNORED; + cur->hostid = hostid; + cur->linenum = cnt+1; + cur->output = 0; + cur->remove = 0; + cur->name = NULL; + cur->hi = NULL; + cur->low = NULL; + cur->type = NULL; + cur->outhi = NULL; + cur->outlow = NULL; + cur->outtype = NULL; + + /* obtain a copy of the current line */ + cur->name = xstrdup(line); + + /* get the group name */ + if ((p = strchr(cur->name, ' ')) == NULL) { + if (!QUIET(hostid)) + warn("line %d from %s is malformed, skipping line", cnt + 1, + host); + + /* don't form an entry for this group */ + --(*len); + continue; + } + *p = '\0'; + namelen = p - cur->name; + + /* find the other 3 fields, ignore if not found */ + cur->hi = p+1; + if ((p = strchr(p + 1, ' ')) == NULL) { + if (!QUIET(hostid)) + warn("skipping malformed line %d (field 2) from %s", cnt + 1, + host); + + /* don't form an entry for this group */ + --(*len); + continue; + } + *p = '\0'; + cur->low = p+1; + if ((p = strchr(p + 1, ' ')) == NULL) { + if (!QUIET(hostid)) + warn("skipping malformed line %d (field 3) from %s", cnt + 1, + host); + + /* don't form an entry for this group */ + --(*len); + continue; + } + *p = '\0'; + cur->type = p+1; + if ((p = strchr(p + 1, ' ')) != NULL) { + if (!QUIET(hostid)) + warn("skipping line %d from %s, it has more than 4 fields", + cnt + 1, host); + + /* don't form an entry for this group */ + --(*len); + continue; + } + + /* check for bad group name */ + if (bad_grpname(cur->name, num_check)) { + if (!QUIET(hostid)) + warn("line %d <%s> from %s has a bad newsgroup name", + cnt + 1, cur->name, host); + cur->ignore |= ERROR_BADNAME; + continue; + } + + /* check for long name if requested */ + if (s_flag > 0 && strlen(cur->name) > (size_t)s_flag) { + if (!QUIET(hostid)) + warn("line %d <%s> from %s has a name that is too long", + cnt + 1, cur->name, host); + cur->ignore |= ERROR_BADNAME; + continue; + } + + /* look for only a bad top level element if the proper -t was given */ + if (TOP_CHECK(hostid)) { + + /* look for a '.' 
in the name */ + if (strcmp(cur->name, "junk") != 0 && + strcmp(cur->name, "control") != 0 && + strcmp(cur->name, "to") != 0 && + strcmp(cur->name, "test") != 0 && + strcmp(cur->name, "general") != 0 && + strchr(cur->name, '.') == NULL) { + if (!QUIET(hostid)) + warn("line %d <%s> from %s is an invalid top level name", + cnt + 1, cur->name, host); + cur->ignore |= ERROR_BADNAME; + continue; + } + } + + /* look for *.bork.bork.bork groups if the proper -b was given */ + if (BORK_CHECK(cur->hostid)) { + int elmlen; /* length of element */ + char *q; /* beyond end of element */ + + /* scan the name backwards */ + q = &(cur->name[namelen]); + for (p = &(cur->name[namelen-1]); p >= cur->name; --p) { + /* if '.', see if this is a bork element */ + if (*p == '.') { + /* see if the bork element is short enough */ + elmlen = q-p; + if (3*elmlen <= q-cur->name) { + /* look for a triple match */ + if (strncmp(p,p-elmlen,elmlen) == 0 && + strncmp(p,p-(elmlen*2),elmlen) == 0) { + /* found a *.bork.bork.bork group */ + cur->ignore |= CHECK_BORK; + break; + } + } + /* note the end of a new element */ + q = p; + } + } + } + + /* + * check for bad chars in the hi water mark + */ + for (p=cur->hi, i=0; *p && isascii(*p) && isdigit((int)*p); ++p, ++i) { + } + if (*p) { + if (!QUIET(hostid)) + warn("line %d <%s> from %s has non-digits in hi water", + cnt + 1, cur->name, cur->hi); + cur->ignore |= ERROR_FORMAT; + continue; + } + + /* + * check for excessive hi water length + */ + if (i > WATER_LEN) { + if (!QUIET(hostid)) + warn("line %d <%s> from %s hi water len: %d < %d", + cnt + 1, cur->name, cur->hi, i, WATER_LEN); + cur->ignore |= ERROR_FORMAT; + continue; + } + + /* + * if the hi water length is too small, malloc and resize + */ + if (i != WATER_LEN) { + p = xmalloc(WATER_LEN + 1); + memcpy(p, cur->hi, ((i > WATER_LEN) ? WATER_LEN : i)+1); + } + + /* + * check for bad chars in the low water mark + */ + for (p=cur->low, i=0; *p && isascii(*p) && isdigit((int)*p); ++p, ++i) { + } + if (*p) { + if (!QUIET(hostid)) + warn("line %d <%s> from %s has non-digits in low water", + cnt + 1, cur->name, cur->low); + cur->ignore |= ERROR_FORMAT; + continue; + } + + /* + * check for excessive low water length + */ + if (i > WATER_LEN) { + if (!QUIET(hostid)) + warn("line %d <%s> from %s low water len: %d < %d", + cnt + 1, cur->name, cur->hi, i, WATER_LEN); + cur->ignore |= ERROR_FORMAT; + continue; + } + + /* + * if the low water length is too small, malloc and resize + */ + if (i != WATER_LEN) { + p = xmalloc(WATER_LEN + 1); + memcpy(p, cur->low, ((i > WATER_LEN) ? WATER_LEN : i)+1); + } + + /* check for a bad group type */ + switch (cur->type[0]) { + case 'y': + /* of COURSE: collabra has incompatible flags. but it */ + /* looks like they can be fixed easily enough. 
*/ + if (cur->type[1] == 'g') { + cur->type[1] = '\0'; + } + case 'm': + case 'j': + case 'n': + case 'x': + if (cur->type[1] != '\0') { + if (!QUIET(hostid)) + warn("line %d <%s> from %s has a bad newsgroup type", + cnt + 1, cur->name, host); + cur->ignore |= ERROR_BADTYPE; + } + break; + case '=': + if (cur->type[1] == '\0') { + if (!QUIET(hostid)) + warn("line %d <%s> from %s has an empty =group name", + cnt + 1, cur->name, host); + cur->ignore |= ERROR_BADTYPE; + } + break; + default: + if (!QUIET(hostid)) + warn("line %d <%s> from %s has an unknown newsgroup type", + cnt + 1, cur->name, host); + cur->ignore |= ERROR_BADTYPE; + break; + } + if (cur->ignore & ERROR_BADTYPE) { + continue; + } + + /* if an = type, check for bad = name */ + if (cur->type[0] == '=' && bad_grpname(&(cur->type[1]), num_check)) { + if (!QUIET(hostid)) + warn("line %d <%s> from %s is equivalenced to a bad name:" + " <%s>", cnt+1, cur->name, host, + (cur->type) ? cur->type : "NULL"); + cur->ignore |= ERROR_EQNAME; + continue; + } + + /* if an = type, check for long = name if requested */ + if (cur->type[0] == '=' && s_flag > 0 && + strlen(&(cur->type[1])) > (size_t)s_flag) { + if (!QUIET(hostid)) + warn("line %d <%s> from %s is equivalenced to a long name:" + " <%s>", cnt+1, cur->name, host, + (cur->type) ? cur->type : "NULL"); + cur->ignore |= ERROR_EQNAME; + continue; + } + + /* count this entry which will be used */ + ++ucnt; + } + if (D_BUG) + warn("STATUS: read %d groups, will merge %d groups from %s", + cnt, ucnt, host); + + /* count the errors */ + *errs = cnt - ucnt; + if (D_BUG) + warn("STATUS: found %d line errors from %s", *errs, host); + + /* determine why we stopped */ + if (QIOerror(qp)) + sysdie("cannot read temp file for %s at line %d", host, cnt); + else if (QIOtoolong(qp)) + sysdie("line %d from host %s is too long", cnt, host); + + /* all done */ + if (is_file) { + QIOclose(qp); + } else { + CAclose(); + fprintf(ToServer, "quit\r\n"); + fclose(ToServer); + fgets(buff, sizeof buff, FromServer); + fclose(FromServer); + } + return ret; +} + +/* + * bad_grpname - test if the string is a valid group name + * + * Newsgroup names must consist of only alphanumeric chars and + * characters from the following regular expression: + * + * [.+-_] + * + * One cannot have two '.'s in a row. The first character must be + * alphanumeric. The character following a '.' must be alphanumeric. + * The name cannot end in a '.' character. + * + * If we are checking for all numeric compnents, (see num_chk) then + * a component cannot be all numeric. I.e,. there must be a non-numeric + * character in the name, there must be a non-numeric character between + * the start and the first '.', there must be a non-numeric character + * between two '.'s anmd there must be a non-numeric character between + * the last '.' and the end. + * + * given: + * name newsgroup name to check + * num_chk true => all numeric newsgroups components are invalid + * false => do not check for numeric newsgroups + * + * returns: + * 0 group is ok + * 1 group is bad + */ +static int +bad_grpname(name, num_chk) + char *name; /* newsgroup name to check */ + int num_chk; /* true => check for numeric newsgroup */ +{ + char *p; + int non_num; /* true => found a non-numeric, non-. 
character */ + int level; /* group levels (.'s) */ + + /* firewall */ + if (name == NULL) { + return 1; + } + + /* must start with a alpha numeric ascii character */ + if (!isascii(name[0])) { + return 1; + } + /* set non_num as needed */ + if (isalpha((int)name[0])) { + non_num = true; + } else if ((int)isdigit((int)name[0])) { + non_num = false; + } else { + return 1; + } + + /* scan each char */ + level = 0; + for (p=name+1; *p; ++p) { + + /* name must contain ASCII chars */ + if (!isascii(*p)) { + return 1; + } + + /* alpha chars are ok */ + if (isalpha((int)*p)) { + non_num = true; + continue; + } + + /* numeric chars are ok */ + if (isdigit((int)*p)) { + continue; + } + + /* +, - and _ are ok */ + if (*p == '+' || *p == '-' || *p == '_') { + non_num = true; + continue; + } + + /* check for the '.' case */ + if (*p == '.') { + /* + * look for groups that are too deep, if requested by -g + */ + if (g_flag > 0 && ++level > g_flag) { + /* we are too deep */ + return 1; + } + + /* + * A '.' is ok as long as the next character is alphanumeric. + * This imples that '.' cannot before a previous '.' and + * that it cannot be at the end. + * + * If we are checking for all numeric compnents, then + * '.' is ok if we saw a non-numeric char before the + * last '.', or before the beginning if no previous '.' + * has been seen. + */ + if ((!num_chk || non_num) && isascii(*(p+1)) && isalnum((int)*(p+1))) { + ++p; /* '.' is ok, and so is the next char */ + if (isdigit((int)*p)) { /* reset non_num as needed */ + non_num = false; + } else { + non_num = true; + } + continue; + } + } + + /* this character must be invalid */ + return 1; + } + if (num_chk && !non_num) { + /* last component is all numeric */ + return 1; + } + + /* the name must be ok */ + return 0; +} + +/* + * get_ignore - get the ignore list from an ignore file + * + * given: + * filename name of the ignore file to read + * *len pointer to length of ignore return array + * + * returns: + * returns a malloced ignore pattern array, changes len + * + * An ignore file is of the form: + * + * # this is a comment which is ignored + * # comments begin at the first # character + * # comments may follow text on the same line + * + * # blank lines are ignored too + * + * # lines are [ic] pattern [ type] ... + * i foo.* # ignore foo.* groups, + * c foo.bar m # but check foo.bar if moderated + * c foo.keep.* # and check foo.keep.* + * i foo.keep.* j =alt.* # except when foo.keep.* is junked + * # or equivalenced to an alt.* group + * + * The 'i' value means ignore, 'c' value means 'compare'. The last pattern + * that matches a group determines the fate of the group. By default all + * groups are included. + * + * NOTE: Only one '=name' is allowed per line. + * "=" is considered to be equivalent to "=*". 
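+ *
+ * Because the last matching pattern wins, the order of lines matters.
+ * As a purely illustrative example, an ignore file consisting of:
+ *
+ *	i *			# start by ignoring everything
+ *	c comp.*		# but check all of comp.*
+ *	i comp.binaries.*	# except comp.binaries.*, still ignored
+ *
+ * would check comp.lang.c, yet ignore rec.arts.sf.tv and
+ * comp.binaries.pictures.misc.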
+ */ +static struct pat * +get_ignore(filename, len) + char *filename; /* name of the ignore file to read */ + int *len; /* length of return array */ +{ + QIOSTATE *qp; /* QIO ignore file state */ + char *line; /* the line just read */ + struct pat *ret; /* array of ignore patterns to return */ + struct pat *cur; /* current pattern entry being formed */ + int max; /* max length (in elements) of ret */ + int linenum; /* current line number */ + char *p; + int i; + + /* firewall */ + if (filename == NULL) + die("internal error #3: filename is NULL"); + if (len == NULL) + die("internal error #4: len is NULL"); + if (D_BUG) + warn("STATUS: reading ignore file %s", filename); + + /* setup return array */ + ret = xmalloc(CHUNK * sizeof(struct grp)); + max = CHUNK; + + /* setup to read the ignore file data quickly */ + if ((qp = QIOopen(filename)) == NULL) + sysdie("cannot read ignore file %s", filename); + + /* scan server's output, displaying appropriate lines */ + *len = 0; + for (linenum = 1; (line = QIOread(qp)) != NULL; ++linenum) { + + /* expand return array if needed */ + if (*len >= max) { + max += CHUNK; + ret = xrealloc(ret, sizeof(struct pat) * max); + } + + /* remove any trailing comments */ + p = strchr(line, '#'); + if (p != NULL) { + *p = '\0'; + } + + /* remove any trailing spaces and tabs */ + for (p = &line[strlen(line)-1]; + p >= line && (*p == ' ' || *p == '\t'); + --p) { + *p = '\0'; + } + + /* ignore line if the remainder of the line is empty */ + if (line[0] == '\0') { + continue; + } + + /* ensure that the line starts with an i or c token */ + if ((line[0] != 'i' && line[0] != 'c') || + (line[1] != ' ' && line[1] != '\t')) + die("first token is not i or c in line %d of %s", linenum, + filename); + + /* ensure that the second newsgroup pattern token follows */ + p = strtok(line+2, " \t"); + if (p == NULL) + die("did not find 2nd field in line %d of %s", linenum, + filename); + + /* setup the next return element */ + cur = &ret[*len]; + cur->pat = NULL; + cur->type_match = 0; + cur->y_type = 0; + cur->m_type = 0; + cur->n_type = 0; + cur->j_type = 0; + cur->x_type = 0; + cur->eq_type = 0; + cur->epat = NULL; + cur->ignore = (line[0] == 'i'); + + /* obtain a copy of the newsgroup pattern token */ + cur->pat = xstrdup(p); + + /* process any other type tokens */ + for (p=strtok(NULL, " \t"), i=3; + p != NULL; + p=strtok(NULL, " \t"), ++i) { + + /* ensure that this next token is a valid type */ + switch (p[0]) { + case 'y': + case 'm': + case 'j': + case 'n': + case 'x': + if (p[1] != '\0') { + warn("field %d on line %d of %s not a valid type", + i, linenum, filename); + die("valid types are a char from [ymnjx=] or =name"); + } + break; + case '=': + break; + default: + warn("field %d on line %d of %s is not a valid type", + i, linenum, filename); + die("valid types are a char from [ymnjx=] or =name"); + } + + /* note that we have a type specific pattern */ + cur->type_match = 1; + + /* ensure that type is not a duplicate */ + if ((p[0] == 'y' && cur->y_type) || + (p[0] == 'm' && cur->m_type) || + (p[0] == 'n' && cur->n_type) || + (p[0] == 'j' && cur->j_type) || + (p[0] == 'x' && cur->x_type) || + (p[0] == '=' && cur->eq_type)) { + warn("only one %c type allowed per line", p[0]); + die("field %d on line %d of %s is a duplicate type", + i, linenum, filename); + } + + /* note what we have seen */ + switch (p[0]) { + case 'y': + cur->y_type = 1; + break; + case 'm': + cur->m_type = 1; + break; + case 'j': + cur->j_type = 1; + break; + case 'n': + cur->n_type = 1; + break; + case 
'x': + cur->x_type = 1; + break; + case '=': + cur->eq_type = 1; + if (p[0] == '=' && p[1] != '\0') + cur->epat = xstrdup(p + 1); + break; + } + + /* object if too many fields */ + if (i-3 > TYPECNT) + die("too many fields on line %d of %s", linenum, filename); + } + + /* count another pat element */ + ++(*len); + } + + /* return the pattern array */ + return ret; +} + +/* + * ignore - ignore newsgroups given an ignore list + * + * given: + * grp array of groups + * grplen length of grp array in elements + * igcl array of ignore + * iglen length of igcl array in elements + */ +static void +ignore(grp, grplen, igcl, iglen) + struct grp *grp; /* array of groups */ + int grplen; /* length of grp array in elements */ + struct pat *igcl; /* array of ignore patterns */ + int iglen; /* length of igcl array in elements */ +{ + struct grp *gp; /* current group element being examined */ + struct pat *pp; /* current pattern element being examined */ + int g; /* current group index number */ + int p; /* current pattern index number */ + int ign; /* 1 => ignore this group, 0 => check it */ + int icnt; /* groups ignored */ + int ccnt; /* groups to be checked */ + + /* firewall */ + if (grp == NULL) + die("internal error #5: grp is NULL"); + if (igcl == NULL) + die("internal error $6: igcl is NULL"); + if (D_BUG) + warn("STATUS: determining which groups to ignore"); + + /* if nothing to do, return quickly */ + if (grplen <= 0 || iglen <= 0) { + return; + } + + /* examine each group */ + icnt = 0; + ccnt = 0; + for (g=0; g < grplen; ++g) { + + /* check the group to examine */ + gp = &grp[g]; + if (gp->ignore) { + /* already ignored no need to examine */ + continue; + } + + /* check group against all patterns */ + ign = 0; + for (p=0, pp=igcl; p < iglen; ++p, ++pp) { + + /* if pattern has a specific type, check it first */ + if (pp->type_match) { + + /* specific type required, check for match */ + switch (gp->type[0]) { + case 'y': + if (! pp->y_type) continue; /* pattern does not apply */ + break; + case 'm': + if (! pp->m_type) continue; /* pattern does not apply */ + break; + case 'n': + if (! pp->n_type) continue; /* pattern does not apply */ + break; + case 'j': + if (! pp->j_type) continue; /* pattern does not apply */ + break; + case 'x': + if (! pp->x_type) continue; /* pattern does not apply */ + break; + case '=': + if (! 
pp->eq_type) continue; /* pattern does not apply */ + if (pp->epat != NULL && !uwildmat(&gp->type[1], pp->epat)) { + /* equiv pattern doesn't match, patt does not apply */ + continue; + } + break; + } + } + + /* perform a match on group name */ + if (uwildmat(gp->name, pp->pat)) { + /* this pattern fully matches, use the ignore value */ + ign = pp->ignore; + } + } + + /* if this group is to be ignored, note it */ + if (ign) { + switch (gp->hostid) { + case HOSTID1: + if (ign_host1_flag) { + gp->ignore |= CHECK_IGNORE; + ++icnt; + } + break; + case HOSTID2: + if (ign_host2_flag) { + gp->ignore |= CHECK_IGNORE; + ++icnt; + } + break; + default: + die("newsgroup %s bad hostid: %d", gp->name, gp->hostid); + } + } else { + ++ccnt; + } + } + if (D_BUG) + warn("STATUS: examined %d groups: %d ignored, %d to be checked", + grplen, icnt, ccnt); +} + +/* + * merge_cmp - qsort compare function for later group merge + * + * given: + * a group a to compare + * b group b to compare + * + * returns: + * >0 a > b + * 0 a == b elements match (fatal error if a and b are different) + * <0 a < b + * + * To speed up group comparison, we compare by the following items listed + * in order of sorting: + * + * group name + * hostid (host1 ahead of host2) + * linenum (active file line number) + */ +static int +merge_cmp(arg_a, arg_b) + const void *arg_a; /* first qsort compare arg */ + const void *arg_b; /* first qsort compare arg */ +{ + const struct grp *a = arg_a; /* group a to compare */ + const struct grp *b = arg_b; /* group b to compare */ + int i; + + /* firewall */ + if (a == b) { + /* we guess this could happen */ + return(0); + } + + /* compare group names */ + i = strcmp(a->name, b->name); + if (i != 0) { + return i; + } + + /* compare hostid's */ + if (a->hostid != b->hostid) { + if (a->hostid > b->hostid) { + return 1; + } else { + return -1; + } + } + + /* compare active line numbers */ + if (a->linenum != b->linenum) { + if (a->linenum > b->linenum) { + return 1; + } else { + return -1; + } + } + + /* two different elements match, this should not happen! */ + die("two internal grp elements match!"); + /*NOTREACHED*/ +} + +/* + * merge_grps - compare groups from both hosts + * + * given: + * grp array of groups + * grplen length of grp array in elements + * host1 name of host with HOSTID1 + * host2 name of host with HOSTID2 + * + * This routine will select which groups to output form a merged active file. + */ +static void +merge_grps(grp, grplen, host1, host2) + struct grp *grp; /* array of groups */ + int grplen; /* length of grp array in elements */ + char *host1; /* name of host with HOSTID1 */ + char *host2; /* name of host with HOSTID2 */ +{ + int cur; /* current group index being examined */ + int nxt; /* next group index being examined */ + int outcnt; /* groups to output */ + int rmcnt; /* groups to remove */ + int h1_probs; /* =type problem groups from host1 */ + int h2_probs; /* =type problem groups from host2 */ + + /* firewall */ + if (grp == NULL) + die("internal error #7: grp is NULL"); + + /* sort groups for the merge */ + if (D_BUG) + warn("STATUS: sorting groups"); + qsort((char *)grp, grplen, sizeof(grp[0]), merge_cmp); + + /* mark =type problem groups from host2, if needed */ + h2_probs = mark_eq_probs(grp, grplen, l_host2_flag, host1, host2); + + /* + * We will walk thru the sorted group array, looking for pairs + * among the groups that we have not already ignored. + * + * If a host has duplicate groups, then the duplicates will + * be next to each other. 
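+ * (merge_cmp() above sorts by group name first, then by hostid with
+ * host1 ahead of host2, then by active file line number; that ordering
+ * is what makes this single pairwise pass possible.)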
+ * + * If both hosts have the name group, they will be next to each other. + */ + if (D_BUG) + warn("STATUS: merging groups"); + outcnt = 0; + rmcnt = 0; + for (cur=0; cur < grplen; cur=nxt) { + + /* determine the next group index */ + nxt = cur+1; + + /* skip if this group is ignored */ + if (grp[cur].ignore) { + continue; + } + /* assert: cur is not ignored */ + + /* check for duplicate groups from the same host */ + while (nxt < grplen) { + + /* mark the later as a duplicate */ + if (grp[cur].hostid == grp[nxt].hostid && + strcmp(grp[cur].name, grp[nxt].name) == 0) { + grp[nxt].ignore |= ERROR_DUP; + if (!QUIET(grp[cur].hostid)) + warn("lines %d and %d from %s refer to the same group", + grp[cur].linenum, grp[nxt].linenum, + ((grp[cur].hostid == HOSTID1) ? host1 : host2)); + ++nxt; + } else { + break; + } + } + /* assert: cur is not ignored */ + /* assert: cur & nxt are not the same group from the same host */ + + /* if nxt is ignored, look for the next non-ignored group */ + while (nxt < grplen && grp[nxt].ignore) { + ++nxt; + } + /* assert: cur is not ignored */ + /* assert: nxt is not ignored or is beyond end */ + /* assert: cur & nxt are not the same group from the same host */ + + /* case: cur and nxt are the same group */ + if (nxt < grplen && strcmp(grp[cur].name, grp[nxt].name) == 0) { + + /* assert: cur is HOSTID1 */ + if (grp[cur].hostid != HOSTID1) + die("internal error #8: grp[%d].hostid: %d != %d", + cur, grp[cur].hostid, HOSTID1); + + /* + * Both hosts have the same group. Make host1 group type + * match host2. (it may already) + */ + grp[cur].output = 1; + grp[cur].outhi = (host2_hilow_all ? grp[nxt].hi : grp[cur].hi); + grp[cur].outlow = (host2_hilow_all ? grp[nxt].low : grp[cur].low); + grp[cur].outtype = grp[nxt].type; + ++outcnt; + + /* do not process nxt, skip to the one beyond */ + ++nxt; + + /* case: cur and nxt are different groups */ + } else { + + /* + * if cur is host2, then host1 doesn't have it, so output it + */ + if (grp[cur].hostid == HOSTID2) { + grp[cur].output = 1; + grp[cur].outhi = (host2_hilow_newgrp ? grp[cur].hi : DEF_HI); + grp[cur].outlow = (host2_hilow_newgrp ? grp[cur].low : DEF_LOW); + grp[cur].outtype = grp[cur].type; + ++outcnt; + + /* + * If cur is host1, then host2 doesn't have it. + * Mark for removal if -m was not given. + */ + } else { + grp[cur].output = 1; + grp[cur].outhi = grp[cur].hi; + grp[cur].outlow = grp[cur].low; + grp[cur].outtype = grp[cur].type; + if (! m_flag) { + grp[cur].remove = 1; + ++rmcnt; + } + } + + /* if no more groups to examine, we are done */ + if (nxt >= grplen) { + break; + } + } + } + + /* mark =type problem groups from host1, if needed */ + h1_probs = mark_eq_probs(grp, grplen, l_host1_flag, host1, host2); + + /* all done */ + if (D_BUG) { + warn("STATUS: sort-merge passed thru %d groups", outcnt); + warn("STATUS: sort-merge marked %d groups for removal", rmcnt); + warn("STATUS: marked %d =type error groups from host1", h1_probs); + warn("STATUS: marked %d =type error groups from host2", h2_probs); + } + return; +} + +/* + * active_cmp - qsort compare function for active file style output + * + * given: + * a group a to compare + * b group b to compare + * + * returns: + * >0 a > b + * 0 a == b elements match (fatal error if a and b are different) + * <0 a < b + * + * This sort will sort groups so that the lines that will we output + * host1 lines followed by host2 lines. 
Thus, we will sort by + * the following keys: + * + * hostid (host1 ahead of host2) + * linenum (active file line number) + */ +static int +active_cmp(arg_a, arg_b) + const void *arg_a; /* first qsort compare arg */ + const void *arg_b; /* first qsort compare arg */ +{ + const struct grp *a = arg_a; /* group a to compare */ + const struct grp *b = arg_b; /* group b to compare */ + + /* firewall */ + if (a == b) { + /* we guess this could happen */ + return(0); + } + + /* compare hostid's */ + if (a->hostid != b->hostid) { + if (a->hostid > b->hostid) { + return 1; + } else { + return -1; + } + } + + /* compare active line numbers */ + if (a->linenum != b->linenum) { + if (a->linenum > b->linenum) { + return 1; + } else { + return -1; + } + } + + /* two different elements match, this should not happen! */ + die("two internal grp elements match!"); + /*NOTREACHED*/ +} + +/* + * output_grps - output the result of the merge + * + * given: + * grp array of groups + * grplen length of grp array in elements + */ +static void +output_grps(grp, grplen) + struct grp *grp; /* array of groups */ + int grplen; /* length of grp array in elements */ +{ + int add; /* number of groups added */ + int change; /* number of groups changed */ + int remove; /* number of groups removed */ + int no_new_dir; /* number of new groups with missing/empty dirs */ + int new_dir; /* number of new groupsm, non-empty dir no water chg */ + int water_change; /* number of new groups where hi&low water changed */ + int work; /* adds + changes + removals */ + int same; /* the number of groups the same */ + int ignore; /* host1 newsgroups to ignore */ + int not_done; /* exec errors and execs not performed */ + int rm_cycle; /* 1 => removals only, 0 => adds & changes only */ + int sleep_msg; /* 1 => -o x sleep message was given */ + int top_ignore; /* number of groups ignored because of no top level */ + int restore; /* host1 groups restored due to -o a1 */ + double host1_same; /* % of host1 that is the same */ + int i; + + /* firewall */ + if (grp == NULL) + die("internal error #9: grp is NULL"); + + /* + * If -a1 was given, mark for output any host1 newsgroup that was + * simply ignored due to the -i ign_file. 
+ */ + if (host1_ign_print) { + restore = 0; + for (i=0; i < grplen; ++i) { + if (grp[i].hostid == HOSTID1 && + (grp[i].ignore == CHECK_IGNORE || + grp[i].ignore == CHECK_TYPE || + grp[i].ignore == (CHECK_IGNORE|CHECK_TYPE))) { + /* force group to output and not be ignored */ + grp[i].ignore = 0; + grp[i].output = 1; + grp[i].remove = 0; + grp[i].outhi = grp[i].hi; + grp[i].outlow = grp[i].low; + grp[i].outtype = grp[i].type; + ++restore; + } + } + if (D_BUG) + warn("STATUS: restored %d host1 groups", restore); + } + + /* + * If -T, ignore new top level groups from host2 + */ + if (no_new_hier) { + top_ignore = 0; + for (i=0; i < grplen; ++i) { + /* look at new newsgroups */ + if (grp[i].hostid == HOSTID2 && + grp[i].output != 0 && + new_top_hier(grp[i].name)) { + /* no top level ignore this new group */ + grp[i].ignore |= CHECK_HIER; + grp[i].output = 0; + if (D_BUG) + warn("ignore new newsgroup: %s, new hierarchy", + grp[i].name); + ++top_ignore; + } + } + if (D_SUMMARY) + warn("STATUS: ignored %d new newsgroups due to new hierarchy", + top_ignore); + } + + /* sort by active file order if active style output (-a) */ + if (o_flag == OUTPUT_ACTIVE) { + if (D_BUG) + warn("STATUS: sorting groups in output order"); + qsort((char *)grp, grplen, sizeof(grp[0]), active_cmp); + } + + /* + * Determine the % of lines from host1 active file that remain unchanged + * ignoring any low/high water mark changes. + * + * Determine the number of old groups that will remain the same + * the number of new groups that will be added. + */ + add = 0; + change = 0; + remove = 0; + same = 0; + ignore = 0; + no_new_dir = 0; + new_dir = 0; + water_change = 0; + for (i=0; i < grplen; ++i) { + /* skip non-output ... */ + if (grp[i].output == 0) { + if (grp[i].hostid == HOSTID1) { + ++ignore; + } + continue; + + /* case: group needs removal */ + } else if (grp[i].remove) { + ++remove; + + /* case: group is from host2, so we need a newgroup */ + } else if (grp[i].hostid == HOSTID2) { + ++add; + + /* case: group is from host1, but the type changed */ + } else if (grp[i].type != grp[i].outtype && + strcmp(grp[i].type,grp[i].outtype) != 0) { + ++change; + + /* case: group did not change */ + } else { + ++same; + } + } + work = add+change+remove; + if (same+work+host1_errs <= 0) { + /* no lines, no work, no errors == nothing changed == 100% the same */ + host1_same = (double)100.0; + } else { + /* calculate % unchanged */ + host1_same = (double)100.0 * + ((double)same / (double)(same+work+host1_errs)); + } + if (D_BUG) { + warn("STATUS: same=%d add=%d, change=%d, remove=%d", + same, add, change, remove); + warn("STATUS: ignore=%d, work=%d, err=%d", + ignore, work, host1_errs); + warn("STATUS: same+work+err=%d, host1_same=%.2f%%", + same+work+host1_errs, host1_same); + } + + /* + * Bail out if we too few lines in host1 active file (ignoring + * low/high water mark changes) remaining unchanged. + * + * We define change as: + * + * line errors from host1 active file + * newsgroups to be added to host1 + * newsgroups to be removed from host1 + * newsgroups to be change in host1 + */ + if (host1_same < p_flag) { + warn("HALT: lines unchanged: %.2f%% < min change limit: %.2f%%", + host1_same, p_flag); + warn(" No output or commands executed. Determine if the degree"); + warn(" of changes is okay and re-execute with a lower -p value"); + die(" or with the problem fixed."); + } + + /* + * look at all groups + * + * If we are not producing active file output, we must do removals + * before we do any adds and changes. 
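The -p sanity check above treats line errors, additions, removals and type changes as "work" and refuses to touch anything when the fraction of host1's active file that survives unchanged drops below the given percentage. A small sketch of that arithmetic (the counter values and the 96.0 threshold are invented for illustration):

    #include <stdio.h>

    /* Percentage of host1 lines left unchanged, as computed in output_grps(). */
    static double
    pct_unchanged(int same, int add, int change, int remove, int errs)
    {
        int work = add + change + remove;

        if (same + work + errs <= 0)
            return 100.0;               /* no lines and no work counts as unchanged */
        return 100.0 * (double)same / (double)(same + work + errs);
    }

    int
    main(void)
    {
        double min_unchanged = 96.0;    /* stand-in for a -p value */
        double pct = pct_unchanged(900, 30, 10, 20, 5);

        printf("%.2f%% unchanged\n", pct);
        if (pct < min_unchanged)
            printf("would HALT: below the -p limit, nothing written\n");
        return 0;
    }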
+ * + * We recalculate the work stats in finer detail as well as noting how + * many actions were successful. + */ + add = 0; + change = 0; + remove = 0; + same = 0; + ignore = 0; + work = 0; + not_done = 0; + sleep_msg = 0; + rm_cycle = ((o_flag == OUTPUT_ACTIVE) ? 0 : 1); + do { + for (i=0; i < grplen; ++i) { + + /* if -o Ax, output ignored non-error groups too */ + + /* + * skip non-output ... + * + * but if '-a' and active output mode, then don't skip ignored, + * non-error, non-removed groups from host1 + */ + if (grp[i].output == 0) { + if (grp[i].hostid == HOSTID1) { + ++ignore; + } + continue; + } + + /* case: output active lines */ + if (o_flag == OUTPUT_ACTIVE) { + + /* case: group needs removal */ + if (grp[i].remove) { + ++remove; + ++work; + + /* case: group will be kept */ + } else { + + /* output in active file format */ + printf("%s %s %s %s\n", + grp[i].name, grp[i].outhi, grp[i].outlow, + grp[i].outtype); + + /* if -v level is high enough, do group accounting */ + if (D_IF_SUMM) { + + /* case: group is from host2, so we need a newgroup */ + if (grp[i].hostid == HOSTID2) { + ++add; + ++work; + + /* case: group is from host1, but the type changed */ + } else if (grp[i].type != grp[i].outtype && + strcmp(grp[i].type,grp[i].outtype) != 0) { + ++change; + ++work; + + /* case: group did not change */ + } else { + ++same; + } + } + } + + /* case: output ctlinnd commands */ + } else if (o_flag == OUTPUT_CTLINND) { + + /* case: group needs removal */ + if (grp[i].remove) { + + /* output rmgroup */ + if (rm_cycle) { + printf("ctlinnd rmgroup %s\n", grp[i].name); + ++remove; + ++work; + } + + /* case: group is from host2, so we need a newgroup */ + } else if (grp[i].hostid == HOSTID2) { + + /* output newgroup */ + if (! rm_cycle) { + printf("ctlinnd newgroup %s %s %s\n", + grp[i].name, grp[i].outtype, new_name); + ++add; + ++work; + } + + /* case: group is from host1, but the type changed */ + } else if (grp[i].type != grp[i].outtype && + strcmp(grp[i].type,grp[i].outtype) != 0) { + + /* output changegroup */ + if (! rm_cycle) { + printf("ctlinnd changegroup %s %s\n", + grp[i].name, grp[i].outtype); + ++change; + ++work; + } + + /* case: group did not change */ + } else { + if (! rm_cycle) { + ++same; + } + } + + /* case: exec ctlinnd commands */ + } else if (o_flag == OUTPUT_EXEC || o_flag == OUTPUT_IEXEC) { + + /* warn about sleeping if needed and first time */ + if (o_flag == OUTPUT_EXEC && z_flag > 0 && sleep_msg == 0) { + if (D_SUMMARY) + warn("will sleep %d seconds before each fork/exec", + z_flag); + sleep_msg = 1; + } + + /* case: group needs removal */ + if (grp[i].remove) { + + /* exec rmgroup */ + if (rm_cycle) { + if (D_REPORT && o_flag == OUTPUT_EXEC) + warn("rmgroup %s", grp[i].name); + if (! exec_cmd(o_flag, "rmgroup", + grp[i].name, NULL, NULL)) { + ++not_done; + } else { + ++remove; + ++work; + } + } + + /* case: group is from host2, so we need a newgroup */ + } else if (grp[i].hostid == HOSTID2) { + + /* exec newgroup */ + if (!rm_cycle) { + if (D_REPORT && o_flag == OUTPUT_EXEC) + warn("newgroup %s %s %s", + grp[i].name, grp[i].outtype, new_name); + if (! 
exec_cmd(o_flag, "newgroup", grp[i].name, + grp[i].outtype, new_name)) { + ++not_done; + } else { + ++add; + ++work; + } + } + + /* case: group is from host1, but the type changed */ + } else if (grp[i].type != grp[i].outtype && + strcmp(grp[i].type,grp[i].outtype) != 0) { + + /* exec changegroup */ + if (!rm_cycle) { + if (D_REPORT && o_flag == OUTPUT_EXEC) + warn("changegroup %s %s", + grp[i].name, grp[i].outtype); + if (! exec_cmd(o_flag, "changegroup", grp[i].name, + grp[i].outtype, NULL)) { + ++not_done; + } else { + ++change; + ++work; + } + } + + /* case: group did not change */ + } else { + if (! rm_cycle) { + ++same; + } + } + } + } + } while (--rm_cycle >= 0); + + /* final accounting, if -v */ + if (D_SUMMARY || (D_IF_SUMM && (work > 0 || not_done > 0))) { + warn("STATUS: %d group(s)", add+remove+change+same); + warn("STATUS: %d group(s)%s added", add, + ((o_flag == OUTPUT_EXEC || o_flag == OUTPUT_IEXEC) ? + "" : " to be")); + warn("STATUS: %d group(s)%s removed", remove, + ((o_flag == OUTPUT_EXEC || o_flag == OUTPUT_IEXEC) ? + "" : " to be")); + warn("STATUS: %d group(s)%s changed", change, + ((o_flag == OUTPUT_EXEC || o_flag == OUTPUT_IEXEC) ? + "" : " to be")); + warn("STATUS: %d group(s) %s the same", same, + ((o_flag == OUTPUT_EXEC || o_flag == OUTPUT_IEXEC) ? + "remain" : "are")); + warn("STATUS: %.2f%% of lines unchanged", host1_same); + warn("STATUS: %d group(s) ignored", ignore); + if (o_flag == OUTPUT_EXEC || o_flag == OUTPUT_IEXEC) + warn("STATUS: %d exec(s) not performed", not_done); + } +} + +/* + * error_mark - mark for removal, error groups from a given host + * + * given: + * grp array of groups + * grplen length of grp array in elements + * hostid host to mark error groups for removal + */ +static void +error_mark(grp, grplen, hostid) + struct grp *grp; /* array of groups */ + int grplen; /* length of grp array in elements */ + int hostid; /* host to mark error groups for removal */ +{ + int i; + int errcnt; + + /* firewall */ + if (grp == NULL) + die("internal error #11: grp is NULL"); + + /* loop thru groups, looking for error groups from a given host */ + errcnt = 0; + for (i=0; i < grplen; ++i) { + + /* skip if not from hostid */ + if (grp[i].hostid != hostid) { + continue; + } + + /* mark for removal if an error group not already removed */ + if (IS_ERROR(grp[i].ignore)) { + + /* mark for removal */ + if (grp[i].output != 1 || grp[i].remove != 1) { + grp[i].output = 1; + grp[i].remove = 1; + } + ++errcnt; + } + } + + /* all done */ + if (D_SUMMARY || (D_IF_SUMM && errcnt > 0)) + warn("STATUS: marked %d error groups for removal", errcnt); + return; +} + +/* + * eq_merge_cmp - qsort compare function for =type group processing + * + * given: + * a =group a to compare + * b =group b to compare + * + * returns: + * >0 a > b + * 0 a == b elements match (fatal error if a and b are different) + * <0 a < b + * + * To speed up group comparison, we compare by the following items listed + * in order of sorting: + * + * skip (non-skipped groups after skipped ones) + * group equiv name + * group name + * hostid (host1 ahead of host2) + * linenum (active file line number) + */ +static int +eq_merge_cmp(arg_a, arg_b) + const void *arg_a; /* first qsort compare arg */ + const void *arg_b; /* first qsort compare arg */ +{ + const struct eqgrp *a = arg_a; /* group a to compare */ + const struct eqgrp *b = arg_b; /* group b to compare */ + int i; + + /* firewall */ + if (a == b) { + /* we guess this could happen */ + return(0); + } + + /* compare skip values */ + if (a->skip != 
b->skip) { + if (a->skip > b->skip) { + /* a is skipped, b is not */ + return 1; + } else { + /* b is skipped, a is not */ + return -1; + } + } + + /* compare the names the groups are equivalenced to */ + i = strcmp(a->eq, b->eq); + if (i != 0) { + return i; + } + + /* compare the group names themselves */ + i = strcmp(a->g->name, b->g->name); + if (i != 0) { + return i; + } + + /* compare hostid's */ + if (a->g->hostid != b->g->hostid) { + if (a->g->hostid > b->g->hostid) { + return 1; + } else { + return -1; + } + } + + /* compare active line numbers */ + if (a->g->linenum != b->g->linenum) { + if (a->g->linenum > b->g->linenum) { + return 1; + } else { + return -1; + } + } + + /* two different elements match, this should not happen! */ + die("two internal eqgrp elements match!"); +} + +/* + * mark_eq_probs - mark =type groups from a given host that have problems + * + * given: + * grp sorted array of groups + * grplen length of grp array in elements + * hostid host to mark error groups for removal, or NOHOST + * host1 name of host with HOSTID1 + * host2 name of host with HOSTID2 + * + * This function assumes that the grp array has been sorted by name. + */ +static int +mark_eq_probs(grp, grplen, hostid, host1, host2) + struct grp *grp; /* array of groups */ + int grplen; /* length of grp array in elements */ + int hostid; /* host to mark error groups for removal */ + char *host1; /* name of host with HOSTID1 */ + char *host2; /* name of host with HOSTID2 */ +{ + struct eqgrp *eqgrp; /* =type pointer array */ + int eq_cnt; /* number of =type groups from host */ + int new_eq_cnt; /* number of =type groups remaining */ + int missing; /* =type groups equiv to missing groups */ + int cycled; /* =type groups equiv to themselves */ + int chained; /* =type groups in long chain or loop */ + int cmp; /* strcmp of two names */ + int step; /* equiv loop step */ + int i; + int j; + + /* firewall */ + if (grp == NULL) + die("internal error #12: grp is NULL"); + if (hostid == NOHOST) { + /* nothing to detect, nothing else to do */ + return 0; + } + + /* count the =type groups from hostid that are not in error */ + eq_cnt = 0; + for (i=0; i < grplen; ++i) { + if (grp[i].hostid == hostid && + ! IS_ERROR(grp[i].ignore) && + grp[i].type != NULL && + grp[i].type[0] == '=') { + ++eq_cnt; + } + } + if (D_BUG && hostid != NOHOST) + warn("STATUS: host%d has %d =type groups", hostid, eq_cnt); + + /* if no groups, then there is nothing to do */ + if (eq_cnt == 0) { + return 0; + } + + /* setup the =group record array */ + eqgrp = xmalloc(eq_cnt * sizeof(eqgrp[0])); + for (i=0, j=0; i < grplen && j < eq_cnt; ++i) { + if (grp[i].hostid == hostid && + ! 
IS_ERROR(grp[i].ignore) && + grp[i].type != NULL && + grp[i].type[0] == '=') { + + /* initialize record */ + eqgrp[j].skip = 0; + eqgrp[j].g = &grp[i]; + eqgrp[j].eq = &(grp[i].type[1]); + ++j; + } + } + + /* + * try to resolve =type groups in at least EQ_LOOP equiv links + */ + new_eq_cnt = eq_cnt; + missing = 0; + cycled = 0; + for (step=0; step < EQ_LOOP && new_eq_cnt >= 0; ++step) { + + /* sort the =group record array */ + qsort((char *)eqgrp, eq_cnt, sizeof(eqgrp[0]), eq_merge_cmp); + + /* look for the groups to which =type group point at */ + eq_cnt = new_eq_cnt; + for (i=0, j=0; i < grplen && j < eq_cnt; ++i) { + + /* we will skip any group in error or from the wrong host */ + if (grp[i].hostid != hostid || IS_ERROR(grp[i].ignore)) { + continue; + } + + /* we will skip any skipped eqgrp's */ + if (eqgrp[j].skip) { + /* try the same group against the next eqgrp */ + --i; + ++j; + continue; + } + + /* compare the =name of the eqgrp with the name of the grp */ + cmp = strcmp(grp[i].name, eqgrp[j].eq); + + /* case: this group is pointed at by an eqgrp */ + if (cmp == 0) { + + /* see if we have looped around to the original group name */ + if (strcmp(grp[i].name, eqgrp[j].g->name) == 0) { + + /* note the detected loop */ + if (! QUIET(hostid)) + warn("%s from %s line %d =loops around to itself", + eqgrp[j].g->name, + ((eqgrp[j].g->hostid == HOSTID1) ? host1 : host2), + eqgrp[j].g->linenum); + eqgrp[j].g->ignore |= ERROR_EQLOOP; + + /* the =group is bad, so we don't need to bother with it */ + eqgrp[j].skip = 1; + --new_eq_cnt; + ++cycled; + --i; + ++j; + continue; + } + + /* if =group refers to a valid group, we are done with it */ + if (grp[i].type != NULL && grp[i].type[0] != '=') { + eqgrp[j].skip = 1; + --new_eq_cnt; + /* otherwise note the equiv name */ + } else { + eqgrp[j].eq = &(grp[i].type[1]); + } + --i; + ++j; + + /* case: we missed the =name */ + } else if (cmp > 0) { + + /* mark the eqgrp in error */ + eqgrp[j].g->ignore |= ERROR_NONEQ; + if (! QUIET(hostid)) + warn("%s from %s line %d not equiv to a valid group", + eqgrp[j].g->name, + ((eqgrp[j].g->hostid == HOSTID1) ? host1 : host2), + eqgrp[j].g->linenum); + + /* =group is bad, so we don't need to bother with it anymore */ + eqgrp[j].skip = 1; + --new_eq_cnt; + ++missing; + ++j; + } + } + + /* any remaining non-skipped eqgrps are bad */ + while (j < eq_cnt) { + + /* mark the eqgrp in error */ + eqgrp[j].g->ignore |= ERROR_NONEQ; + if (! QUIET(hostid)) + warn("%s from %s line %d isn't equiv to a valid group", + eqgrp[j].g->name, + ((hostid == HOSTID1) ? host1 : host2), + eqgrp[j].g->linenum); + + /* the =group is bad, so we don't need to bother with it anymore */ + eqgrp[j].skip = 1; + --new_eq_cnt; + ++missing; + ++j; + } + } + + /* note groups that are in a long chain or loop */ + chained = new_eq_cnt; + qsort((char *)eqgrp, eq_cnt, sizeof(eqgrp[0]), eq_merge_cmp); + for (j=0; j < new_eq_cnt; ++j) { + + /* skip if already skipped */ + if (eqgrp[j].skip == 1) { + continue; + } + + /* mark as a long loop group */ + eqgrp[j].g->ignore |= ERROR_LONGLOOP; + if (! QUIET(hostid)) + warn("%s from %s line %d in a long equiv chain or loop > %d", + eqgrp[j].g->name, + ((hostid == HOSTID1) ? host1 : host2), + eqgrp[j].g->linenum, EQ_LOOP); + } + + /* all done */ + if (D_BUG) { + warn("%d =type groups from %s are not equiv to a valid group", + missing, ((hostid == HOSTID1) ? host1 : host2)); + warn("%d =type groups from %s are equiv to themselves", + cycled, ((hostid == HOSTID1) ? 
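mark_eq_probs() above resolves "=other.group" aliases by repeatedly sorting and re-walking the whole array, giving up after EQ_LOOP passes; the three failure cases it reports are a missing target, a group that loops back to itself, and a chain still unresolved after the pass limit. The sketch below follows one alias chain directly, which is not how the real code iterates, but it produces the same three outcomes (the table contents and the EQ_STEPS bound are invented):

    #include <stdio.h>
    #include <string.h>

    #define EQ_STEPS 16                 /* stand-in for actsync's EQ_LOOP bound */

    struct g { const char *name; const char *type; };  /* type "=x" means alias */

    static const struct g tab[] = {
        { "a.one",  "=a.two"   },
        { "a.two",  "y"        },
        { "b.loop", "=b.loop"  },
        { "c.dead", "=no.such" },
    };

    static const struct g *
    find(const char *name)
    {
        size_t i;

        for (i = 0; i < sizeof(tab) / sizeof(tab[0]); i++)
            if (strcmp(tab[i].name, name) == 0)
                return &tab[i];
        return NULL;
    }

    /* Resolve an =type group; longer cycles (a -> b -> a) fall out of the
     * step limit, just as ERROR_LONGLOOP does in mark_eq_probs(). */
    static int
    resolve(const char *start)
    {
        const char *cur = start;
        int step;

        for (step = 0; step < EQ_STEPS; step++) {
            const struct g *gp = find(cur);

            if (gp == NULL) {
                printf("%s: not equiv to a valid group\n", start);
                return -1;
            }
            if (gp->type[0] != '=') {
                printf("%s: resolves to %s (type %s)\n", start, gp->name, gp->type);
                return 0;
            }
            cur = gp->type + 1;
            if (strcmp(cur, start) == 0) {
                printf("%s: =loops around to itself\n", start);
                return -1;
            }
        }
        printf("%s: chain longer than %d\n", start, EQ_STEPS);
        return -1;
    }

    int
    main(void)
    {
        resolve("a.one");
        resolve("b.loop");
        resolve("c.dead");
        return 0;
    }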
host1 : host2)); + warn("%d =type groups from %s are in a long chain or loop > %d", + chained, ((hostid == HOSTID1) ? host1 : host2), EQ_LOOP); + } + free(eqgrp); + return missing+cycled+chained; +} + +/* + * exec_cmd - exec a ctlinnd command in forked process + * + * given: + * mode OUTPUT_EXEC or OUTPUT_IEXEC (interactive mode) + * cmd "changegroup", "newgroup", "rmgroup" + * grp name of group + * type type of group or NULL + * who newgroup creator or NULL + * + * returns: + * 1 exec was performed + * 0 exec was not performed + */ +static int +exec_cmd(mode, cmd, grp, type, who) + int mode; /* OUTPUT_EXEC or OUTPUT_IEXEC (interactive mode) */ + char *cmd; /* changegroup, newgroup or rmgroup */ + char *grp; /* name of group to change, add, remove */ + char *type; /* type of group or NULL */ + char *who; /* newgroup creator or NULL */ +{ + FILE *ch_stream = NULL; /* stream from a child process */ + char buf[BUFSIZ+1]; /* interactive buffer */ + int pid; /* pid of child process */ + int io[2]; /* pair of pipe descriptors */ + int status; /* wait status */ + int exitval; /* exit status of the child */ + char *p; + + /* firewall */ + if (cmd == NULL || grp == NULL) + die("internal error #13, cmd or grp is NULL"); + + /* if interactive, ask the question */ + if (mode == OUTPUT_IEXEC) { + + /* ask the question */ + fflush(stdin); + fflush(stdout); + fflush(stderr); + if (type == NULL) { + printf("%s %s [yn]? ", cmd, grp); + } else if (who == NULL) { + printf("%s %s %s [yn]? ", cmd, grp, type); + } else { + printf("%s %s %s %s [yn]? ", cmd, grp, type, who); + } + fflush(stdout); + buf[0] = '\0'; + buf[BUFSIZ] = '\0'; + p = fgets(buf, BUFSIZ, stdin); + if (p == NULL) { + /* EOF/ERROR on interactive input, silently stop processing */ + exit(43); + } + + /* if non-empty line doesn't start with 'y' or 'Y', skip command */ + if (buf[0] != 'y' && buf[0] != 'Y' && buf[0] != '\n') { + /* indicate nothing was done */ + return 0; + } + } + + /* build a pipe for output from child interactive mode */ + if (mode == OUTPUT_IEXEC) { + if (pipe(io) < 0) + sysdie("pipe create failed"); + + /* setup a fake pipe to /dev/null for non-interactive mode */ + } else { + io[READ_SIDE] = open(DEV_NULL, 0); + if (io[READ_SIDE] < 0) + sysdie("unable to open %s for reading", DEV_NULL); + io[WRITE_SIDE] = open(DEV_NULL, 1); + if (io[WRITE_SIDE] < 0) + sysdie("unable to open %s for writing", DEV_NULL); + } + + /* pause if in non-interactive mode so as to not busy-out the server */ + if (mode == OUTPUT_EXEC && z_flag > 0) { + if (D_BUG) + warn("sleeping %d seconds before fork/exec", z_flag); + /* be sure they know what we are stalling */ + fflush(stderr); + sleep(z_flag); + } + + /* fork the child process */ + fflush(stdout); + fflush(stderr); + pid = fork(); + if (pid == -1) + sysdie("fork failed"); + + /* case: child process */ + if (pid == 0) { + + /* + * prep file descriptors + */ + fclose(stdin); + close(io[READ_SIDE]); + if (dup2(io[WRITE_SIDE], 1) < 0) + sysdie("child: dup of write I/O pipe to stdout failed"); + if (dup2(io[WRITE_SIDE], 2) < 0) + sysdie("child: dup of write I/O pipe to stderr failed"); + + /* exec the ctlinnd command */ + p = concatpath(innconf->pathbin, _PATH_CTLINND); + if (type == NULL) { + execl(p, + CTLINND_NAME, CTLINND_TIME_OUT, cmd, grp, (char *) 0); + } else if (who == NULL) { + execl(p, + CTLINND_NAME, CTLINND_TIME_OUT, cmd, grp, type, (char *) 0); + } else { + execl(p, + CTLINND_NAME, CTLINND_TIME_OUT, cmd, grp, type, who, (char *) 0); + } + + /* child exec failed */ + sysdie("child process 
exec failed"); + + /* case: parent process */ + } else { + + /* prep file descriptors */ + if (mode != OUTPUT_IEXEC) { + close(io[READ_SIDE]); + } + close(io[WRITE_SIDE]); + + /* print a line from the child, if interactive */ + if (mode == OUTPUT_IEXEC) { + + /* read what the child says */ + buf[0] = '\0'; + buf[BUFSIZ] = '\0'; + ch_stream = fdopen(io[READ_SIDE], "r"); + if (ch_stream == NULL) + sysdie("fdopen of pipe failed"); + p = fgets(buf, BUFSIZ, ch_stream); + + /* print what the child said, if anything */ + if (p != NULL) { + if (buf[strlen(buf)-1] == '\n') + buf[strlen(buf)-1] = '\0'; + warn(" %s", buf); + } + } + + /* look for abnormal child termination/status */ + errno = 0; + while (wait(&status) < 0) { + if (errno == EINTR) { + /* just an interrupt, try to wait again */ + errno = 0; + } else { + sysdie("wait returned -1"); + } + } + if (mode == OUTPUT_IEXEC) { + /* close the pipe now that we are done with reading it */ + fclose(ch_stream); + } + if (WIFSTOPPED(status)) { + warn(" %s %s %s%s%s%s%s stopped", + CTLINND_NAME, cmd, grp, + (type ? "" : " "), (type ? type : ""), + (who ? "" : " "), (who ? who : "")); + /* assume no work was done */ + return 0; + } + if (WIFSIGNALED(status)) { + warn(" %s %s %s%s%s%s%s killed by signal %d", + CTLINND_NAME, cmd, grp, + (type ? "" : " "), (type ? type : ""), + (who ? "" : " "), (who ? who : ""), WTERMSIG(status)); + /* assume no work was done */ + return 0; + } + if (!WIFEXITED(status)) { + warn(" %s %s %s%s%s%s%s returned unknown wait status: 0x%x", + CTLINND_NAME, cmd, grp, + (type ? "" : " "), (type ? type : ""), + (who ? "" : " "), (who ? who : ""), status); + /* assume no work was done */ + return 0; + } + exitval = WEXITSTATUS(status); + if (exitval != 0) { + warn(" %s %s %s%s%s%s%s exited with status: %d", + CTLINND_NAME, cmd, grp, + (type ? "" : " "), (type ? type : ""), + (who ? "" : " "), (who ? who : ""), exitval); + /* assume no work was done */ + return 0; + } + } + + /* all done */ + return 1; +} + +/* + * new_top_hier - determine if the newsgroup represents a new hierarchy + * + * Determine of the newsgroup name is a new hierarchy. + * + * given: + * name name of newsgroup to check + * + * returns: + * false hierarchy already exists + * true hierarchy does not exist, name represents a new hierarchy + * + * NOTE: This function assumes that we are at the top of the news spool. + */ +static int +new_top_hier(name) + char *name; +{ + struct stat statbuf; /* stat of the hierarchy */ + int result; /* return result */ + char *dot; + + /* + * temp change name to just the top level + */ + dot = strchr(name, '.'); + if (dot != NULL) { + *dot = '\0'; + } + + /* + * determine if we can find this top level hierarchy directory + */ + result = !(stat(name, &statbuf) >= 0 && S_ISDIR(statbuf.st_mode)); + /* restore name */ + if (dot != NULL) { + *dot = '.'; + } + + /* + * return the result + */ + return result; +} diff --git a/backends/actsyncd.in b/backends/actsyncd.in new file mode 100644 index 0000000..a88f25d --- /dev/null +++ b/backends/actsyncd.in @@ -0,0 +1,256 @@ +#! 
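exec_cmd() above is the standard pipe/fork/exec/wait pattern: the child's stdout and stderr are pointed at a pipe, the parent echoes one line of whatever the child printed, and the wait status is decoded with WIFEXITED/WIFSIGNALED before deciding whether the ctlinnd command counted as done. A stripped-down sketch of that pattern, running /bin/echo as a stand-in for ctlinnd:

    #include <errno.h>
    #include <stdio.h>
    #include <sys/types.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int
    main(void)
    {
        int io[2];
        pid_t pid;
        int status;
        char buf[256];
        ssize_t n;

        if (pipe(io) < 0) {
            perror("pipe");
            return 1;
        }
        pid = fork();
        if (pid < 0) {
            perror("fork");
            return 1;
        }
        if (pid == 0) {                     /* child: point stdout/stderr at the pipe */
            close(io[0]);
            if (dup2(io[1], 1) < 0 || dup2(io[1], 2) < 0)
                _exit(1);
            execl("/bin/echo", "echo", "pretend ctlinnd output", (char *) 0);
            _exit(1);                       /* exec failed */
        }
        close(io[1]);                       /* parent: read what the child said */
        n = read(io[0], buf, sizeof(buf) - 1);
        if (n > 0) {
            buf[n] = '\0';
            printf("child said: %s", buf);
        }
        close(io[0]);
        while (waitpid(pid, &status, 0) < 0) {
            if (errno != EINTR) {           /* retry only on interrupts, as exec_cmd does */
                perror("waitpid");
                return 1;
            }
        }
        if (WIFEXITED(status))
            printf("exit status %d\n", WEXITSTATUS(status));
        else if (WIFSIGNALED(status))
            printf("killed by signal %d\n", WTERMSIG(status));
        return 0;
    }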
/bin/sh +# fixscript will replace this line with code to load innshellvars + +# @(#) $Id: actsyncd.in 6490 2003-10-18 05:49:04Z rra $ +# @(#) Under RCS control in /usr/local/news/src/inn/local/RCS/actsyncd.sh,v +# +# actsyncd - actsync daemon +# +# usage: +# actsyncd [-x] config_file [debug_level [debug_outfmt]] +# +# -x xexec instead of reload +# config_file name of file used to determine how to run actsync +# debug_level force no action and use -v debug_level +# debug_outfmt change -o a1 output to -o debug_outfmt for debug + +# By: Landon Curt Noll chongo@toad.com (chongo was here /\../\) +# +# Copyright (c) Landon Curt Noll, 1993. +# All rights reserved. +# +# Permission to use and modify is hereby granted so long as this +# notice remains. Use at your own risk. No warranty is implied. + +# preset vars +# + +# Our lock file +LOCK="${LOCKS}/LOCK.actsyncd" +# where actsync is located +ACTSYNC="${PATHBIN}/actsync" +# exit value of actsync if unable to get an active file +NOSYNC=127 + +# parse args +# +if [ $# -gt 1 ]; then + case $1 in + -x|-r) shift ;; # no longer relevant + esac +fi +case $# in + 1) cfg="$1"; DEBUG=; DEBUG_FMT=; ;; + 2) cfg="$1"; DEBUG="$2"; DEBUG_FMT=; ;; + 3) cfg="$1"; DEBUG="$2"; DEBUG_FMT="$3"; ;; + *) echo "usage: $0 [-x] config_file [debug_level [debug_outfmt]]" 1>&2; + exit 1 ;; +esac +if [ ! -s "$cfg" ]; then + echo "$0: config_file not found or empty: $ign" 1>&2 + exit 2 +fi + +# parse config_file +# +host="`sed -n -e 's/^host=[ ]*//p' $cfg | tail -1`" +if [ -z "$host" ]; then + echo "$0: no host specified in $cfg" 1>&2 + exit 3 +fi +flags="`sed -n -e 's/^flags=[ ]*//p' $cfg | tail -1`" +if [ -z "$flags" ]; then + echo "$0: no flags specified in $cfg" 1>&2 + exit 4 +fi +ign="`sed -n -e 's/^ignore_file=[ ]*//p' $cfg | tail -1`" +if [ -z "$ign" ]; then + echo "$0: no ignore file specified in $cfg" 1>&2 + exit 5 +fi +ftp="`sed -n -e 's/^ftppath=[ ]*//p' $cfg | tail -1`" +spool="`sed -n -e 's/^spool=[ ]*//p' $cfg | tail -1`" +if [ -z "$spool" ]; then + spool=$SPOOL + #echo "$0: no spool directory specified in $cfg" 1>&2 + #exit 6 +fi +if [ ! -f "$ign" ]; then + ign="${PATHETC}/$ign" +fi +if [ ! -s "$ign" ]; then + echo "$0: ignore_file not found or empty: $ign" 1>&2 + exit 7 +fi + +# force -o c mode (overrides any -o argument in the command line) +# +if [ -z "$DEBUG" ]; then + + # standard actsyncd output mode + flags="$flags -o c" + +# DEBUG processing, if debug_level was given +# +else + + if [ ! -z "$ftp" ]; then + echo "$0: cannot use DEBUG mode with ftp (yet)" >&2 + exit 88; + fi + + # force -v level as needed + flags="$flags -v $DEBUG" + + # force -o level but reject -o x modes + if [ ! -z "$DEBUG_FMT" ]; then + case "$DEBUG_FMT" in + x*) echo "$0: do not use any of the -o x debug_outfmt modes!" 1>&2; + exit 8 ;; + *) flags="$flags -o $DEBUG_FMT" ;; + esac + fi + + # execute actsync directly + echo "DEBUG: will execute $ACTSYNC -i $ign $flags $host" 1>&2 + eval "$ACTSYNC -i $ign $flags $host" + status="$?" + echo "DEBUG: exit status $status" 1>&2 + exit "$status" +fi + +# Lock out others +# +shlock -p $$ -f "${LOCK}" || { + echo "$0: Locked by `cat '${LOCK}'`" 1>&2 + exit 9 +} + +# setup +# +origdir=`pwd` +workdir="${TMPDIR}/actsyncd" +ctlinndcmds="cc_commands" +out="sync.msg" +cleanup="$SED -e 's/^/ /' < $out; cd ${origdir}; rm -rf '$workdir' '$LOCK'" +trap "eval $cleanup; exit 123" 1 2 3 15 + +set -e +rm -rf "$workdir" +mkdir "$workdir" +cd "$workdir" +set +e + +rm -f "$out" +touch "$out" +chmod 0644 "$out" + +# try to sync +# +# Try to sync off of the host. 
If unable to connect/sync then retry +# up to 9 more times waiting 6 minutes between each try. +# +echo "=-= `date` for $host" >>$out 2>&1 +for loop in 1 2 3 4 5 6 7 8 9 10; do + + # get the active file to compare against + status=0 + case $host in + /*) cp $host active; status=$? ;; + .*) cp $origdir/$host active; status=$? ;; + *) + if [ -z "$ftp" ]; then + port=`expr "$host" : '.*:\(.*\)'` + if [ -n "$port" ]; then + port="-p $port" + host=`expr "$host" : '\(.*\):.*'` + fi + echo "getlist -h $host $port" >>$out + if getlist -h $host $port > active 2>>$out; then + : + else + status=$NOSYNC + fi + else + echo "$GETFTP ftp://$host/$ftp" >>$out + $GETFTP ftp://$host/$ftp >>$out 2>&1 + status=$? + if [ "$status" -ne 0 ]; then + status=$NOSYNC + else + case "$ftp" in + *.gz) + echo "$GZIP -d active" >>$out + if $GZIP -d active >>$out 2>&1; then + : + else + status=1 + fi + ;; + *.Z) + echo "$UNCOMPRESS active" >>$out + if $UNCOMPRESS active >>$out 2>&1; then + : + else + status=1 + fi + ;; + esac + fi + fi + ;; + esac + + if [ "$status" -ne "$NOSYNC" ]; then + + # detect bad status + # + if [ "$status" -ne 0 ]; then + echo "FATAL: `date` for $host exit $status" >>$out + eval $cleanup + exit "$status" + fi + + echo "$ACTSYNC -i $ign $flags ./active" >>$out + eval "$ACTSYNC -i $ign $flags ./active >$ctlinndcmds 2>>$out" + + if [ $? -ne 0 ]; then + echo "FATAL: `date` for $host actsync balked" >>$out + eval $cleanup + exit $? + fi + + if [ ! -s $ctlinndcmds ]; then + echo "No changes need to be made" >>$out + else + echo "=-= `date` for $host, updating active" >>$out + echo "mod-active $ctlinndcmds" >>$out + mod-active $ctlinndcmds >>$out 2>&1 + + if [ $? -ne 0 ]; then + echo "FATAL: `date` for $host mod-active FAILED" >>$out + eval $cleanup + exit 1 + fi + fi + + # normal exit - all done + # + echo "=-= `date` for $host, end" >>$out + eval $cleanup + exit 0 + fi + + # failed to get the remote active file + echo "=-= `date` for $host failed to connect/sync, retrying" >>$out + + # wait 6 minutes + # + sleep 360 +done + +# give up +# +echo "FATAL: `date` for $host failed to connect/sync 10 times" >>$out 2>&1 +eval $cleanup +exit 1 diff --git a/backends/archive.c b/backends/archive.c new file mode 100644 index 0000000..73a7970 --- /dev/null +++ b/backends/archive.c @@ -0,0 +1,653 @@ +/* $Id: archive.c 6138 2003-01-19 04:13:51Z rra $ +** +** Read batchfiles on standard input and archive them. +*/ + +#include "config.h" +#include "clibrary.h" +#include +#include +#include +#include + +#ifdef TM_IN_SYS_TIME +# include +#endif + +#include "inn/innconf.h" +#include "inn/messages.h" +#include "inn/wire.h" +#include "libinn.h" +#include "paths.h" +#include "storage.h" + + +static char *Archive = NULL; +static char *ERRLOG = NULL; + +/* +** Return a YYYYMM string that represents the current year/month +*/ +static char * +DateString(void) +{ + static char ds[10]; + time_t now; + struct tm *x; + + time(&now); + x = localtime(&now); + snprintf(ds, sizeof(ds), "%d%d", x->tm_year + 1900, x->tm_mon + 1); + + return ds; +} + + +/* +** Try to make one directory. Return false on error. +*/ +static bool +MakeDir(char *Name) +{ + struct stat Sb; + + if (mkdir(Name, GROUPDIR_MODE) >= 0) + return true; + + /* See if it failed because it already exists. */ + return stat(Name, &Sb) >= 0 && S_ISDIR(Sb.st_mode); +} + + +/* +** Given an entry, comp/foo/bar/1123, create the directory and all +** parent directories needed. Return false on error. 
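One note on DateString() above: the comment promises a YYYYMM string, but a bare "%d%d" format does not zero-pad the month, so January 2009 would come out as "20091" rather than "200901". A small sketch of the zero-padded variant (same idea, only the format string differs):

    #include <stdio.h>
    #include <time.h>

    /* Zero-padded YYYYMM, so January 2009 is "200901" rather than "20091". */
    static const char *
    date_string(void)
    {
        static char ds[10];
        time_t now = time(NULL);
        struct tm *x = localtime(&now);

        snprintf(ds, sizeof(ds), "%04d%02d", x->tm_year + 1900, x->tm_mon + 1);
        return ds;
    }

    int
    main(void)
    {
        printf("%s\n", date_string());
        return 0;
    }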
+*/
+static bool
+MakeArchiveDirectory(char *Name)
+{
+    char *p;
+    char *save;
+    bool made;
+
+    if ((save = strrchr(Name, '/')) != NULL)
+        *save = '\0';
+
+    /* Optimize common case -- parent almost always exists. */
+    if (MakeDir(Name)) {
+        if (save)
+            *save = '/';
+        return true;
+    }
+
+    /* Try to make each of comp and comp/foo in turn. */
+    for (p = Name; *p; p++)
+        if (*p == '/' && p != Name) {
+            *p = '\0';
+            made = MakeDir(Name);
+            *p = '/';
+            if (!made) {
+                if (save)
+                    *save = '/';
+                return false;
+            }
+        }
+
+    made = MakeDir(Name);
+    if (save)
+        *save = '/';
+    return made;
+}
+
+
+/*
+** Copy a file. Return false if error.
+*/
+static bool
+Copy(char *src, char *dest)
+{
+    FILE *in;
+    FILE *out;
+    size_t i;
+    char *p;
+    char buff[BUFSIZ];
+
+    /* Open the output file. */
+    if ((out = fopen(dest, "w")) == NULL) {
+        /* Failed; make any missing directories and try again. */
+        if ((p = strrchr(dest, '/')) != NULL) {
+            if (!MakeArchiveDirectory(dest)) {
+                syswarn("cannot mkdir for %s", dest);
+                return false;
+            }
+            out = fopen(dest, "w");
+        }
+        if (p == NULL || out == NULL) {
+            syswarn("cannot open %s for writing", dest);
+            return false;
+        }
+    }
+
+    /* Opening the input file is easier. */
+    if ((in = fopen(src, "r")) == NULL) {
+        syswarn("cannot open %s for reading", src);
+        fclose(out);
+        unlink(dest);
+        return false;
+    }
+
+    /* Write the data. */
+    while ((i = fread(buff, 1, sizeof buff, in)) != 0)
+        if (fwrite(buff, 1, i, out) != i) {
+            syswarn("cannot write to %s", dest);
+            fclose(in);
+            fclose(out);
+            unlink(dest);
+            return false;
+        }
+    fclose(in);
+
+    /* Flush and close the output. */
+    if (ferror(out) || fflush(out) == EOF) {
+        syswarn("cannot flush %s", dest);
+        unlink(dest);
+        fclose(out);
+        return false;
+    }
+    if (fclose(out) == EOF) {
+        syswarn("cannot close %s", dest);
+        unlink(dest);
+        return false;
+    }
+
+    return true;
+}
+
+
+/*
+** Copy an article from memory into a file.
+*/
+static bool
+CopyArt(ARTHANDLE *art, char *dest, bool Concat)
+{
+    FILE *out;
+    const char *p;
+    char *q, *article;
+    size_t i;
+    const char *mode = "w";
+
+    if (Concat) mode = "a";
+
+    /* Open the output file. */
+    if ((out = fopen(dest, mode)) == NULL) {
+        /* Failed; make any missing directories and try again. */
+        if ((p = strrchr(dest, '/')) != NULL) {
+            if (!MakeArchiveDirectory(dest)) {
+                syswarn("cannot mkdir for %s", dest);
+                return false;
+            }
+            out = fopen(dest, mode);
+        }
+        if (p == NULL || out == NULL) {
+            syswarn("cannot open %s for writing", dest);
+            return false;
+        }
+    }
+
+    /* Copy the data, converting the wire format as we go: CRLF becomes a
+     * plain newline, a doubled leading dot is undone, and the lone ".\r\n"
+     * that terminates the article is dropped. */
+    article = xmalloc(art->len);
+    for (i=0, q=article, p=art->data; p < art->data + art->len;) {
+        if (&p[1] < art->data + art->len && p[0] == '\r' && p[1] == '\n') {
+            p += 2;
+            *q++ = '\n';
+            i++;
+            if (&p[1] < art->data + art->len && p[0] == '.' && p[1] == '.') {
+                /* undo dot-stuffing at the start of a line */
+                p += 2;
+                *q++ = '.';
+                i++;
+            }
+            if (&p[2] < art->data + art->len && p[0] == '.' && p[1] == '\r' && p[2] == '\n') {
+                /* ".\r\n" alone on a line ends the article */
+                break;
+            }
+        } else {
+            *q++ = *p++;
+            i++;
+        }
+    }
+    *q++ = '\0';
+
+    /* Write the data. */
+    if (Concat) {
+        /* Write a separator... */
+        fprintf(out, "-----------\n");
+    }
+    if (fwrite(article, i, 1, out) != 1) {
+        syswarn("cannot write to %s", dest);
+        fclose(out);
+        if (!Concat) unlink(dest);
+        free(article);
+        return false;
+    }
+    free(article);
+
+    /* Flush and close the output.
*/ + if (ferror(out) || fflush(out) == EOF) { + syswarn("cannot flush %s", dest); + if (!Concat) unlink(dest); + fclose(out); + return false; + } + if (fclose(out) == EOF) { + syswarn("cannot close %s", dest); + if (!Concat) unlink(dest); + return false; + } + + return true; +} + + +/* +** Write an index entry. Ignore I/O errors; our caller checks for them. +*/ +static void +WriteArtIndex(ARTHANDLE *art, char *ShortName) +{ + const char *p; + int i; + char Subject[BUFSIZ]; + char MessageID[BUFSIZ]; + + Subject[0] = '\0'; /* default to null string */ + p = wire_findheader(art->data, art->len, "Subject"); + if (p != NULL) { + for (i=0; *p != '\r' && *p != '\n' && *p != '\0'; i++) { + Subject[i] = *p++; + } + Subject[i] = '\0'; + } + + MessageID[0] = '\0'; /* default to null string */ + p = wire_findheader(art->data, art->len, "Message-ID"); + if (p != NULL) { + for (i=0; *p != '\r' && *p != '\n' && *p != '\0'; i++) { + MessageID[i] = *p++; + } + MessageID[i] = '\0'; + } + + printf("%s %s %s\n", + ShortName, + MessageID[0] ? MessageID : "", + Subject[0] ? Subject : ""); +} + + +/* +** Crack an Xref line apart into separate strings, each of the form "ng:artnum". +** Return in "lenp" the number of newsgroups found. +** +** This routine blatantly stolen from tradspool.c +*/ +static char ** +CrackXref(const char *xref, unsigned int *lenp) { + char *p; + char **xrefs; + char *q; + unsigned int len, xrefsize; + + len = 0; + xrefsize = 5; + xrefs = xmalloc(xrefsize * sizeof(char *)); + + /* skip pathhost */ + if ((p = strchr(xref, ' ')) == NULL) { + warn("cannot find pathhost in Xref header"); + return NULL; + } + /* skip next spaces */ + for (p++; *p == ' ' ; p++) ; + while (true) { + /* check for EOL */ + /* shouldn't ever hit null w/o hitting a \r\n first, but best to be paranoid */ + if (*p == '\n' || *p == '\r' || *p == 0) { + /* hit EOL, return. */ + *lenp = len; + return xrefs; + } + /* skip to next space or EOL */ + for (q=p; *q && *q != ' ' && *q != '\n' && *q != '\r' ; ++q) ; + + xrefs[len] = xstrndup(p, q - p); + + if (++len == xrefsize) { + /* grow xrefs if needed. */ + xrefsize *= 2; + xrefs = xrealloc(xrefs, xrefsize * sizeof(char *)); + } + + p = q; + /* skip spaces */ + for ( ; *p == ' ' ; p++) ; + } +} + + +/* +** Crack an groups pattern parameter apart into separate strings +** Return in "lenp" the number of patterns found. +*/ +static char ** +CrackGroups(char *group, unsigned int *lenp) { + char *p; + char **groups; + char *q; + unsigned int len, grpsize; + + len = 0; + grpsize = 5; + groups = xmalloc(grpsize * sizeof(char *)); + + /* skip leading spaces */ + for (p=group; *p == ' ' ; p++) ; + while (true) { + /* check for EOL */ + /* shouldn't ever hit null w/o hitting a \r\n first, but best to be paranoid */ + if (*p == '\n' || *p == '\r' || *p == 0) { + /* hit EOL, return. */ + *lenp = len; + return groups; + } + /* skip to next comma, space, or EOL */ + for (q=p; *q && *q != ',' && *q != ' ' && *q != '\n' && *q != '\r' ; ++q) ; + + groups[len] = xstrndup(p, q - p); + + if (++len == grpsize) { + /* grow groups if needed. 
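CrackXref() and CrackGroups() above share one idiom: walk the string, copy out each space- or comma-delimited token, and double the result array whenever it fills. A self-contained sketch of that tokenizer for an Xref-style header (allocation failures are not checked here; the original relies on INN's xmalloc/xrealloc wrappers, which abort on failure):

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    /* Split the body of an Xref header ("host grp:num grp:num ...") into one
     * string per entry, growing the array geometrically as needed. */
    static char **
    split_entries(const char *xref, unsigned int *lenp)
    {
        unsigned int len = 0, size = 5;
        char **out = malloc(size * sizeof(char *));
        const char *p = strchr(xref, ' ');      /* skip the pathhost */

        if (out == NULL || p == NULL)
            return NULL;
        while (*p == ' ')
            p++;
        while (*p != '\0' && *p != '\r' && *p != '\n') {
            const char *q = p;
            size_t n;

            while (*q && *q != ' ' && *q != '\r' && *q != '\n')
                q++;
            n = (size_t)(q - p);
            out[len] = malloc(n + 1);
            memcpy(out[len], p, n);
            out[len][n] = '\0';
            if (++len == size) {                /* grow the array when it fills */
                size *= 2;
                out = realloc(out, size * sizeof(char *));
            }
            p = q;
            while (*p == ' ')
                p++;
        }
        *lenp = len;
        return out;
    }

    int
    main(void)
    {
        unsigned int n, i;
        char **v = split_entries("news.example.com misc.test:3 alt.test:9", &n);

        for (i = 0; v != NULL && i < n; i++)
            printf("[%s]\n", v[i]);
        return 0;
    }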
*/ + grpsize *= 2; + groups = xrealloc(groups, grpsize * sizeof(char *)); + } + + p = q; + /* skip commas and spaces */ + for ( ; *p == ' ' || *p == ',' ; p++) ; + } +} + + +int +main(int ac, char *av[]) +{ + char *Name; + char *p; + FILE *F; + int i; + bool Flat; + bool Redirect; + bool Concat; + char *Index; + char buff[BUFSIZ]; + char *spool; + char dest[BUFSIZ]; + char **groups, *q, *ng; + char **xrefs; + const char *xrefhdr; + ARTHANDLE *art; + TOKEN token; + unsigned int numgroups, numxrefs; + int j; + char *base = NULL; + bool doit; + + /* First thing, set up our identity. */ + message_program_name = "archive"; + + /* Set defaults. */ + if (!innconf_read(NULL)) + exit(1); + Concat = false; + Flat = false; + Index = NULL; + Redirect = true; + umask(NEWSUMASK); + ERRLOG = concatpath(innconf->pathlog, _PATH_ERRLOG); + Archive = innconf->patharchive; + groups = NULL; + numgroups = 0; + + /* Parse JCL. */ + while ((i = getopt(ac, av, "a:cfi:p:r")) != EOF) + switch (i) { + default: + die("usage error"); + break; + case 'a': + Archive = optarg; + break; + case 'c': + Flat = true; + Concat = true; + break; + case 'f': + Flat = true; + break; + case 'i': + Index = optarg; + break; + case 'p': + groups = CrackGroups(optarg, &numgroups); + break; + case 'r': + Redirect = false; + break; + } + + /* Parse arguments -- at most one, the batchfile. */ + ac -= optind; + av += optind; + if (ac > 2) + die("usage error"); + + /* Do file redirections. */ + if (Redirect) + freopen(ERRLOG, "a", stderr); + if (ac == 1 && freopen(av[0], "r", stdin) == NULL) + sysdie("cannot open %s for input", av[0]); + if (Index && freopen(Index, "a", stdout) == NULL) + sysdie("cannot open %s for output", Index); + + /* Go to where the action is. */ + if (chdir(innconf->patharticles) < 0) + sysdie("cannot chdir to %s", innconf->patharticles); + + /* Set up the destination. */ + strcpy(dest, Archive); + Name = dest + strlen(dest); + *Name++ = '/'; + + if (!SMinit()) + die("cannot initialize storage manager: %s", SMerrorstr); + + /* Read input. */ + while (fgets(buff, sizeof buff, stdin) != NULL) { + if ((p = strchr(buff, '\n')) == NULL) { + warn("skipping %.40s: too long", buff); + continue; + } + *p = '\0'; + if (buff[0] == '\0' || buff[0] == '#') + continue; + + /* Check to see if this is a token... */ + if (IsToken(buff)) { + /* Get a copy of the article. */ + token = TextToToken(buff); + if ((art = SMretrieve(token, RETR_ALL)) == NULL) { + warn("cannot retrieve %s", buff); + continue; + } + + /* Determine groups from the Xref header */ + xrefhdr = wire_findheader(art->data, art->len, "Xref"); + if (xrefhdr == NULL) { + warn("cannot find Xref header"); + SMfreearticle(art); + continue; + } + + if ((xrefs = CrackXref(xrefhdr, &numxrefs)) == NULL || numxrefs == 0) { + warn("bogus Xref header"); + SMfreearticle(art); + continue; + } + + /* Process each newsgroup... */ + if (base) { + free(base); + base = NULL; + } + for (i=0; (unsigned)i 0) { + *p = '\0'; + ng = xrefs[i]; + doit = false; + for (j=0; (unsigned)jpathoutgoing, "archive"); + else if (*p == '/') + spool = concat(p, ".bch", (char *) 0); + else + spool = concat(innconf->pathoutgoing, "/", p, ".bch", (char *) 0); + if ((F = xfopena(spool)) == NULL) + sysdie("cannot spool to %s", spool); + + /* Write the rest of stdin to the spool file. 
*/ + i = 0; + if (fprintf(F, "%s\n", buff) == EOF) { + syswarn("cannot start spool"); + i = 1; + } + while (fgets(buff, sizeof buff, stdin) != NULL) + if (fputs(buff, F) == EOF) { + syswarn("cannot write to spool"); + i = 1; + break; + } + if (fclose(F) == EOF) { + syswarn("cannot close spool"); + i = 1; + } + + /* If we had a named input file, try to rename the spool. */ + if (p != NULL && rename(spool, av[0]) < 0) { + syswarn("cannot rename spool"); + i = 1; + } + + exit(i); + /* NOTREACHED */ +} diff --git a/backends/batcher.c b/backends/batcher.c new file mode 100644 index 0000000..0595778 --- /dev/null +++ b/backends/batcher.c @@ -0,0 +1,428 @@ +/* $Id: batcher.c 6762 2004-05-17 04:24:53Z rra $ +** +** Read batchfiles on standard input and spew out batches. +*/ + +#include "config.h" +#include "clibrary.h" +#include +#include +#include +#include +#include +#include + +#include "inn/innconf.h" +#include "inn/messages.h" +#include "inn/timer.h" +#include "libinn.h" +#include "paths.h" +#include "storage.h" + + +/* +** Global variables. +*/ +static bool BATCHopen; +static bool STATprint; +static double STATbegin; +static double STATend; +static char *Host; +static char *InitialString; +static char *Input; +static char *Processor; +static int ArtsInBatch; +static int ArtsWritten; +static int BATCHcount; +static int MaxBatches; +static int BATCHstatus; +static long BytesInBatch = 60 * 1024; +static long BytesWritten; +static long MaxArts; +static long MaxBytes; +static sig_atomic_t GotInterrupt; +static const char *Separator = "#! rnews %ld"; +static char *ERRLOG; + +/* +** Start a batch process. +*/ +static FILE * +BATCHstart(void) +{ + FILE *F; + char buff[SMBUF]; + + if (Processor && *Processor) { + snprintf(buff, sizeof(buff), Processor, Host); + F = popen(buff, "w"); + if (F == NULL) + return NULL; + } + else + F = stdout; + BATCHopen = true; + BATCHcount++; + return F; +} + + +/* +** Close a batch, return exit status. +*/ +static int +BATCHclose(FILE *F) +{ + BATCHopen = false; + if (F == stdout) + return fflush(stdout) == EOF ? 1 : 0; + return pclose(F); +} + + +/* +** Update the batch file and exit. +*/ +static void +RequeueAndExit(off_t Cookie, char *line, long BytesInArt) +{ + static char LINE1[] = "batcher %s times user %.3f system %.3f elapsed %.3f"; + static char LINE2[] ="batcher %s stats batches %d articles %d bytes %ld"; + char *spool; + char buff[BIG_BUFFER]; + int i; + FILE *F; + double usertime; + double systime; + + /* Do statistics. */ + STATend = TMRnow_double(); + if (GetResourceUsage(&usertime, &systime) < 0) { + usertime = 0; + systime = 0; + } + + if (STATprint) { + printf(LINE1, Host, usertime, systime, STATend - STATbegin); + printf("\n"); + printf(LINE2, Host, BATCHcount, ArtsWritten, BytesWritten); + printf("\n"); + } + + syslog(L_NOTICE, LINE1, Host, usertime, systime, STATend - STATbegin); + syslog(L_NOTICE, LINE2, Host, BATCHcount, ArtsWritten, BytesWritten); + + /* Last batch exit okay? */ + if (BATCHstatus == 0) { + if (feof(stdin) && Cookie != -1) { + /* Yes, and we're all done -- remove input and exit. */ + fclose(stdin); + if (Input) + unlink(Input); + exit(0); + } + } + + /* Make an appropriate spool file. */ + if (Input == NULL) + spool = concatpath(innconf->pathoutgoing, Host); + else + spool = concat(Input, ".bch", (char *) 0); + if ((F = xfopena(spool)) == NULL) + sysdie("%s cannot open %s", Host, spool); + + /* If we can back up to where the batch started, do so. 
*/ + i = 0; + if (Cookie != -1 && fseeko(stdin, Cookie, SEEK_SET) == -1) { + syswarn("%s cannot seek", Host); + i = 1; + } + + /* Write the line we had; if the fseeko worked, this will be an + * extra line, but that's okay. */ + if (line && fprintf(F, "%s %ld\n", line, BytesInArt) == EOF) { + syswarn("%s cannot write spool", Host); + i = 1; + } + + /* Write rest of stdin to spool. */ + while (fgets(buff, sizeof buff, stdin) != NULL) + if (fputs(buff, F) == EOF) { + syswarn("%s cannot write spool", Host); + i = 1; + break; + } + if (fclose(F) == EOF) { + syswarn("%s cannot close spool", Host); + i = 1; + } + + /* If we had a named input file, try to rename the spool. */ + if (Input != NULL && rename(spool, Input) < 0) { + syswarn("%s cannot rename spool", Host); + i = 1; + } + + exit(i); + /* NOTREACHED */ +} + + +/* +** Mark that we got interrupted. +*/ +static RETSIGTYPE +CATCHinterrupt(int s) +{ + GotInterrupt = true; + + /* Let two interrupts kill us. */ + xsignal(s, SIG_DFL); +} + + +int +main(int ac, char *av[]) +{ + bool Redirect; + FILE *F; + const char *AltSpool; + char *p; + char *data; + char line[BIG_BUFFER]; + char buff[BIG_BUFFER]; + int BytesInArt; + long BytesInCB; + off_t Cookie; + size_t datasize; + int i; + int ArtsInCB; + int length; + TOKEN token; + ARTHANDLE *art; + char *artdata; + + /* Set defaults. */ + openlog("batcher", L_OPENLOG_FLAGS | LOG_PID, LOG_INN_PROG); + message_program_name = "batcher"; + if (!innconf_read(NULL)) + exit(1); + AltSpool = NULL; + Redirect = true; + umask(NEWSUMASK); + ERRLOG = concatpath(innconf->pathlog, _PATH_ERRLOG); + + /* Parse JCL. */ + while ((i = getopt(ac, av, "a:A:b:B:i:N:p:rs:S:v")) != EOF) + switch (i) { + default: + die("usage error"); + break; + case 'a': + ArtsInBatch = atoi(optarg); + break; + case 'A': + MaxArts = atol(optarg); + break; + case 'b': + BytesInBatch = atol(optarg); + break; + case 'B': + MaxBytes = atol(optarg); + break; + case 'i': + InitialString = optarg; + break; + case 'N': + MaxBatches = atoi(optarg); + break; + case 'p': + Processor = optarg; + break; + case 'r': + Redirect = false; + break; + case 's': + Separator = optarg; + break; + case 'S': + AltSpool = optarg; + break; + case 'v': + STATprint = true; + break; + } + if (MaxArts && ArtsInBatch == 0) + ArtsInBatch = MaxArts; + if (MaxBytes && BytesInBatch == 0) + BytesInBatch = MaxBytes; + + /* Parse arguments. */ + ac -= optind; + av += optind; + if (ac != 1 && ac != 2) + die("usage error"); + Host = av[0]; + if ((Input = av[1]) != NULL) { + if (Input[0] != '/') + Input = concatpath(innconf->pathoutgoing, av[1]); + if (freopen(Input, "r", stdin) == NULL) + sysdie("%s cannot open %s", Host, Input); + } + + if (Redirect) + freopen(ERRLOG, "a", stderr); + + /* Go to where the articles are. */ + if (chdir(innconf->patharticles) < 0) + sysdie("%s cannot chdir to %s", Host, innconf->patharticles); + + /* Set initial counters, etc. */ + datasize = 8 * 1024; + data = xmalloc(datasize); + BytesInCB = 0; + ArtsInCB = 0; + Cookie = -1; + GotInterrupt = false; + xsignal(SIGHUP, CATCHinterrupt); + xsignal(SIGINT, CATCHinterrupt); + xsignal(SIGTERM, CATCHinterrupt); + /* xsignal(SIGPIPE, CATCHinterrupt); */ + STATbegin = TMRnow_double(); + + SMinit(); + F = NULL; + while (fgets(line, sizeof line, stdin) != NULL) { + /* Record line length in case we do an ftello. Not portable to + * systems with non-Unix file formats. 
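The Cookie bookkeeping around this loop is what lets RequeueAndExit() put unprocessed work back: each iteration records the offset where the current line started (ftello() minus the line length), and on an error or interrupt the program seeks back there and rewrites the rest of the input as a fresh batch file. A minimal sketch of that offset bookkeeping on an ordinary file ("batchfile" is a hypothetical name; batcher itself does this on stdin):

    #include <stdio.h>
    #include <string.h>
    #include <sys/types.h>

    int
    main(void)
    {
        FILE *f = fopen("batchfile", "r");  /* hypothetical input list */
        char line[1024];
        off_t cookie = -1;

        if (f == NULL)
            return 1;
        while (fgets(line, sizeof line, f) != NULL) {
            /* offset of the start of this line = current offset - its length */
            cookie = ftello(f) - (off_t)strlen(line);
            if (strncmp(line, "STOP", 4) == 0)
                break;                      /* pretend processing failed here */
        }
        if (cookie != -1 && fseeko(f, cookie, SEEK_SET) == 0) {
            /* everything from here on is unprocessed and would be requeued */
            while (fgets(line, sizeof line, f) != NULL)
                fputs(line, stdout);
        }
        fclose(f);
        return 0;
    }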
*/ + length = strlen(line); + Cookie = ftello(stdin) - length; + + /* Get lines like "name size" */ + if ((p = strchr(line, '\n')) == NULL) { + warn("%s skipping %.40s: too long", Host, line); + continue; + } + *p = '\0'; + if (line[0] == '\0' || line[0] == '#') + continue; + if ((p = strchr(line, ' ')) != NULL) { + *p++ = '\0'; + /* Try to be forgiving of bad input. */ + BytesInArt = CTYPE(isdigit, (int)*p) ? atol(p) : -1; + } + else + BytesInArt = -1; + + /* Strip of leading spool pathname. */ + if (line[0] == '/' + && line[strlen(innconf->patharticles)] == '/' + && strncmp(line, innconf->patharticles, strlen(innconf->patharticles)) == 0) + p = line + strlen(innconf->patharticles) + 1; + else + p = line; + + /* Open the file. */ + if (IsToken(p)) { + token = TextToToken(p); + if ((art = SMretrieve(token, RETR_ALL)) == NULL) { + if ((SMerrno != SMERR_NOENT) && (SMerrno != SMERR_UNINIT)) + warn("%s skipping %.40s: %s", Host, p, SMerrorstr); + continue; + } + BytesInArt = -1; + artdata = FromWireFmt(art->data, art->len, (size_t *)&BytesInArt); + SMfreearticle(art); + } else { + warn("%s skipping %.40s: not token", Host, p); + continue; + } + + /* Have an open article, do we need to open a batch? This code + * is here (rather then up before the while loop) so that we + * can avoid sending an empty batch. The goto makes the code + * a bit more clear. */ + if (F == NULL) { + if (GotInterrupt) { + RequeueAndExit(Cookie, (char *)NULL, 0L); + } + if ((F = BATCHstart()) == NULL) { + syswarn("%s cannot start batch %d", Host, BATCHcount); + break; + } + if (InitialString && *InitialString) { + fprintf(F, "%s\n", InitialString); + BytesInCB += strlen(InitialString) + 1; + BytesWritten += strlen(InitialString) + 1; + } + goto SendIt; + } + + /* We're writing a batch, see if adding the current article + * would exceed the limits. */ + if ((ArtsInBatch > 0 && ArtsInCB + 1 >= ArtsInBatch) + || (BytesInBatch > 0 && BytesInCB + BytesInArt >= BytesInBatch)) { + if ((BATCHstatus = BATCHclose(F)) != 0) { + if (BATCHstatus == -1) + syswarn("%s cannot close batch %d", Host, BATCHcount); + else + syswarn("%s batch %d exit status %d", Host, BATCHcount, + BATCHstatus); + break; + } + ArtsInCB = 0; + BytesInCB = 0; + + /* See if we can start a new batch. */ + if ((MaxBatches > 0 && BATCHcount >= MaxBatches) + || (MaxBytes > 0 && BytesWritten + BytesInArt >= MaxBytes) + || (MaxArts > 0 && ArtsWritten + 1 >= MaxArts)) { + break; + } + + if (GotInterrupt) { + RequeueAndExit(Cookie, (char *)NULL, 0L); + } + + if ((F = BATCHstart()) == NULL) { + syswarn("%s cannot start batch %d", Host, BATCHcount); + break; + } + } + + SendIt: + /* Now we can start to send the article! */ + if (Separator && *Separator) { + snprintf(buff, sizeof(buff), Separator, BytesInArt); + BytesInCB += strlen(buff) + 1; + BytesWritten += strlen(buff) + 1; + if (fprintf(F, "%s\n", buff) == EOF || ferror(F)) { + syswarn("%s cannot write separator", Host); + break; + } + } + + /* Write the article. In case of interrupts, retry the read but not + the fwrite because we can't check that reliably and portably. */ + if ((fwrite(artdata, 1, BytesInArt, F) != BytesInArt) || ferror(F)) + break; + + /* Update the counts. 
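The batch-cutting test above closes the current batch as soon as adding the next article would reach the per-batch article or byte limit, where a limit of zero means unlimited. A small sketch of that bookkeeping with a 60 KB byte limit, matching the default BytesInBatch above:

    #include <stdio.h>

    /* Should the current batch be closed before adding an article of "bytes"
     * bytes?  A limit of 0 disables that check, as in batcher. */
    static int
    batch_full(int arts_in_batch, long bytes_in_batch,
               int arts_limit, long bytes_limit, long bytes)
    {
        if (arts_limit > 0 && arts_in_batch + 1 >= arts_limit)
            return 1;
        return bytes_limit > 0 && bytes_in_batch + bytes >= bytes_limit;
    }

    int
    main(void)
    {
        long sizes[] = { 20000, 30000, 25000, 5000 };
        int arts = 0, i, batch = 1;
        long bytes = 0;

        for (i = 0; i < 4; i++) {
            if (batch_full(arts, bytes, 0, 60 * 1024, sizes[i])) {
                printf("batch %d: %d article(s), %ld bytes\n", batch++, arts, bytes);
                arts = 0;
                bytes = 0;
            }
            arts++;
            bytes += sizes[i];
        }
        printf("batch %d: %d article(s), %ld bytes\n", batch, arts, bytes);
        return 0;
    }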
*/ + BytesInCB += BytesInArt; + BytesWritten += BytesInArt; + ArtsInCB++; + ArtsWritten++; + + if (GotInterrupt) { + Cookie = -1; + BATCHstatus = BATCHclose(F); + RequeueAndExit(Cookie, line, BytesInArt); + } + } + + if (BATCHopen) + BATCHstatus = BATCHclose(F); + RequeueAndExit(Cookie, NULL, 0); + + return 0; +} diff --git a/backends/buffchan.c b/backends/buffchan.c new file mode 100644 index 0000000..34c9e5b --- /dev/null +++ b/backends/buffchan.c @@ -0,0 +1,483 @@ +/* $Id: buffchan.c 6163 2003-01-19 22:56:34Z rra $ +** +** Buffered file exploder for innd. +*/ + +#include "config.h" +#include "clibrary.h" +#include +#include +#include +#include + +#include "inn/innconf.h" +#include "inn/messages.h" +#include "inn/qio.h" +#include "libinn.h" +#include "paths.h" +#include "map.h" + +/* +** Hash functions for hashing sitenames. +*/ +#define SITE_HASH(Name, p, j) \ + for (p = Name, j = 0; *p; ) j = (j << 5) + j + *p++ +#define SITE_SIZE 128 +#define SITE_BUCKET(j) &SITEtable[j & (SITE_SIZE - 1)] + + +/* +** Entry for a single active site. +*/ +typedef struct _SITE { + bool Dropped; + const char *Name; + int CloseLines; + int FlushLines; + time_t LastFlushed; + time_t LastClosed; + int CloseSeconds; + int FlushSeconds; + FILE *F; + const char *Filename; + char *Buffer; +} SITE; + + +/* +** Site hashtable bucket. +*/ +typedef struct _SITEHASH { + int Size; + int Used; + SITE *Sites; +} SITEHASH; + + +/* Global variables. */ +static char *Format; +static const char *Map; +static int BufferMode; +static int CloseEvery; +static int FlushEvery; +static int CloseSeconds; +static int FlushSeconds; +static sig_atomic_t GotInterrupt; +static SITEHASH SITEtable[SITE_SIZE]; +static TIMEINFO Now; + + +/* +** Set up the site information. Basically creating empty buckets. +*/ +static void +SITEsetup(void) +{ + SITEHASH *shp; + + for (shp = SITEtable; shp < ARRAY_END(SITEtable); shp++) { + shp->Size = 3; + shp->Sites = xmalloc(shp->Size * sizeof(SITE)); + shp->Used = 0; + } +} + + +/* +** Close a site +*/ +static void +SITEclose(SITE *sp) +{ + FILE *F; + + if ((F = sp->F) != NULL) { + if (fflush(F) == EOF || ferror(F) + || fchmod((int)fileno(F), 0664) < 0 + || fclose(F) == EOF) + syswarn("%s cannot close %s", sp->Name, sp->Filename); + sp->F = NULL; + } +} + +/* +** Close all open sites. +*/ +static void +SITEcloseall(void) +{ + SITEHASH *shp; + SITE *sp; + int i; + + for (shp = SITEtable; shp < ARRAY_END(SITEtable); shp++) + for (sp = shp->Sites, i = shp->Used; --i >= 0; sp++) + SITEclose(sp); +} + + +/* +** Open the file for a site. +*/ +static void SITEopen(SITE *sp) +{ + int e; + + if ((sp->F = xfopena(sp->Filename)) == NULL + && ((e = errno) != EACCES || chmod(sp->Filename, 0644) < 0 + || (sp->F = xfopena(sp->Filename)) == NULL)) { + syswarn("%s cannot fopen %s", sp->Name, sp->Filename); + if ((sp->F = fopen("/dev/null", "w")) == NULL) + /* This really should not happen. */ + sysdie("%s cannot fopen /dev/null", sp->Name); + } + else if (fchmod((int)fileno(sp->F), 0444) < 0) + syswarn("%s cannot fchmod %s", sp->Name, sp->Filename); + + if (BufferMode != '\0') + setbuf(sp->F, sp->Buffer); + + /* Reset all counters. */ + sp->FlushLines = 0; + sp->CloseLines = 0; + sp->LastFlushed = Now.time; + sp->LastClosed = Now.time; + sp->Dropped = false; +} + + +/* +** Find a site, possibly create if not found. +*/ +static SITE * +SITEfind(char *Name, bool CanCreate) +{ + char *p; + int i; + unsigned int j; + SITE *sp; + SITEHASH *shp; + char c; + char buff[BUFSIZ]; + + /* Look for site in the hash table. 
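The SITE_HASH/SITE_BUCKET macros above are the classic shift-and-add string hash (j = j * 33 + character) masked into a table whose size is a power of two, which is why SITE_SIZE has to stay one for the mask to work. A tiny standalone version of the same hashing:

    #include <stdio.h>

    #define TABLE_SIZE 128                  /* must remain a power of two */

    /* Same shift-and-add string hash as the SITE_HASH macro: j = j*33 + *p. */
    static unsigned int
    site_hash(const char *name)
    {
        unsigned int j = 0;

        while (*name)
            j = (j << 5) + j + (unsigned char)*name++;
        return j;
    }

    int
    main(void)
    {
        const char *sites[] = { "newsfeed1", "newsfeed2", "uunet" };
        int i;

        for (i = 0; i < 3; i++)
            printf("%-10s -> bucket %u\n", sites[i],
                   site_hash(sites[i]) & (TABLE_SIZE - 1));
        return 0;
    }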
*/ + SITE_HASH(Name, p, j); + shp = SITE_BUCKET(j); + for (c = *Name, sp = shp->Sites, i = shp->Used; --i >= 0; sp++) + if (c == sp->Name[0] && strcasecmp(Name, sp->Name) == 0) + return sp; + if (!CanCreate) + return NULL; + + /* Adding a new site -- grow hash bucket if we need to. */ + if (shp->Used == shp->Size - 1) { + shp->Size *= 2; + shp->Sites = xrealloc(shp->Sites, shp->Size * sizeof(SITE)); + } + sp = &shp->Sites[shp->Used++]; + + /* Fill in the structure for the new site. */ + sp->Name = xstrdup(Name); + snprintf(buff, sizeof(buff), Format, Map ? MAPname(Name) : sp->Name); + sp->Filename = xstrdup(buff); + if (BufferMode == 'u') + sp->Buffer = NULL; + else if (BufferMode == 'b') + sp->Buffer = xmalloc(BUFSIZ); + SITEopen(sp); + + return sp; +} + + +/* +** Flush a site -- close and re-open the file. +*/ +static void +SITEflush(SITE *sp) +{ + FILE *F; + + if ((F = sp->F) != NULL) { + if (fflush(F) == EOF || ferror(F) + || fchmod((int)fileno(F), 0664) < 0 + || fclose(F) == EOF) + syswarn("%s cannot close %s", sp->Name, sp->Filename); + sp->F = NULL; + } + if (!sp->Dropped) + SITEopen(sp); +} + + +/* +** Flush all open sites. +*/ +static void +SITEflushall(void) +{ + SITEHASH *shp; + SITE *sp; + int i; + + for (shp = SITEtable; shp < ARRAY_END(SITEtable); shp++) + for (sp = shp->Sites, i = shp->Used; --i >= 0; sp++) + SITEflush(sp); +} + + +/* +** Write data to a site. +*/ +static void +SITEwrite(char *name, char *text, size_t len) +{ + SITE *sp; + + sp = SITEfind(name, true); + if (sp->F == NULL) + SITEopen(sp); + + if (fwrite(text, 1, len, sp->F) != len) + syswarn("%s cannot write", sp->Name); + + /* Bump line count; see if time to close or flush. */ + if (CloseEvery && ++(sp->CloseLines) >= CloseEvery) { + SITEflush(sp); + return; + } + if (CloseSeconds && sp->LastClosed + CloseSeconds < Now.time) { + SITEflush(sp); + return; + } + if (FlushEvery && ++(sp->FlushLines) >= FlushEvery) { + if (fflush(sp->F) == EOF || ferror(sp->F)) + syswarn("%s cannot flush %s", sp->Name, sp->Filename); + sp->LastFlushed = Now.time; + sp->FlushLines = 0; + } + else if (FlushSeconds && sp->LastFlushed + FlushSeconds < Now.time) { + if (fflush(sp->F) == EOF || ferror(sp->F)) + syswarn("%s cannot flush %s", sp->Name, sp->Filename); + sp->LastFlushed = Now.time; + sp->FlushLines = 0; + } +} + + +/* +** Handle a command message. +*/ +static void +Process(char *p) +{ + SITE *sp; + + if (*p == 'b' && strncmp(p, "begin", 5) == 0) + /* No-op. */ + return; + + if (*p == 'f' && strncmp(p, "flush", 5) == 0) { + for (p += 5; ISWHITE(*p); p++) + continue; + if (*p == '\0') + SITEflushall(); + else if ((sp = SITEfind(p, false)) != NULL) + SITEflush(sp); + /*else + fprintf(stderr, "buffchan flush %s unknown site\n", p);*/ + return; + } + + if (*p == 'd' && strncmp(p, "drop", 4) == 0) { + for (p += 4; ISWHITE(*p); p++) + continue; + if (*p == '\0') + SITEcloseall(); + else if ((sp = SITEfind(p, false)) == NULL) + warn("drop %s unknown site", p); + else { + SITEclose(sp); + sp->Dropped = true; + } + return; + } + + if (*p == 'r' && strncmp(p, "readmap", 7) == 0) { + MAPread(Map); + return; + } + + /* Other command messages -- ignored. */ + warn("unknown message %s", p); +} + + +/* +** Mark that we got a signal; let two signals kill us. 
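SITEwrite() above applies four independent policies after every line: reopen the site's file after -c lines or -C seconds, and flush it after -l lines or -L seconds, with zero disabling a policy. A compact sketch of that per-site bookkeeping (the struct and function names here are invented):

    #include <stdio.h>
    #include <time.h>

    struct policy {
        int close_every, close_secs;        /* -c / -C */
        int flush_every, flush_secs;        /* -l / -L */
        int close_lines, flush_lines;
        time_t last_closed, last_flushed;
    };

    /* Returns 'c' (close and reopen), 'f' (flush) or 0 after writing one line. */
    static int
    after_write(struct policy *p, time_t now)
    {
        if (p->close_every && ++p->close_lines >= p->close_every)
            return 'c';                     /* caller reopens, which resets counters */
        if (p->close_secs && p->last_closed + p->close_secs < now)
            return 'c';
        if ((p->flush_every && ++p->flush_lines >= p->flush_every)
            || (p->flush_secs && p->last_flushed + p->flush_secs < now)) {
            p->flush_lines = 0;
            p->last_flushed = now;
            return 'f';
        }
        return 0;
    }

    int
    main(void)
    {
        struct policy p = { 0, 0, 3, 0, 0, 0, 0, 0 };   /* flush every 3 lines */
        int line;

        p.last_closed = p.last_flushed = time(NULL);
        for (line = 1; line <= 7; line++)
            if (after_write(&p, time(NULL)) == 'f')
                printf("line %d: flush\n", line);
        return 0;
    }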
+*/ +static RETSIGTYPE +CATCHinterrupt(int s) +{ + GotInterrupt = true; + xsignal(s, SIG_DFL); +} + + +int +main(int ac, char *av[]) +{ + QIOSTATE *qp; + int i; + int Fields; + char *p; + char *next; + char *line; + char *Directory; + bool Redirect; + FILE *F; + char *ERRLOG; + + /* First thing, set up our identity. */ + message_program_name = "buffchan"; + + /* Set defaults. */ + if (!innconf_read(NULL)) + exit(1); + ERRLOG = concatpath(innconf->pathlog, _PATH_ERRLOG); + Directory = NULL; + Fields = 1; + Format = NULL; + Redirect = true; + GotInterrupt = false; + umask(NEWSUMASK); + + xsignal(SIGHUP, CATCHinterrupt); + xsignal(SIGINT, CATCHinterrupt); + xsignal(SIGQUIT, CATCHinterrupt); + xsignal(SIGPIPE, CATCHinterrupt); + xsignal(SIGTERM, CATCHinterrupt); + xsignal(SIGALRM, CATCHinterrupt); + + /* Parse JCL. */ + while ((i = getopt(ac, av, "bc:C:d:f:l:L:m:p:rs:u")) != EOF) + switch (i) { + default: + die("usage error"); + break; + case 'b': + case 'u': + BufferMode = i; + break; + case 'c': + CloseEvery = atoi(optarg); + break; + case 'C': + CloseSeconds = atoi(optarg); + break; + case 'd': + Directory = optarg; + if (Format == NULL) + Format =xstrdup("%s"); + break; + case 'f': + Fields = atoi(optarg); + break; + case 'l': + FlushEvery = atoi(optarg); + break; + case 'L': + FlushSeconds = atoi(optarg); + break; + case 'm': + Map = optarg; + MAPread(Map); + break; + case 'p': + if ((F = fopen(optarg, "w")) == NULL) + sysdie("cannot fopen %s", optarg); + fprintf(F, "%ld\n", (long)getpid()); + if (ferror(F) || fclose(F) == EOF) + sysdie("cannot fclose %s", optarg); + break; + case 'r': + Redirect = false; + break; + case 's': + Format = optarg; + break; + } + ac -= optind; + av += optind; + if (ac) + die("usage error"); + + /* Do some basic set-ups. */ + if (Redirect) + freopen(ERRLOG, "a", stderr); + if (Format == NULL) { + Format = concatpath(innconf->pathoutgoing, "%s"); + } + if (Directory && chdir(Directory) < 0) + sysdie("cannot chdir to %s", Directory); + SITEsetup(); + + /* Read input. */ + for (qp = QIOfdopen((int)fileno(stdin)); !GotInterrupt ; ) { + if ((line = QIOread(qp)) == NULL) { + if (QIOerror(qp)) { + syswarn("cannot read"); + break; + } + if (QIOtoolong(qp)) { + warn("long line"); + QIOread(qp); + continue; + } + + /* Normal EOF. */ + break; + } + + /* Command? */ + if (*line == EXP_CONTROL && *++line != EXP_CONTROL) { + Process(line); + continue; + } + + /* Skip the right number of leading fields. */ + for (i = Fields, p = line; *p; p++) + if (*p == ' ' && --i <= 0) + break; + if (*p == '\0') + /* Nothing to write. Probably shouldn't happen. */ + continue; + + /* Add a newline, get the length of all leading fields. */ + *p++ = '\n'; + i = p - line; + + if (GetTimeInfo(&Now) < 0) { + syswarn("cannot get time"); + break; + } + + /* Rest of the line is space-separated list of filenames. */ + for (; *p; p = next) { + /* Skip whitespace, get next word. */ + while (*p == ' ') + p++; + for (next = p; *next && *next != ' '; next++) + continue; + if (*next) + *next++ = '\0'; + + SITEwrite(p, line, i); + } + + } + + SITEcloseall(); + exit(0); + /* NOTREACHED */ +} diff --git a/backends/crosspost.c b/backends/crosspost.c new file mode 100644 index 0000000..bef4fb2 --- /dev/null +++ b/backends/crosspost.c @@ -0,0 +1,338 @@ +/* $Id: crosspost.c 6135 2003-01-19 01:15:40Z rra $ +** +** Parse input to add links for cross posted articles. Input format is one +** line per article. Dots '.' are changed to '/'. Commas ',' or blanks +** ' ' separate entries. 
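+** (For example, the input line "comp.lang.c/123 news.misc/456" makes
+** news/misc/456 a link to the already-stored comp/lang/c/123.)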
Typically this is via a channel feed from innd +** though an edit of the history file can also be used for recovery +** purposes. Sample newsfeeds entry: +** +** # Create the links for cross posted articles +** crosspost:*:Tc,Ap,WR:/usr/local/newsbin/crosspost +** +** WARNING: This no longer works with the current INN; don't use it +** currently. It still exists in the source tree in case someone will +** want to clean it up and make it useable again. +*/ + +#include "config.h" +#include "clibrary.h" +#include +#include +#include +#include + +#include "inn/qio.h" +#include "libinn.h" +#include "paths.h" + + +static char *Dir; + +static int debug = false; +static int syncfiles = true; + +static unsigned long STATTime = 0; +static unsigned long STATMissing = 0; /* Source file missing */ +static unsigned long STATTooLong = 0; /* Name Too Long (src or dest) */ +static unsigned long STATLink = 0; /* Link done */ +static unsigned long STATLError = 0; /* Link problem */ +static unsigned long STATSymlink = 0; /* Symlink done */ +static unsigned long STATSLError = 0; /* Symlink problem */ +static unsigned long STATMkdir = 0; /* Mkdir done */ +static unsigned long STATMdError = 0; /* Mkdir problem */ +static unsigned long STATOError = 0; /* Other errors */ + +#define MAXXPOST 256 +#define STATREFRESH 10800 /* 3 hours */ + +/* +** Write some statistics and reset all counters. +*/ +void +ProcessStats() +{ + time_t Time; + + Time = time (NULL); + syslog(L_NOTICE, + "seconds %lu links %lu %lu symlinks %lu %lu mkdirs %lu %lu missing %lu toolong %lu other %lu", + Time - STATTime, STATLink, STATLError, STATSymlink, STATSLError, + STATMkdir, STATMdError, STATMissing, STATTooLong, STATOError); + + STATMissing = STATTooLong = STATLink = STATLError = 0; + STATSymlink = STATSLError = STATMkdir = STATMdError = STATOError = 0; + STATTime = Time; +} + +/* +** Try to make one directory. Return false on error. +*/ +static bool +MakeDir(Name) + char *Name; +{ + struct stat Sb; + + if (mkdir(Name, GROUPDIR_MODE) >= 0) { + STATMkdir++; + return true; + } + + /* See if it failed because it already exists. */ + return stat(Name, &Sb) >= 0 && S_ISDIR(Sb.st_mode); +} + + +/* +** Make spool directory. Return false on error. +*/ +static bool +MakeSpoolDir(Name) + char *Name; +{ + char *p; + bool made; + + /* Optimize common case -- parent almost always exists. */ + if (MakeDir(Name)) + return true; + + /* Try to make each of comp and comp/foo in turn. */ + for (p = Name; *p; p++) + if (*p == '/') { + *p = '\0'; + made = MakeDir(Name); + *p = '/'; + if (!made) { + STATMdError++; + return false; + } + } + + return MakeDir(Name); +} + + +/* +** Process the input. Data can come from innd: +** news/group/name/ [space news/group/]... +** or +** news.group.name/,[news.group.name/]... +*/ +static void +ProcessIncoming(qp) + QIOSTATE *qp; +{ + char *p; + int i; + int nxp; + int fd; + int lnval ; + char *names[MAXXPOST]; + + + for ( ; ; ) { + + if (time(NULL) - STATTime > STATREFRESH) + ProcessStats(); + + /* Read the first line of data. 
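+           QIOread() returns NULL at end of file, on error, and when a line
+           overflows the QIO buffer; QIOtoolong() identifies the over-long
+           case so it can be counted and skipped.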
*/ + if ((p = QIOread(qp)) == NULL) { + if (QIOtoolong(qp)) { + fprintf(stderr, "crosspost line too long\n"); + STATTooLong++; + continue; + } + break; + } + + for (i = 0; *p && (i < MAXXPOST); i++) { /* parse input into array */ + names[i] = p; + for ( ; *p; p++) { + if (*p == '.') *p++ = '/'; /* dot to slash translation */ + else if ((*p == ',') /* name separators */ + || (*p == ' ') + || (*p == '\t') + || (*p == '\n')) { + *p++ = '\0'; + break; + } + } + } + nxp = i; + if (debug) { + for (i = 0; i < nxp; i++) + fprintf(stderr, "crosspost: debug %d %s\n", + i, names[i]); + } + + if(syncfiles) fd = open(names[0], O_RDWR); + + for (i = 1; i < nxp; i++) { + lnval = link(names[0], names[i]) ; + if (lnval == 0) STATLink++; + if (lnval < 0 && errno != EXDEV) { /* first try to link */ + int j; + char path[SPOOLNAMEBUFF+2]; + + for (j = 0; (path[j] = names[i][j]) != '\0' ; j++) ; + for (j--; (j > 0) && (path[j] != '/'); j--) ; + if (path[j] == '/') { + path[j] = '\0'; + /* try making parent dir */ + if (MakeSpoolDir(path) == false) { + fprintf(stderr, "crosspost cant mkdir %s\n", + path); + } + else { + /* 2nd try to link */ + lnval = link(names[0], names[i]) ; + if (lnval == 0) STATLink++; + if (lnval < 0 && errno == EXDEV) { +#if !defined(HAVE_SYMLINK) + fprintf(stderr, "crosspost cant link %s %s", + names[0], names[i]); + perror(" "); +#else + /* Try to make a symbolic link + ** to the full pathname. + */ + for (j = 0, p = Dir; (j < SPOOLNAMEBUFF) && *p; ) + path[j++] = *p++; /* copy spool dir */ + if (j < SPOOLNAMEBUFF) path[j++] = '/'; + for (p = names[0]; (j < SPOOLNAMEBUFF) && *p; ) + path[j++] = *p++; /* append path */ + path[j++] = '\0'; + if (symlink(path, names[i]) < 0) { + fprintf(stderr, + "crosspost cant symlink %s %s", + path, names[i]); + perror(" "); + STATSLError++; + } + else + STATSymlink++; +#endif /* !defined(HAVE_SYMLINK) */ + } else if (lnval < 0) { + if (lnval == ENOENT) + STATMissing++; + else { + fprintf(stderr, "crosspost cant link %s %s", + names[0], names[i]); + perror(" "); + STATLError++; + } + } + } + } else { + fprintf(stderr, "crosspost bad path %s\n", + names[i]); + STATOError++; + } + } else if (lnval < 0) { + int j; + char path[SPOOLNAMEBUFF+2]; + +#if !defined(HAVE_SYMLINK) + fprintf(stderr, "crosspost cant link %s %s", + names[0], names[i]); + perror(" "); +#else + /* Try to make a symbolic link + ** to the full pathname. + */ + for (j = 0, p = Dir; (j < SPOOLNAMEBUFF) && *p; ) + path[j++] = *p++; /* copy spool dir */ + if (j < SPOOLNAMEBUFF) path[j++] = '/'; + for (p = names[0]; (j < SPOOLNAMEBUFF) && *p; ) + path[j++] = *p++; /* append path */ + path[j++] = '\0'; + if (symlink(path, names[i]) < 0) { + fprintf(stderr, + "crosspost cant symlink %s %s", + path, names[i]); + perror(" "); + STATSLError++; + } + else + STATSymlink++; +#endif /* !defined(HAVE_SYMLINK) */ + } + } + + if (syncfiles && (fd >= 0)) { + fsync(fd); + close(fd); + } + } + + if (QIOerror(qp)) + fprintf(stderr, "crosspost cant read %s\n", strerror(errno)); + QIOclose(qp); +} + + +static void +Usage(void) +{ + fprintf(stderr, "usage: crosspost [-D dir] [files...]\n"); + exit(1); +} + + +int +main(ac, av) + int ac; + char *av[]; +{ + int i; + QIOSTATE *qp; + + /* Set defaults. */ + if (ReadInnConf() < 0) exit(1); + Dir = innconf->patharticles; + umask(NEWSUMASK); + + /* Parse JCL. 
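+       -D names the spool directory to link in, -d enables debug output on
+       stderr, and -s turns off the fsync() of the source article after its
+       links have been made.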
*/ + while ((i = getopt(ac, av, "D:ds")) != EOF) + switch (i) { + default: + Usage(); + /* NOTREACHED */ + case 'D': + Dir = optarg; /* specify spool path */ + break; + case 'd': + debug = true; + break; + case 's': + syncfiles = false; /* do not fsync articles */ + break; + } + ac -= optind; + av += optind; + + if (chdir(Dir) < 0) { + fprintf(stderr, "crosspost cant chdir %s %s\n", + Dir, strerror(errno)); + exit(1); + } + openlog("crosspost", L_OPENLOG_FLAGS | LOG_PID, LOG_INN_PROG); + STATTime = time (NULL); + if (ac == 0) + ProcessIncoming(QIOfdopen(STDIN_FILENO)); + else { + for ( ; *av; av++) + if (strcmp(*av, "-") == 0) + ProcessIncoming(QIOfdopen(STDIN_FILENO)); + else if ((qp = QIOopen(*av)) == NULL) + fprintf(stderr, "crosspost cant open %s %s\n", + *av, strerror(errno)); + else + ProcessIncoming(qp); + } + + ProcessStats(); + exit(0); + /* NOTREACHED */ +} diff --git a/backends/cvtbatch.c b/backends/cvtbatch.c new file mode 100644 index 0000000..ba57ef8 --- /dev/null +++ b/backends/cvtbatch.c @@ -0,0 +1,128 @@ +/* $Id: cvtbatch.c 6135 2003-01-19 01:15:40Z rra $ +** +** Read file list on standard input and spew out batchfiles. +*/ + +#include "config.h" +#include "clibrary.h" + +#include "inn/innconf.h" +#include "inn/messages.h" +#include "inn/qio.h" +#include "inn/wire.h" +#include "libinn.h" +#include "paths.h" +#include "storage.h" + + +int +main(int ac, char *av[]) { + int i; + QIOSTATE *qp; + char *line; + const char *text; + char *format; + char *p, *q; + const char *r; + bool Dirty; + TOKEN token; + ARTHANDLE *art; + int len; + + /* First thing, set up our identity. */ + message_program_name = "cvtbatch"; + if (!innconf_read(NULL)) + exit(1); + + /* Parse JCL. */ + format = xstrdup("nm"); + while ((i = getopt(ac, av, "w:")) != EOF) + switch (i) { + default: + die("usage error"); + break; + case 'w': + for (p = format = optarg; *p; p++) { + switch (*p) { + case FEED_BYTESIZE: + case FEED_FULLNAME: + case FEED_MESSAGEID: + case FEED_NAME: + continue; + } + warn("ignoring %c in -w flag", *p); + } + } + ac -= optind; + av += optind; + if (ac) + die("usage error"); + + if (!SMinit()) + die("cannot initialize storage manager: %s", SMerrorstr); + + /* Loop over all input. */ + qp = QIOfdopen((int)fileno(stdin)); + while ((line = QIOread(qp)) != NULL) { + for (p = line; *p; p++) + if (ISWHITE(*p)) { + *p = '\0'; + break; + } + + if (!IsToken(line)) + continue; + token = TextToToken(line); + if ((art = SMretrieve(token, RETR_HEAD)) == NULL) + continue; + text = wire_findheader(art->data, art->len, "Message-ID"); + if (text == NULL) { + SMfreearticle(art); + continue; + } + len = art->len; + for (r = text; r < art->data + art->len; r++) { + if (*r == '\r' || *r == '\n') + break; + } + if (r == art->data + art->len) { + SMfreearticle(art); + continue; + } + q = xmalloc(r - text + 1); + memcpy(q, text, r - text); + SMfreearticle(art); + q[r - text] = '\0'; + + /* Write the desired info. 
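+       One space-separated field is written per -w format character: the
+       token from the input line for the name forms, the byte count, or the
+       Message-ID extracted above; a newline ends the record once anything
+       has been written.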
*/ + for (Dirty = false, p = format; *p; p++) { + switch (*p) { + default: + continue; + case FEED_BYTESIZE: + if (Dirty) + putchar(' '); + printf("%d", len); + break; + case FEED_FULLNAME: + case FEED_NAME: + if (Dirty) + putchar(' '); + printf("%s", line); + break; + case FEED_MESSAGEID: + if (Dirty) + putchar(' '); + printf("%s", q); + break; + } + Dirty = true; + } + free(q); + if (Dirty) + putchar('\n'); + } + + exit(0); + /* NOTREACHED */ +} diff --git a/backends/filechan.c b/backends/filechan.c new file mode 100644 index 0000000..93e81a8 --- /dev/null +++ b/backends/filechan.c @@ -0,0 +1,132 @@ +/* $Id: filechan.c 6135 2003-01-19 01:15:40Z rra $ +** +** An InterNetNews channel process that splits a funnel entry into +** separate files. Originally from Robert Elz . +*/ + +#include "config.h" +#include "clibrary.h" +#include +#include +#include + +#include "inn/innconf.h" +#include "inn/messages.h" +#include "libinn.h" +#include "paths.h" + +#include "map.h" + +int +main(int ac, char *av[]) +{ + char buff[2048]; + char *p; + char *next; + int i; + int fd; + int Fields; + const char *Directory; + bool Map; + FILE *F; + struct stat Sb; + uid_t uid; + gid_t gid; + uid_t myuid; + + /* First thing, set up our identity. */ + message_program_name = "filechan"; + + /* Set defaults. */ + if (!innconf_read(NULL)) + exit(1); + Fields = 1; + Directory = innconf->pathoutgoing; + Map = false; + myuid = geteuid(); + umask(NEWSUMASK); + + /* Parse JCL. */ + while ((i = getopt(ac, av, "d:f:m:p:")) != EOF) + switch (i) { + default: + die("usage error"); + break; + case 'd': + Directory = optarg; + break; + case 'f': + Fields = atoi(optarg); + break; + case 'm': + Map = true; + MAPread(optarg); + break; + case 'p': + if ((F = fopen(optarg, "w")) == NULL) + sysdie("cannot fopen %s", optarg); + fprintf(F, "%ld\n", (long)getpid()); + if (ferror(F) || fclose(F) == EOF) + sysdie("cannot fclose %s", optarg); + break; + } + + /* Move, and get owner of current directory. */ + if (chdir(Directory) < 0) + sysdie("cannot chdir to %s", Directory); + if (stat(".", &Sb) < 0) + sysdie("cannot stat %s", Directory); + uid = Sb.st_uid; + gid = Sb.st_gid; + + /* Read input. */ + while (fgets(buff, sizeof buff, stdin) != NULL) { + if ((p = strchr(buff, '\n')) != NULL) + *p = '\0'; + + /* Skip the right number of leading fields. */ + for (i = Fields, p = buff; *p; p++) + if (*p == ' ' && --i <= 0) + break; + if (*p == '\0') + /* Nothing to write. Probably shouldn't happen. */ + continue; + + /* Add a newline, get the length of all leading fields. */ + *p++ = '\n'; + i = p - buff; + + /* Rest of the line is space-separated list of filenames. */ + for (; *p; p = next) { + /* Skip whitespace, get next word. */ + while (*p == ' ') + p++; + for (next = p; *next && *next != ' '; next++) + continue; + if (*next) + *next++ = '\0'; + + if (Map) + p = MAPname(p); + fd = open(p, O_CREAT | O_WRONLY | O_APPEND, BATCHFILE_MODE); + if (fd >= 0) { + /* Try to lock it and set the ownership right. */ + inn_lock_file(fd, INN_LOCK_WRITE, true); + if (myuid == 0 && uid != 0) + chown(p, uid, gid); + + /* Just in case, seek to the end. 
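+               The file was opened with O_APPEND, which should already send
+               every write to the end of the batch file; the explicit
+               lseek() is purely defensive.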
*/ + lseek(fd, 0, SEEK_END); + + errno = 0; + if (write(fd, buff, i) != i) + sysdie("write failed"); + + close(fd); + } + } + } + + exit(0); + /* NOTREACHED */ +} diff --git a/backends/inndf.c b/backends/inndf.c new file mode 100644 index 0000000..6a85c86 --- /dev/null +++ b/backends/inndf.c @@ -0,0 +1,335 @@ +/* $Id: inndf.c 6677 2004-03-03 18:36:07Z hkehoe $ +** +** Reports free kilobytes (not disk blocks) or free inodes. +** +** Written by Ian Dickinson +** Wed Jul 26 10:11:38 BST 1995 (My birthday - 27 today!) +** +** inndf is a replacement for 'df | awk' in innwatch.ctl and for reporting +** free space in other INN scripts. It doesn't sync, it forks less, and +** it's generally less complicated. +** +** Usage: inndf [-i] [ ...] +** inndf -n +** inndf -o +** +** Compile with -lserver (ie. /usr/lib/libserver.a) if you run Sun's Online +** DiskSuite under SunOS 4.x. The wrapper functions there make the system +** call transparent; they copy the f_spare values to the correct spots, so +** f_blocks, f_bfree, f_bavail can exceed 2GB. +** +** Compile with -DHAVE_STATVFS for these systems: +** System V Release 4.x +** Solaris 2.x +** HP-UX 10.x +** OSF1 +** +** Compile with -DHAVE_STATFS for these systems: +** SunOS 4.x/Solaris 1.x +** HP-UX 9.x +** Linux +** NeXTstep 3.x +** +** (Or even better, let autoconf take care of it.) +** +** Thanks to these folks for bug fixes and porting information: +** Mahesh Ramachandran +** Chuck Swiger +** Sang-yong Suh +** Swa Frantzen +** Brad Dickey +** Taso N. Devetzis +** Wei-Yeh Lee +** Jeff Garzik +*/ + +#include "config.h" +#include "clibrary.h" + +#include "inn/innconf.h" +#include "inn/messages.h" +#include "inn/qio.h" +#include "libinn.h" +#include "ov.h" +#include "paths.h" + +/* The portability mess. Hide everything in macros so that the actual code + is relatively clean. SysV uses statvfs, BSD uses statfs, and ULTRIX is + just weird (and isn't worth checking for in configure). + + df_declare declares a variable of the appropriate type to pass to df_stat + along with a path; df_stat will return true on success, false on failure. + df_avail gives the number of free blocks, the size of those blocks given + in df_bsize (which handles SysV's weird fragment vs. preferred block size + thing). df_inodes returns the free inodes. */ +#if HAVE_STATVFS +# include +# define df_stat(p, s) (statvfs((p), (s)) == 0) +# define df_declare(s) struct statvfs s +# define df_total(s) ((s).f_blocks) +# define df_avail(s) ((s).f_bavail) +# define df_scale(s) ((s).f_frsize == 0 ? (s).f_bsize : (s).f_frsize) +# define df_files(s) ((s).f_files) +# define df_favail(s) ((s).f_favail) +#elif HAVE_STATFS +# if HAVE_SYS_VFS_H +# include +# endif +# if HAVE_SYS_PARAM_H +# include +# endif +# if HAVE_SYS_MOUNT_H +# include +# endif +# ifdef __ultrix__ +# define df_stat(p, s) (statfs((p), (s)) >= 1) +# define df_declare(s) struct fs_data s +# define df_total(s) ((s).fd_btot) +# define df_avail(s) ((s).fd_bfreen) +# define df_scale(s) 1024 +# define df_files(s) ((s).fd_gtot) +# define df_favail(s) ((s).fd_gfree) +# else +# define df_stat(p, s) (statfs((p), (s)) == 0) +# define df_declare(s) struct statfs s +# define df_total(s) ((s).f_blocks) +# define df_avail(s) ((s).f_bavail) +# define df_scale(s) ((s).f_bsize) +# define df_files(s) ((s).f_files) +# define df_favail(s) ((s).f_ffree) +# endif +#else +# error "Platform not supported. Neither statvfs nor statfs available." 
+#endif + +static const char usage[] = "\ +Usage: inndf [-i] [-f filename] [-F] [ ...]\n\ + inndf -n\n\ + inndf -o\n\ +\n\ +The first form gives the free space in kilobytes (or the count of free\n\ +inodes if -i is given) in the file systems given by the arguments. If\n\ +-f is given, the corresponding file should be a list of directories to\n\ +check in addition to the arguments. -F uses /filesystems as the\n\ +file and is otherwise the same.\n\ +\n\ +The second form gives the total count of overview records stored. The\n\ +third form gives the percentage space allocated to overview that's been\n\ +used (if the overview method used supports this query)."; + +/* +** Given a path, a flag saying whether to look at inodes instead of free +** disk space, and a flag saying whether to format in columns, print out +** the amount of free space or inodes on that file system. Returns the +** percentage free, which may be printed out by the caller. +*/ +static void +printspace(const char *path, bool inode, bool fancy) +{ + df_declare(info); + unsigned long amount; + double percent; + + if (df_stat(path, &info)) { + if (inode) { + amount = df_favail(info); + + /* This value is compared using the shell by innwatch, and some + shells can't cope with anything larger than the maximum value + of a signed long. ReiserFS returns 2^32 - 1, however, since it + has no concept of inodes. So cap the returned value at the max + value of a signed long. */ + if (amount > (1UL << 31) - 1) + amount = (1UL << 31) - 1; + + /* 2.6 kernels show 0 available and used inodes, instead. */ + if (amount == 0 && df_files(info) == 0) + amount = (1UL << 31) - 1; + } else { + /* Do the multiplication in floating point to try to retain + accuracy if the free space in bytes would overflow an + unsigned long. This should be safe until file systems larger + than 4TB (which may not be much longer -- we should use long + long instead if we have it). + + Be very careful about the order of casts here; it's too + easy to cast back into an unsigned long a value that + overflows, and one then gets silently wrong results. */ + amount = (unsigned long) + (((double) df_avail(info) * df_scale(info)) / 1024.0); + } + } else { + /* On error, free space is zero. */ + amount = 0; + } + printf(fancy ? "%10lu" : "%lu", amount); + if (fancy) { + printf(inode ? 
" inodes available " : " Kbytes available "); + if (inode) + percent = 100 * ((double) df_favail(info) / df_files(info)); + else + percent = 100 * ((double) df_avail(info) / df_total(info)); + if (percent < 9.95) + printf(" (%3.1f%%)", percent); + else if (percent < 99.95) + printf(" (%4.1f%%)", percent); + else + printf("(%5.1f%%)", percent); + } +} + +static void +printspace_formatted(const char *path, bool inode) +{ + printf("%-40s ", path); + printspace(path, inode, true); + printf("\n"); +} + +static char * +readline(QIOSTATE *qp) +{ + char *line, *p; + + for (line = QIOread(qp); line != NULL; line = QIOread(qp)) { + p = strchr(line, '#'); + if (p != NULL) + *p = '\0'; + for (; *line == ' ' || *line == '\t'; line++) + ; + if (*line != '\0') { + for (p = line; *p != '\0' && *p != ' ' && *p != '\t'; p++) + ; + *p = '\0'; + return line; + } + } + return NULL; +} + +int +main(int argc, char *argv[]) +{ + int option, i, count; + unsigned long total; + QIOSTATE *qp; + char *active, *group, *line, *p; + char *file = NULL; + bool inode = false; + bool overview = false; + bool ovcount = false; + bool use_filesystems = false; + + while ((option = getopt(argc, argv, "hinof:F")) != EOF) { + switch (option) { + default: + die(usage); + case 'h': + printf("%s\n", usage); + exit(0); + case 'i': + inode = true; + break; + case 'n': + ovcount = true; + break; + case 'o': + overview = true; + break; + case 'f': + if (file != NULL) + die("inndf: Only one of -f or -F may be given"); + file = xstrdup(optarg); + break; + case 'F': + if (file != NULL) + die("inndf: Only one of -f or -F may be given"); + if (!innconf_read(NULL)) + exit(1); + file = concatpath(innconf->pathetc, INN_PATH_FILESYSTEMS); + use_filesystems = true; + break; + } + } + argc -= optind; + argv += optind; + + if (argc == 0 && !overview && !ovcount && file == NULL) + die(usage); + + /* Set the program name now rather than earlier so that it doesn't get + prepended to usage messages. */ + message_program_name = "inndf"; + + /* If directories were specified, get statistics about them. If only + one was given, just print out the number without the path or any + explanatory text; this mode is used by e.g. innwatch. Otherwise, + format things nicely. */ + if (argc == 1 && !overview && !ovcount && file == NULL) { + printspace(argv[0], inode, false); + printf("\n"); + } else { + for (i = 0; i < argc; i++) + printspace_formatted(argv[i], inode); + if (file != NULL) { + qp = QIOopen(file); + if (qp == NULL) { + if (!use_filesystems) + sysdie("can't open %s", file); + } else { + line = readline(qp); + while (line != NULL) { + printspace_formatted(line, inode); + line = readline(qp); + } + QIOclose(qp); + } + free(file); + } + } + + /* If we're going to be getting information from overview, do the icky + initialization stuff. */ + if (overview || ovcount) { + if (!use_filesystems) + if (!innconf_read(NULL)) + exit(1); + if (!OVopen(OV_READ)) + die("OVopen failed"); + } + + /* For the count, we have to troll through the active file and query the + overview backend for each group. 
*/ + if (ovcount) { + active = concatpath(innconf->pathdb, _PATH_ACTIVE); + qp = QIOopen(active); + if (qp == NULL) + sysdie("can't open %s", active); + + total = 0; + group = QIOread(qp); + while (group != NULL) { + p = strchr(group, ' '); + if (p != NULL) + *p = '\0'; + if (OVgroupstats(group, NULL, NULL, &count, NULL)) + total += count; + group = QIOread(qp); + } + QIOclose(qp); + printf("%lu overview records stored\n", total); + } + + /* Percentage used is simpler, but only some overview methods understand + that query. */ + if (overview) { + if (OVctl(OVSPACE, &count)) { + if (count == -1) + printf("Space used is meaningless for the %s method\n", + innconf->ovmethod); + else + printf("%d%% overview space used\n", count); + } + } + exit(0); +} diff --git a/backends/innxbatch.c b/backends/innxbatch.c new file mode 100644 index 0000000..9ebfe9c --- /dev/null +++ b/backends/innxbatch.c @@ -0,0 +1,550 @@ +/* $Id: innxbatch.c 6351 2003-05-19 02:00:06Z rra $ +** +** Transmit batches to remote site, using the XBATCH command +** Modelled after innxmit.c and nntpbatch.c +** +** Invocation: +** innxbatch [options] ... +#ifdef FROMSTDIN +** innxbatch -i +#endif FROMSTDIN +** will connect to serverhost's nntp port, and transfer the named files, +** with an xbatch command for every file. Files that have been sent +** successfully are unlink()ed. In case of any error, innxbatch terminates +** and leaves any remaining files untouched, for later transmission. +** Options: +** -D increase debug level +** -v report statistics to stdout +#ifdef FROMSTDIN +** -i read batch file names from stdin instead from command line. +** For each successfully transmitted batch, an OK is printed on +** stdout, to indicate that another file name is expected. +#endif +** -t Timeout for connection attempt +** -T Timeout for batch transfers. +** We do not use any file locking. At worst, a batch could be transmitted +** twice in parallel by two independant invocations of innxbatch. +** To prevent this, innxbatch should be invoked by a shell script that uses +** shlock(1) to achieve mutual exclusion. +*/ + +#include "config.h" +#include "clibrary.h" +#include "portable/socket.h" +#include "portable/time.h" +#include +#include +#include +#include +#include +#include +#include + +/* Needed on AIX 4.1 to get fd_set and friends. */ +#ifdef HAVE_SYS_SELECT_H +# include +#endif + +#include "inn/innconf.h" +#include "inn/messages.h" +#include "inn/timer.h" +#include "libinn.h" +#include "nntp.h" + +/* +** Syslog formats - collected together so they remain consistent +*/ +static char STAT1[] = + "%s stats offered %lu accepted %lu refused %lu rejected %lu"; +static char STAT2[] = "%s times user %.3f system %.3f elapsed %.3f"; +static char CANT_CONNECT[] = "%s connect failed %s"; +static char CANT_AUTHENTICATE[] = "%s authenticate failed %s"; +static char XBATCH_FAIL[] = "%s xbatch failed %s"; +static char UNKNOWN_REPLY[] = "Unknown reply after sending batch -- %s"; +static char CANNOT_UNLINK[] = "cannot unlink %s: %m"; +/* +** Global variables. +*/ +static bool Debug = 0; +static bool STATprint; +static char *REMhost; +static double STATbegin; +static double STATend; +static char *XBATCHname; +static int FromServer; +static int ToServer; +static sig_atomic_t GotAlarm; +static sig_atomic_t GotInterrupt; +static sig_atomic_t JMPyes; +static jmp_buf JMPwhere; +static unsigned long STATaccepted; +static unsigned long STAToffered; +static unsigned long STATrefused; +static unsigned long STATrejected; + +/* +** Send a line to the server. 
\r\n will be appended +*/ +static bool +REMwrite(int fd, char *p) +{ + int i; + int err; + char *dest; + static char buff[NNTP_STRLEN]; + + for (dest = buff, i = 0; p[i]; ) *dest++ = p[i++]; + *dest++ = '\r'; + *dest++ = '\n'; + *dest++ = '\0'; + + for (dest = buff, i+=2; i; dest += err, i -= err) { + err = write(fd, dest, i); + if (err < 0) { + syswarn("cannot write %s to %s", dest, REMhost); + return false; + } + } + if (Debug) + fprintf(stderr, "> %s\n", p); + + return true; +} + +/* +** Print transfer statistics, clean up, and exit. +*/ +static void +ExitWithStats(int x) +{ + static char QUIT[] = "quit"; + double usertime; + double systime; + + REMwrite(ToServer, QUIT); + + STATend = TMRnow_double(); + if (GetResourceUsage(&usertime, &systime) < 0) { + usertime = 0; + systime = 0; + } + + if (STATprint) { + printf(STAT1, + REMhost, STAToffered, STATaccepted, STATrefused, STATrejected); + printf("\n"); + printf(STAT2, REMhost, usertime, systime, STATend - STATbegin); + printf("\n"); + } + + syslog(L_NOTICE, STAT1, + REMhost, STAToffered, STATaccepted, STATrefused, STATrejected); + syslog(L_NOTICE, STAT2, REMhost, usertime, systime, STATend - STATbegin); + + exit(x); + /* NOTREACHED */ +} + + +/* +** Clean up the NNTP escapes from a line. +*/ +static char * +REMclean(char *buff) +{ + char *p; + + if ((p = strchr(buff, '\r')) != NULL) + *p = '\0'; + if ((p = strchr(buff, '\n')) != NULL) + *p = '\0'; + + /* The dot-escape is only in text, not command responses. */ + return buff; +} + + +/* +** Read a line of input, with timeout. We expect only answer lines, so +** we ignore \r\n-->\n mapping and the dot escape. +** Return true if okay, *or we got interrupted.* +*/ +static bool +REMread(char *start, int size) +{ + char *p, *h; + struct timeval t; + fd_set rmask; + int i; + + for (p = start; size; ) { + FD_ZERO(&rmask); + FD_SET(FromServer, &rmask); + t.tv_sec = 10 * 60; + t.tv_usec = 0; + i = select(FromServer + 1, &rmask, NULL, NULL, &t); + if (GotInterrupt) return true; + if (i < 0) { + if (errno == EINTR) continue; + else return false; + } + if (i == 0 || !FD_ISSET(FromServer, &rmask)) return false; + i = read(FromServer, p, size-1); + if (GotInterrupt) return true; + if (i <= 0) return false; + h = p; + p += i; + size -= i; + for ( ; h < p; h++) { + if (h > start && '\n' == *h && '\r' == h[-1]) { + *h = h[-1] = '\0'; + size = 0; + } + } + } + + if (Debug) + fprintf(stderr, "< %s\n", start); + + return true; +} + + +/* +** Handle the interrupt. +*/ +static void +Interrupted(void) +{ + warn("interrupted"); + ExitWithStats(1); +} + + +/* +** Send a whole xbatch to the server. Uses the global variables +** REMbuffer & friends +*/ +static bool +REMsendxbatch(int fd, char *buf, int size) +{ + char *p; + int i; + int err; + + for (i = size, p = buf; i; p += err, i -= err) { + err = write(fd, p, i); + if (err < 0) { + syswarn("cannot write xbatch to %s", REMhost); + return false; + } + } + if (GotInterrupt) Interrupted(); + if (Debug) + fprintf(stderr, "> [%d bytes of xbatch]\n", size); + + /* What did the remote site say? */ + if (!REMread(buf, size)) { + syswarn("no reply after sending xbatch"); + return false; + } + if (GotInterrupt) Interrupted(); + + /* Parse the reply. 
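+       NNTP_OK_XBATCHED means the remote accepted the batch, so the file is
+       unlinked; NNTP_RESENDIT and NNTP_GOODBYE count it as rejected and
+       leave the file on disk for a later run; anything else is logged as an
+       unknown reply.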
*/ + switch (atoi(buf)) { + default: + warn("unknown reply after sending batch -- %s", buf); + syslog(L_ERROR, UNKNOWN_REPLY, buf); + return false; + /* NOTREACHED */ + break; + case NNTP_RESENDIT_VAL: + case NNTP_GOODBYE_VAL: + syslog(L_NOTICE, XBATCH_FAIL, REMhost, buf); + STATrejected++; + return false; + /* NOTREACHED */ + break; + case NNTP_OK_XBATCHED_VAL: + STATaccepted++; + if (Debug) fprintf(stderr, "will unlink(%s)\n", XBATCHname); + if (unlink(XBATCHname)) { + /* probably another incarantion was faster, so avoid further duplicate + * work + */ + syswarn("cannot unlink %s", XBATCHname); + syslog(L_NOTICE, CANNOT_UNLINK, XBATCHname); + return false; + } + break; + } + + /* Article sent */ + return true; +} + +/* +** Mark that we got interrupted. +*/ +static RETSIGTYPE +CATCHinterrupt(int s) +{ + GotInterrupt = true; + + /* Let two interrupts kill us. */ + xsignal(s, SIG_DFL); +} + + +/* +** Mark that the alarm went off. +*/ +/* ARGSUSED0 */ +static RETSIGTYPE +CATCHalarm(int s UNUSED) +{ + GotAlarm = true; + if (JMPyes) + longjmp(JMPwhere, 1); +} + + +/* +** Print a usage message and exit. +*/ +static void +Usage(void) +{ + warn("Usage: innxbatch [-Dv] [-t#] [-T#] host file ..."); +#ifdef FROMSTDIN + warn(" innxbatch [-Dv] [-t#] [-T#] -i host"); +#endif + exit(1); +} + + +int +main(int ac, char *av[]) +{ + int i; + char *p; + FILE *From; + FILE *To; + char buff[NNTP_STRLEN]; + RETSIGTYPE (*old)(int) = NULL; + unsigned int ConnectTimeout; + unsigned int TotalTimeout; + struct stat statbuf; + int fd; + int err; + char *XBATCHbuffer = NULL; + int XBATCHbuffersize = 0; + int XBATCHsize; + + openlog("innxbatch", L_OPENLOG_FLAGS | LOG_PID, LOG_INN_PROG); + message_program_name = "innxbatch"; + + /* Set defaults. */ + if (!innconf_read(NULL)) + exit(1); + ConnectTimeout = 0; + TotalTimeout = 0; + umask(NEWSUMASK); + + /* Parse JCL. */ + while ((i = getopt(ac, av, "Dit:T:v")) != EOF) + switch (i) { + default: + Usage(); + /* NOTREACHED */ + break; + case 'D': + Debug++; + break; +#ifdef FROMSTDIN + case 'i': + FromStdin = true; + break; +#endif + case 't': + ConnectTimeout = atoi(optarg); + break; + case 'T': + TotalTimeout = atoi(optarg); + break; + case 'v': + STATprint = true; + break; + } + ac -= optind; + av += optind; + + /* Parse arguments; host and filename. */ + if (ac < 2) + Usage(); + REMhost = av[0]; + ac--; + av++; + + /* Open a connection to the remote server. */ + if (ConnectTimeout) { + GotAlarm = false; + old = xsignal(SIGALRM, CATCHalarm); + JMPyes = true; + if (setjmp(JMPwhere)) + die("cannot connect to %s: timed out", REMhost); + alarm(ConnectTimeout); + } + if (NNTPconnect(REMhost, NNTP_PORT, &From, &To, buff) < 0 || GotAlarm) { + i = errno; + warn("cannot connect to %s: %s", REMhost, + buff[0] ? REMclean(buff): strerror(errno)); + if (GotAlarm) + syslog(L_NOTICE, CANT_CONNECT, REMhost, "timeout"); + else + syslog(L_NOTICE, CANT_CONNECT, REMhost, + buff[0] ? REMclean(buff) : strerror(i)); + exit(1); + } + + if (Debug) + fprintf(stderr, "< %s\n", REMclean(buff)); + if (NNTPsendpassword(REMhost, From, To) < 0 || GotAlarm) { + i = errno; + syswarn("cannot authenticate with %s", REMhost); + syslog(L_ERROR, CANT_AUTHENTICATE, + REMhost, GotAlarm ? "timeout" : strerror(i)); + /* Don't send quit; we want the remote to print a message. */ + exit(1); + } + if (ConnectTimeout) { + alarm(0); + xsignal(SIGALRM, old); + JMPyes = false; + } + + /* We no longer need standard I/O. 
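+       From here on the connection is driven through the raw descriptors
+       with select(), read() and write(); the stdio handles were only needed
+       for NNTPconnect() and NNTPsendpassword().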
*/ + FromServer = fileno(From); + ToServer = fileno(To); + +#if defined(SOL_SOCKET) && defined(SO_SNDBUF) && defined(SO_RCVBUF) + i = 24 * 1024; + if (setsockopt(ToServer, SOL_SOCKET, SO_SNDBUF, (char *)&i, sizeof i) < 0) + perror("cant setsockopt(SNDBUF)"); + if (setsockopt(FromServer, SOL_SOCKET, SO_RCVBUF, (char *)&i, sizeof i) < 0) + perror("cant setsockopt(RCVBUF)"); +#endif /* defined(SOL_SOCKET) && defined(SO_SNDBUF) && defined(SO_RCVBUF) */ + + GotInterrupt = false; + GotAlarm = false; + + /* Set up signal handlers. */ + xsignal(SIGHUP, CATCHinterrupt); + xsignal(SIGINT, CATCHinterrupt); + xsignal(SIGTERM, CATCHinterrupt); + xsignal(SIGPIPE, SIG_IGN); + if (TotalTimeout) { + xsignal(SIGALRM, CATCHalarm); + alarm(TotalTimeout); + } + + /* Start timing. */ + STATbegin = TMRnow_double(); + + /* main loop over all specified files */ + for (XBATCHname = *av; ac && (XBATCHname = *av); av++, ac--) { + + if (Debug) fprintf(stderr, "will work on %s\n", XBATCHname); + + if (GotAlarm) { + warn("timed out"); + ExitWithStats(1); + } + if (GotInterrupt) Interrupted(); + + if ((fd = open(XBATCHname, O_RDONLY, 0)) < 0) { + syswarn("cannot open %s, skipping", XBATCHname); + continue; + } + + if (fstat(fd, &statbuf)) { + syswarn("cannot stat %s, skipping", XBATCHname); + close(i); + continue; + } + + XBATCHsize = statbuf.st_size; + if (XBATCHsize == 0) { + warn("batch file %s is zero length, skipping", XBATCHname); + close(i); + unlink(XBATCHname); + continue; + } else if (XBATCHsize > XBATCHbuffersize) { + XBATCHbuffersize = XBATCHsize; + if (XBATCHbuffer) free(XBATCHbuffer); + XBATCHbuffer = xmalloc(XBATCHsize); + } + + err = 0; /* stupid compiler */ + for (i = XBATCHsize, p = XBATCHbuffer; i; i -= err, p+= err) { + err = read(fd, p, i); + if (err < 0) { + syswarn("error reading %s, skipping", XBATCHname); + break; + } else if (0 == err) { + syswarn("unexpected EOF reading %s, truncated", XBATCHname); + XBATCHsize = p - XBATCHbuffer; + break; + } + } + close(fd); + if (err < 0) + continue; + + if (GotInterrupt) Interrupted(); + + /* Offer the xbatch. */ + snprintf(buff, sizeof(buff), "xbatch %d", XBATCHsize); + if (!REMwrite(ToServer, buff)) { + syswarn("cannot offer xbatch to %s", REMhost); + ExitWithStats(1); + } + STAToffered++; + if (GotInterrupt) Interrupted(); + + /* Does he want it? */ + if (!REMread(buff, (int)sizeof buff)) { + syswarn("no reply to XBATCH %d from %s", XBATCHsize, REMhost); + ExitWithStats(1); + } + if (GotInterrupt) Interrupted(); + + /* Parse the reply. */ + switch (atoi(buff)) { + default: + warn("unknown reply to %s -- %s", XBATCHname, buff); + ExitWithStats(1); + /* NOTREACHED */ + break; + case NNTP_RESENDIT_VAL: + case NNTP_GOODBYE_VAL: + /* Most likely out of space -- no point in continuing. */ + syslog(L_NOTICE, XBATCH_FAIL, REMhost, buff); + ExitWithStats(1); + /* NOTREACHED */ + case NNTP_CONT_XBATCH_VAL: + if (!REMsendxbatch(ToServer, XBATCHbuffer, XBATCHsize)) + ExitWithStats(1); + /* NOTREACHED */ + break; + case NNTP_SYNTAX_VAL: + case NNTP_BAD_COMMAND_VAL: + warn("server %s seems not to understand XBATCH: %s", REMhost, buff); + syslog(L_FATAL, XBATCH_FAIL, REMhost, buff); + break; + } + } + ExitWithStats(0); + /* NOTREACHED */ + return 0; +} diff --git a/backends/innxmit.c b/backends/innxmit.c new file mode 100644 index 0000000..475ce63 --- /dev/null +++ b/backends/innxmit.c @@ -0,0 +1,1457 @@ +/* $Id: innxmit.c 6716 2004-05-16 20:26:56Z rra $ +** +** Transmit articles to remote site. 
+** Modified for NNTP streaming: 1996-01-03 Jerry Aguirre +*/ + +#include "config.h" +#include "clibrary.h" +#include "portable/socket.h" +#include "portable/time.h" +#include +#include +#include +#include +#include +#include +#include +#include + +/* Needed on AIX 4.1 to get fd_set and friends. */ +#ifdef HAVE_SYS_SELECT_H +# include +#endif + +#include "inn/history.h" +#include "inn/innconf.h" +#include "inn/messages.h" +#include "inn/qio.h" +#include "inn/timer.h" +#include "inn/wire.h" +#include "libinn.h" +#include "nntp.h" +#include "paths.h" +#include "storage.h" + +#define OUTPUT_BUFFER_SIZE (16 * 1024) + +/* Streaming extensions to NNTP. This extension removes the lock-step +** limitation of conventional NNTP. Article transfer is several times +** faster. Negotiated and falls back to old mode if receiver refuses. +*/ + +/* max number of articles that can be streamed ahead */ +#define STNBUF 32 + +/* Send "takethis" without "check" if this many articles were +** accepted in a row. +*/ +#define STNC 16 + +/* typical number of articles to stream */ +/* must be able to fopen this many articles */ +#define STNBUFL (STNBUF/2) + +/* number of retries before requeueing to disk */ +#define STNRETRY 5 + +struct stbufs { /* for each article we are procesing */ + char *st_fname; /* file name */ + char *st_id; /* message ID */ + int st_retry; /* retry count */ + int st_age; /* age count */ + ARTHANDLE *art; /* arthandle to read article contents */ + int st_hash; /* hash value to speed searches */ + long st_size; /* article size */ +}; +static struct stbufs stbuf[STNBUF]; /* we keep track of this many articles */ +static int stnq; /* current number of active entries in stbuf */ +static long stnofail; /* Count of consecutive successful sends */ + +static int TryStream = true; /* Should attempt stream negotation? */ +static int CanStream = false; /* Result of stream negotation */ +static int DoCheck = true; /* Should check before takethis? */ +static char modestream[] = "mode stream"; +static char modeheadfeed[] = "mode headfeed"; +static long retries = 0; +static int logRejects = false ; /* syslog the 437 responses. */ + + + +/* +** Syslog formats - collected together so they remain consistent +*/ +static char STAT1[] = + "%s stats offered %lu accepted %lu refused %lu rejected %lu missing %lu accsize %.0f rejsize %.0f"; +static char STAT2[] = "%s times user %.3f system %.3f elapsed %.3f"; +static char GOT_BADCOMMAND[] = "%s rejected %s %s"; +static char REJECTED[] = "%s rejected %s (%s) %s"; +static char REJ_STREAM[] = "%s rejected (%s) %s"; +static char CANT_CONNECT[] = "%s connect failed %s"; +static char CANT_AUTHENTICATE[] = "%s authenticate failed %s"; +static char IHAVE_FAIL[] = "%s ihave failed %s"; + +static char CANT_FINDIT[] = "%s can't find %s"; +static char CANT_PARSEIT[] = "%s can't parse ID %s"; +static char UNEXPECTED[] = "%s unexpected response code %s"; + +/* +** Global variables. 
+*/ +static bool AlwaysRewrite; +static bool Debug; +static bool DoRequeue = true; +static bool Purging; +static bool STATprint; +static bool HeadersFeed; +static char *BATCHname; +static char *BATCHtemp; +static char *REMhost; +static double STATbegin; +static double STATend; +static FILE *BATCHfp; +static int FromServer; +static int ToServer; +static struct history *History; +static QIOSTATE *BATCHqp; +static sig_atomic_t GotAlarm; +static sig_atomic_t GotInterrupt; +static sig_atomic_t JMPyes; +static jmp_buf JMPwhere; +static char *REMbuffer; +static char *REMbuffptr; +static char *REMbuffend; +static unsigned long STATaccepted; +static unsigned long STAToffered; +static unsigned long STATrefused; +static unsigned long STATrejected; +static unsigned long STATmissing; +static double STATacceptedsize; +static double STATrejectedsize; + + +/* Prototypes. */ +static ARTHANDLE *article_open(const char *path, const char *id); +static void article_free(ARTHANDLE *); + + +/* +** Return true if the history file has the article expired. +*/ +static bool +Expired(char *MessageID) { + return !HISlookup(History, MessageID, NULL, NULL, NULL, NULL); +} + + +/* +** Flush and reset the site's output buffer. Return false on error. +*/ +static bool +REMflush(void) +{ + int i; + + if (REMbuffptr == REMbuffer) return true; /* nothing buffered */ + i = xwrite(ToServer, REMbuffer, (int)(REMbuffptr - REMbuffer)); + REMbuffptr = REMbuffer; + return i < 0 ? false : true; +} + +/* +** Return index to entry matching this message ID. Else return -1. +** The hash is to speed up the search. +** the protocol. +*/ +static int +stindex(char *MessageID, int hash) { + int i; + + for (i = 0; i < STNBUF; i++) { /* linear search for ID */ + if ((stbuf[i].st_id) && (stbuf[i].st_id[0]) + && (stbuf[i].st_hash == hash)) { + int n; + + if (strcasecmp(MessageID, stbuf[i].st_id)) continue; + + /* left of '@' is case sensitive */ + for (n = 0; (MessageID[n] != '@') && (MessageID[n] != '\0'); n++) ; + if (strncmp(MessageID, stbuf[i].st_id, n)) continue; + else break; /* found a match */ + } + } + if (i >= STNBUF) i = -1; /* no match found ? */ + return (i); +} + +/* stidhash(): calculate a hash value for message IDs to speed comparisons */ +static int +stidhash(char *MessageID) { + char *p; + int hash; + + hash = 0; + for (p = MessageID + 1; *p && (*p != '>'); p++) { + hash <<= 1; + if (isascii((int)*p) && isupper((int)*p)) { + hash += tolower(*p); + } else { + hash += *p; + } + } + return hash; +} + +/* stalloc(): save path, ID, and qp into one of the streaming mode entries */ +static int +stalloc(char *Article, char *MessageID, ARTHANDLE *art, int hash) { + int i; + + for (i = 0; i < STNBUF; i++) { + if ((!stbuf[i].st_fname) || (stbuf[i].st_fname[0] == '\0')) break; + } + if (i >= STNBUF) { /* stnq says not full but can not find unused */ + syslog(L_ERROR, "stalloc: Internal error"); + return (-1); + } + if ((int)strlen(Article) >= SPOOLNAMEBUFF) { + syslog(L_ERROR, "stalloc: filename longer than %d", SPOOLNAMEBUFF); + return (-1); + } + /* allocate buffers on first use. + ** If filename ever is longer than SPOOLNAMEBUFF then code will abort. + ** If ID is ever longer than NNTP_STRLEN then other code would break. 
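+    ** (The filename length is checked just above; an over-long message ID
+    ** would simply be truncated by strlcpy().)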
+ */ + if (!stbuf[i].st_fname) + stbuf[i].st_fname = xmalloc(SPOOLNAMEBUFF); + if (!stbuf[i].st_id) + stbuf[i].st_id = xmalloc(NNTP_STRLEN); + strlcpy(stbuf[i].st_fname, Article, SPOOLNAMEBUFF); + strlcpy(stbuf[i].st_id, MessageID, NNTP_STRLEN); + stbuf[i].art = art; + stbuf[i].st_hash = hash; + stbuf[i].st_retry = 0; + stbuf[i].st_age = 0; + stnq++; + return i; +} + +/* strel(): release for reuse one of the streaming mode entries */ +static void +strel(int i) { + if (stbuf[i].art) { + article_free(stbuf[i].art); + stbuf[i].art = NULL; + } + if (stbuf[i].st_id) stbuf[i].st_id[0] = '\0'; + if (stbuf[i].st_fname) stbuf[i].st_fname[0] = '\0'; + stnq--; +} + +/* +** Send a line to the server, adding the dot escape and \r\n. +*/ +static bool +REMwrite(char *p, int i, bool escdot) { + int size; + + /* Buffer too full? */ + if (REMbuffend - REMbuffptr < i + 3) { + if (!REMflush()) + return false; + if (REMbuffend - REMbuffer < i + 3) { + /* Line too long -- grow buffer. */ + size = i * 2; + REMbuffer = xrealloc(REMbuffer, size); + REMbuffend = &REMbuffer[size]; + } + } + + /* Dot escape, text of the line, line terminator. */ + if (escdot && (*p == '.')) + *REMbuffptr++ = '.'; + memcpy(REMbuffptr, p, i); + REMbuffptr += i; + *REMbuffptr++ = '\r'; + *REMbuffptr++ = '\n'; + + return true; +} + + +/* +** Print transfer statistics, clean up, and exit. +*/ +static void +ExitWithStats(int x) +{ + static char QUIT[] = "quit"; + double usertime; + double systime; + + if (!Purging) { + REMwrite(QUIT, strlen(QUIT), false); + REMflush(); + } + STATend = TMRnow_double(); + if (GetResourceUsage(&usertime, &systime) < 0) { + usertime = 0; + systime = 0; + } + + if (STATprint) { + printf(STAT1, REMhost, STAToffered, STATaccepted, STATrefused, + STATrejected, STATmissing, STATacceptedsize, STATrejectedsize); + printf("\n"); + printf(STAT2, REMhost, usertime, systime, STATend - STATbegin); + printf("\n"); + } + + syslog(L_NOTICE, STAT1, REMhost, STAToffered, STATaccepted, STATrefused, + STATrejected, STATmissing, STATacceptedsize, STATrejectedsize); + syslog(L_NOTICE, STAT2, REMhost, usertime, systime, STATend - STATbegin); + if (retries) + syslog(L_NOTICE, "%s %lu Streaming retries", REMhost, retries); + + if (BATCHfp != NULL && unlink(BATCHtemp) < 0 && errno != ENOENT) + syswarn("cannot remove %s", BATCHtemp); + sleep(1); + SMshutdown(); + HISclose(History); + exit(x); + /* NOTREACHED */ +} + + +/* +** Close the batchfile and the temporary file, and rename the temporary +** to be the batchfile. +*/ +static void +CloseAndRename(void) +{ + /* Close the files, rename the temporary. */ + if (BATCHqp) { + QIOclose(BATCHqp); + BATCHqp = NULL; + } + if (ferror(BATCHfp) + || fflush(BATCHfp) == EOF + || fclose(BATCHfp) == EOF) { + unlink(BATCHtemp); + syswarn("cannot close %s", BATCHtemp); + ExitWithStats(1); + } + if (rename(BATCHtemp, BATCHname) < 0) { + syswarn("cannot rename %s", BATCHtemp); + ExitWithStats(1); + } +} + + +/* +** Requeue an article, opening the temp file if we have to. If we get +** a file write error, exit so that the original input is left alone. +*/ +static void +Requeue(const char *Article, const char *MessageID) +{ + int fd; + + /* Temp file already open? */ + if (BATCHfp == NULL) { + fd = mkstemp(BATCHtemp); + if (fd < 0) { + syswarn("cannot create a temporary file"); + ExitWithStats(1); + } + BATCHfp = fdopen(fd, "w"); + if (BATCHfp == NULL) { + syswarn("cannot open %s", BATCHtemp); + ExitWithStats(1); + } + } + + /* Called only to get the file open? 
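+       A NULL article means the caller only wanted the temporary batch file
+       created.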
*/ + if (Article == NULL) + return; + + if (MessageID != NULL) + fprintf(BATCHfp, "%s %s\n", Article, MessageID); + else + fprintf(BATCHfp, "%s\n", Article); + if (fflush(BATCHfp) == EOF || ferror(BATCHfp)) { + syswarn("cannot requeue %s", Article); + ExitWithStats(1); + } +} + + +/* +** Requeue an article then copy the rest of the batch file out. +*/ +static void +RequeueRestAndExit(char *Article, char *MessageID) { + char *p; + + if (!AlwaysRewrite + && STATaccepted == 0 && STATrejected == 0 && STATrefused == 0 + && STATmissing == 0) { + warn("nothing sent -- leaving batchfile alone"); + ExitWithStats(1); + } + + warn("rewriting batch file and exiting"); + if (CanStream) { /* streaming mode has a buffer of articles */ + int i; + + for (i = 0; i < STNBUF; i++) { /* requeue unacknowledged articles */ + if ((stbuf[i].st_fname) && (stbuf[i].st_fname[0] != '\0')) { + if (Debug) + fprintf(stderr, "stbuf[%d]= %s, %s\n", + i, stbuf[i].st_fname, stbuf[i].st_id); + Requeue(stbuf[i].st_fname, stbuf[i].st_id); + if (Article == stbuf[i].st_fname) Article = NULL; + strel(i); /* release entry */ + } + } + } + Requeue(Article, MessageID); + + for ( ; BATCHqp; ) { + if ((p = QIOread(BATCHqp)) == NULL) { + if (QIOtoolong(BATCHqp)) { + warn("skipping long line in %s", BATCHname); + QIOread(BATCHqp); + continue; + } + if (QIOerror(BATCHqp)) { + syswarn("cannot read %s", BATCHname); + ExitWithStats(1); + } + + /* Normal EOF. */ + break; + } + + if (fprintf(BATCHfp, "%s\n", p) == EOF + || ferror(BATCHfp)) { + syswarn("cannot requeue %s", p); + ExitWithStats(1); + } + } + + CloseAndRename(); + ExitWithStats(1); +} + + +/* +** Clean up the NNTP escapes from a line. +*/ +static char * +REMclean(char *buff) { + char *p; + + if ((p = strchr(buff, '\r')) != NULL) + *p = '\0'; + if ((p = strchr(buff, '\n')) != NULL) + *p = '\0'; + + /* The dot-escape is only in text, not command responses. */ + return buff; +} + + +/* +** Read a line of input, with timeout. Also handle \r\n-->\n mapping +** and the dot escape. Return true if okay, *or we got interrupted.* +*/ +static bool +REMread(char *start, int size) { + static int count; + static char buffer[BUFSIZ]; + static char *bp; + char *p; + char *q; + char *end; + struct timeval t; + fd_set rmask; + int i; + char c; + + if (!REMflush()) + return false; + + for (p = start, end = &start[size - 1]; ; ) { + if (count == 0) { + /* Fill the buffer. */ + Again: + FD_ZERO(&rmask); + FD_SET(FromServer, &rmask); + t.tv_sec = 10 * 60; + t.tv_usec = 0; + i = select(FromServer + 1, &rmask, NULL, NULL, &t); + if (GotInterrupt) + return true; + if (i < 0) { + if (errno == EINTR) + goto Again; + return false; + } + if (i == 0 || !FD_ISSET(FromServer, &rmask)) + return false; + count = read(FromServer, buffer, sizeof buffer); + if (GotInterrupt) + return true; + if (count <= 0) + return false; + bp = buffer; + } + + /* Process next character. */ + count--; + c = *bp++; + if (c == '\n') + break; + if (p < end) + *p++ = c; + } + + /* We know we got \n; if previous char was \r, turn it into \n. */ + if (p > start && p < end && p[-1] == '\r') + p[-1] = '\n'; + *p = '\0'; + + /* Handle the dot escape. */ + if (*p == '.') { + if (p[1] == '\n' && p[2] == '\0') + /* EOF. */ + return false; + for (q = &start[1]; (*p++ = *q++) != '\0'; ) + continue; + } + return true; +} + + +/* +** Handle the interrupt. +*/ +static void +Interrupted(char *Article, char *MessageID) { + warn("interrupted"); + RequeueRestAndExit(Article, MessageID); +} + + +/* +** Returns the length of the headers. 
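+** The scan stops at the blank line separating headers from body (CRLF CRLF
+** in wire format) and also sets *iscmsg when a Control: header is seen, so
+** the caller knows whether the body must be sent as well.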
+*/ +static int +HeadersLen(ARTHANDLE *art, int *iscmsg) { + const char *p; + char lastchar = -1; + + /* from nnrpd/article.c ARTsendmmap() */ + for (p = art->data; p < (art->data + art->len); p++) { + if (*p == '\r') + continue; + if (*p == '\n') { + if (lastchar == '\n') { + if (*(p-1) == '\r') + p--; + break; + } + if (*(p + 1) == 'C' && strncasecmp(p + 1, "Control: ", 9) == 0) + *iscmsg = 1; + } + lastchar = *p; + } + return (p - art->data); +} + + +/* +** Send a whole article to the server. +*/ +static bool +REMsendarticle(char *Article, char *MessageID, ARTHANDLE *art) { + char buff[NNTP_STRLEN]; + + if (!REMflush()) + return false; + if (HeadersFeed) { + struct iovec vec[3]; + char buf[20]; + int iscmsg = 0; + int len = HeadersLen(art, &iscmsg); + + vec[0].iov_base = (char *) art->data; + vec[0].iov_len = len; + /* Add 14 bytes, which maybe will be the length of the Bytes header */ + snprintf(buf, sizeof(buf), "Bytes: %lu\r\n", + (unsigned long) art->len + 14); + vec[1].iov_base = buf; + vec[1].iov_len = strlen(buf); + if (iscmsg) { + vec[2].iov_base = (char *) art->data + len; + vec[2].iov_len = art->len - len; + } else { + vec[2].iov_base = (char *) "\r\n.\r\n"; + vec[2].iov_len = 5; + } + if (xwritev(ToServer, vec, 3) < 0) + return false; + } else + if (xwrite(ToServer, art->data, art->len) < 0) + return false; + if (GotInterrupt) + Interrupted(Article, MessageID); + if (Debug) { + fprintf(stderr, "> [ article %lu ]\n", (unsigned long) art->len); + fprintf(stderr, "> .\n"); + } + + if (CanStream) return true; /* streaming mode does not wait for ACK */ + + /* What did the remote site say? */ + if (!REMread(buff, (int)sizeof buff)) { + syswarn("no reply after sending %s", Article); + return false; + } + if (GotInterrupt) + Interrupted(Article, MessageID); + if (Debug) + fprintf(stderr, "< %s", buff); + + /* Parse the reply. */ + switch (atoi(buff)) { + default: + warn("unknown reply after %s -- %s", Article, buff); + if (DoRequeue) + Requeue(Article, MessageID); + break; + case NNTP_BAD_COMMAND_VAL: + case NNTP_SYNTAX_VAL: + case NNTP_ACCESS_VAL: + /* The receiving server is likely confused...no point in continuing */ + syslog(L_FATAL, GOT_BADCOMMAND, REMhost, MessageID, REMclean(buff)); + RequeueRestAndExit(Article, MessageID); + /* NOTREACHED */ + case NNTP_RESENDIT_VAL: + case NNTP_GOODBYE_VAL: + Requeue(Article, MessageID); + break; + case NNTP_TOOKIT_VAL: + STATaccepted++; + STATacceptedsize += (double)art->len; + break; + case NNTP_REJECTIT_VAL: + if (logRejects) + syslog(L_NOTICE, REJECTED, REMhost, + MessageID, Article, REMclean(buff)); + STATrejected++; + STATrejectedsize += (double)art->len; + break; + } + + /* Article sent, or we requeued it. */ + return true; +} + + +/* +** Get the Message-ID header from an open article. +*/ +static char * +GetMessageID(ARTHANDLE *art) { + static char *buff; + static int buffsize = 0; + const char *p, *q; + + p = wire_findheader(art->data, art->len, "Message-ID"); + if (p == NULL) + return NULL; + for (q = p; q < art->data + art->len; q++) { + if (*q == '\r' || *q == '\n') + break; + } + if (q == art->data + art->len) + return NULL; + if (buffsize < q - p) { + if (buffsize == 0) + buff = xmalloc(q - p + 1); + else + buff = xrealloc(buff, q - p + 1); + buffsize = q - p; + } + memcpy(buff, p, q - p); + buff[q - p] = '\0'; + return buff; +} + + +/* +** Mark that we got interrupted. +*/ +static RETSIGTYPE +CATCHinterrupt(int s) { + GotInterrupt = true; + + /* Let two interrupts kill us. 
*/ + xsignal(s, SIG_DFL); +} + + +/* +** Mark that the alarm went off. +*/ +static RETSIGTYPE +CATCHalarm(int s UNUSED) +{ + GotAlarm = true; + if (JMPyes) + longjmp(JMPwhere, 1); +} + +/* check articles in streaming NNTP mode +** return true on failure. +*/ +static bool +check(int i) { + char buff[NNTP_STRLEN]; + + /* send "check " to the other system */ + snprintf(buff, sizeof(buff), "check %s", stbuf[i].st_id); + if (!REMwrite(buff, (int)strlen(buff), false)) { + syswarn("cannot check article"); + return true; + } + STAToffered++; + if (Debug) { + if (stbuf[i].st_retry) + fprintf(stderr, "> %s (retry %d)\n", buff, stbuf[i].st_retry); + else + fprintf(stderr, "> %s\n", buff); + } + if (GotInterrupt) + Interrupted(stbuf[i].st_fname, stbuf[i].st_id); + + /* That all. Response is checked later by strlisten() */ + return false; +} + +/* Send article in "takethis streaming NNTP mode. +** return true on failure. +*/ +static bool +takethis(int i) { + char buff[NNTP_STRLEN]; + + if (!stbuf[i].art) { + warn("internal error: null article for %s in takethis", + stbuf[i].st_fname); + return true; + } + /* send "takethis " to the other system */ + snprintf(buff, sizeof(buff), "takethis %s", stbuf[i].st_id); + if (!REMwrite(buff, (int)strlen(buff), false)) { + syswarn("cannot send takethis"); + return true; + } + if (Debug) + fprintf(stderr, "> %s\n", buff); + if (GotInterrupt) + Interrupted((char *)0, (char *)0); + if (!REMsendarticle(stbuf[i].st_fname, stbuf[i].st_id, stbuf[i].art)) + return true; + stbuf[i].st_size = stbuf[i].art->len; + article_free(stbuf[i].art); /* should not need file again */ + stbuf[i].art = 0; /* so close to free descriptor */ + stbuf[i].st_age = 0; + /* That all. Response is checked later by strlisten() */ + return false; +} + + +/* listen for responses. Process acknowledgments to remove items from +** the queue. Also sends the articles on request. Returns true on error. +** return true on failure. +*/ +static bool +strlisten(void) +{ + int resp; + int i; + char *id, *p; + char buff[NNTP_STRLEN]; + int hash; + + while(true) { + if (!REMread(buff, (int)sizeof buff)) { + syswarn("no reply to check"); + return true; + } + if (GotInterrupt) + Interrupted((char *)0, (char *)0); + if (Debug) + fprintf(stderr, "< %s", buff); + + /* Parse the reply. */ + resp = atoi(buff); + /* Skip the 1XX informational messages */ + if ((resp >= 100) && (resp < 200)) continue; + switch (resp) { /* first time is to verify it */ + case NNTP_ERR_GOTID_VAL: + case NNTP_OK_SENDID_VAL: + case NNTP_OK_RECID_VAL: + case NNTP_ERR_FAILID_VAL: + case NNTP_RESENDID_VAL: + if ((id = strchr(buff, '<')) != NULL) { + p = strchr(id, '>'); + if (p) *(p+1) = '\0'; + hash = stidhash(id); + i = stindex(id, hash); /* find table entry */ + if (i < 0) { /* should not happen */ + syslog(L_NOTICE, CANT_FINDIT, REMhost, REMclean(buff)); + return (true); /* can't find it! */ + } + } else { + syslog(L_NOTICE, CANT_PARSEIT, REMhost, REMclean(buff)); + return (true); + } + break; + case NNTP_GOODBYE_VAL: + /* Most likely out of space -- no point in continuing. 
*/ + syslog(L_NOTICE, IHAVE_FAIL, REMhost, REMclean(buff)); + return true; + /* NOTREACHED */ + default: + syslog(L_NOTICE, UNEXPECTED, REMhost, REMclean(buff)); + if (Debug) + fprintf(stderr, "Unknown reply \"%s\"", + buff); + return (true); + } + switch (resp) { /* now we take some action */ + case NNTP_RESENDID_VAL: /* remote wants it later */ + /* try again now because time has passed */ + if (stbuf[i].st_retry < STNRETRY) { + if (check(i)) return true; + stbuf[i].st_retry++; + stbuf[i].st_age = 0; + } else { /* requeue to disk for later */ + Requeue(stbuf[i].st_fname, stbuf[i].st_id); + strel(i); /* release entry */ + } + break; + case NNTP_ERR_GOTID_VAL: /* remote doesn't want it */ + strel(i); /* release entry */ + STATrefused++; + stnofail = 0; + break; + + case NNTP_OK_SENDID_VAL: /* remote wants article */ + if (takethis(i)) return true; + stnofail++; + break; + + case NNTP_OK_RECID_VAL: /* remote received it OK */ + STATacceptedsize += (double) stbuf[i].st_size; + strel(i); /* release entry */ + STATaccepted++; + break; + + case NNTP_ERR_FAILID_VAL: + STATrejectedsize += (double) stbuf[i].st_size; + if (logRejects) + syslog(L_NOTICE, REJ_STREAM, REMhost, + stbuf[i].st_fname, REMclean(buff)); +/* XXXXX Caution THERE BE DRAGONS, I don't think this logs properly + The message ID is returned in the peer response... so this is redundant + stbuf[i].st_id, stbuf[i].st_fname, REMclean(buff)); */ + strel(i); /* release entry */ + STATrejected++; + stnofail = 0; + break; + } + break; + } + return (false); +} + +/* +** Print a usage message and exit. +*/ +static void +Usage(void) +{ + die("Usage: innxmit [-acdHlprs] [-t#] [-T#] host file"); +} + + +/* +** Open an article. If the argument is a token, retrieve the article via +** the storage API. Otherwise, open the file and fake up an ARTHANDLE for +** it. Only fill in those fields that we'll need. Articles not retrieved +** via the storage API will have a type of TOKEN_EMPTY. +*/ +static ARTHANDLE * +article_open(const char *path, const char *id) +{ + TOKEN token; + ARTHANDLE *article; + int fd, length; + struct stat st; + char *p; + + if (IsToken(path)) { + token = TextToToken(path); + article = SMretrieve(token, RETR_ALL); + if (article == NULL) { + if (SMerrno == SMERR_NOENT || SMerrno == SMERR_UNINIT) + STATmissing++; + else { + warn("requeue %s: %s", path, SMerrorstr); + Requeue(path, id); + } + } + return article; + } else { + char *data; + fd = open(path, O_RDONLY); + if (fd < 0) + return NULL; + if (fstat(fd, &st) < 0) { + syswarn("requeue %s", path); + Requeue(path, id); + return NULL; + } + article = xmalloc(sizeof(ARTHANDLE)); + article->type = TOKEN_EMPTY; + article->len = st.st_size; + data = xmalloc(article->len); + if (xread(fd, data, article->len) < 0) { + syswarn("requeue %s", path); + free(data); + free(article); + close(fd); + Requeue(path, id); + return NULL; + } + close(fd); + p = memchr(data, '\n', article->len); + if (p == NULL || p == data) { + warn("requeue %s: cannot find headers", path); + free(data); + free(article); + Requeue(path, id); + return NULL; + } + if (p[-1] != '\r') { + p = ToWireFmt(data, article->len, (size_t *)&length); + free(data); + data = p; + article->len = length; + } + article->data = data; + return article; + } +} + + +/* +** Free an article, using the type field to determine whether to free it +** via the storage API. 
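+** Articles opened from plain spool files are faked up with type TOKEN_EMPTY
+** and xmalloc()ed data, so they are freed directly here; everything else
+** goes back through SMfreearticle().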
+*/ +static void +article_free(ARTHANDLE *article) +{ + if (article->type == TOKEN_EMPTY) { + free((char *)article->data); + free(article); + } else + SMfreearticle(article); +} + + +int main(int ac, char *av[]) { + static char SKIPPING[] = "Skipping \"%s\" --%s?\n"; + int i; + char *p; + ARTHANDLE *art; + FILE *From; + FILE *To; + char buff[8192+128]; + char *Article; + char *MessageID; + RETSIGTYPE (*old)(int) = NULL; + unsigned int ConnectTimeout; + unsigned int TotalTimeout; + int port = NNTP_PORT; + bool val; + char *path; + + openlog("innxmit", L_OPENLOG_FLAGS | LOG_PID, LOG_INN_PROG); + message_program_name = "innxmit"; + + /* Set defaults. */ + if (!innconf_read(NULL)) + exit(1); + + ConnectTimeout = 0; + TotalTimeout = 0; + + umask(NEWSUMASK); + + /* Parse JCL. */ + while ((i = getopt(ac, av, "lacdHprst:T:vP:")) != EOF) + switch (i) { + default: + Usage(); + /* NOTREACHED */ + case 'P': + port = atoi(optarg); + break; + case 'a': + AlwaysRewrite = true; + break; + case 'c': + DoCheck = false; + break; + case 'd': + Debug = true; + break; + case 'H': + HeadersFeed = true; + break; + case 'l': + logRejects = true ; + break ; + case 'p': + AlwaysRewrite = true; + Purging = true; + break; + case 'r': + DoRequeue = false; + break; + case 's': + TryStream = false; + break; + case 't': + ConnectTimeout = atoi(optarg); + break; + case 'T': + TotalTimeout = atoi(optarg); + break; + case 'v': + STATprint = true; + break; + } + ac -= optind; + av += optind; + + /* Parse arguments; host and filename. */ + if (ac != 2) + Usage(); + REMhost = av[0]; + BATCHname = av[1]; + + if (chdir(innconf->patharticles) < 0) + sysdie("cannot cd to %s", innconf->patharticles); + + val = true; + if (!SMsetup(SM_PREOPEN,(void *)&val)) + die("cannot set up the storage manager"); + if (!SMinit()) + die("cannot initialize the storage manager: %s", SMerrorstr); + + /* Open the batch file and lock others out. */ + if (BATCHname[0] != '/') { + BATCHname = concatpath(innconf->pathoutgoing, av[1]); + } + if (((i = open(BATCHname, O_RDWR)) < 0) || ((BATCHqp = QIOfdopen(i)) == NULL)) { + syswarn("cannot open %s", BATCHname); + SMshutdown(); + exit(1); + } + if (!inn_lock_file(QIOfileno(BATCHqp), INN_LOCK_WRITE, true)) { +#if defined(EWOULDBLOCK) + if (errno == EWOULDBLOCK) { + SMshutdown(); + exit(0); + } +#endif /* defined(EWOULDBLOCK) */ + syswarn("cannot lock %s", BATCHname); + SMshutdown(); + exit(1); + } + + /* Get a temporary name in the same directory as the batch file. */ + p = strrchr(BATCHname, '/'); + *p = '\0'; + BATCHtemp = concatpath(BATCHname, "bchXXXXXX"); + *p = '/'; + + /* Set up buffer used by REMwrite. */ + REMbuffer = xmalloc(OUTPUT_BUFFER_SIZE); + REMbuffend = &REMbuffer[OUTPUT_BUFFER_SIZE]; + REMbuffptr = REMbuffer; + + /* Start timing. */ + STATbegin = TMRnow_double(); + + if (!Purging) { + /* Open a connection to the remote server. */ + if (ConnectTimeout) { + GotAlarm = false; + old = xsignal(SIGALRM, CATCHalarm); + if (setjmp(JMPwhere)) { + warn("cannot connect to %s: timed out", REMhost); + SMshutdown(); + exit(1); + } + JMPyes = true; + alarm(ConnectTimeout); + } + if (NNTPconnect(REMhost, port, &From, &To, buff) < 0 || GotAlarm) { + i = errno; + warn("cannot connect to %s: %s", REMhost, + buff[0] ? REMclean(buff) : strerror(errno)); + if (GotAlarm) + syslog(L_NOTICE, CANT_CONNECT, REMhost, "timeout"); + else + syslog(L_NOTICE, CANT_CONNECT, REMhost, + buff[0] ? 
REMclean(buff) : strerror(i)); + SMshutdown(); + exit(1); + } + if (Debug) + fprintf(stderr, "< %s\n", REMclean(buff)); + if (NNTPsendpassword(REMhost, From, To) < 0 || GotAlarm) { + i = errno; + syswarn("cannot authenticate with %s", REMhost); + syslog(L_ERROR, CANT_AUTHENTICATE, + REMhost, GotAlarm ? "timeout" : strerror(i)); + /* Don't send quit; we want the remote to print a message. */ + SMshutdown(); + exit(1); + } + if (ConnectTimeout) { + alarm(0); + xsignal(SIGALRM, old); + JMPyes = false; + } + + /* We no longer need standard I/O. */ + FromServer = fileno(From); + ToServer = fileno(To); + + if (TryStream) { + if (!REMwrite(modestream, (int)strlen(modestream), false)) { + syswarn("cannot negotiate %s", modestream); + } + if (Debug) + fprintf(stderr, ">%s\n", modestream); + /* Does he understand mode stream? */ + if (!REMread(buff, (int)sizeof buff)) { + syswarn("no reply to %s", modestream); + } else { + if (Debug) + fprintf(stderr, "< %s", buff); + + /* Parse the reply. */ + switch (atoi(buff)) { + default: + warn("unknown reply to %s -- %s", modestream, buff); + CanStream = false; + break; + case NNTP_OK_STREAM_VAL: /* YES! */ + CanStream = true; + break; + case NNTP_AUTH_NEEDED_VAL: /* authentication refusal */ + case NNTP_BAD_COMMAND_VAL: /* normal refusal */ + CanStream = false; + break; + } + } + if (CanStream) { + for (i = 0; i < STNBUF; i++) { /* reset buffers */ + stbuf[i].st_fname = 0; + stbuf[i].st_id = 0; + stbuf[i].art = 0; + } + stnq = 0; + } + } + if (HeadersFeed) { + if (!REMwrite(modeheadfeed, strlen(modeheadfeed), false)) + syswarn("cannot negotiate %s", modeheadfeed); + if (Debug) + fprintf(stderr, ">%s\n", modeheadfeed); + if (!REMread(buff, sizeof buff)) { + syswarn("no reply to %s", modeheadfeed); + } else { + if (Debug) + fprintf(stderr, "< %s", buff); + + /* Parse the reply. */ + switch (atoi(buff)) { + case 250: /* YES! */ + break; + case NNTP_BAD_COMMAND_VAL: /* normal refusal */ + die("%s not allowed -- %s", modeheadfeed, buff); + default: + die("unknown reply to %s -- %s", modeheadfeed, buff); + } + } + } + } + + /* Set up signal handlers. */ + xsignal(SIGHUP, CATCHinterrupt); + xsignal(SIGINT, CATCHinterrupt); + xsignal(SIGTERM, CATCHinterrupt); + xsignal(SIGPIPE, SIG_IGN); + if (TotalTimeout) { + xsignal(SIGALRM, CATCHalarm); + alarm(TotalTimeout); + } + + path = concatpath(innconf->pathdb, _PATH_HISTORY); + History = HISopen(path, innconf->hismethod, HIS_RDONLY); + free(path); + + /* Main processing loop. */ + GotInterrupt = false; + GotAlarm = false; + for (Article = NULL, MessageID = NULL; ; ) { + if (GotAlarm) { + warn("timed out"); + /* Don't resend the current article. */ + RequeueRestAndExit((char *)NULL, (char *)NULL); + } + if (GotInterrupt) + Interrupted(Article, MessageID); + + if ((Article = QIOread(BATCHqp)) == NULL) { + if (QIOtoolong(BATCHqp)) { + warn("skipping long line in %s", BATCHname); + QIOread(BATCHqp); + continue; + } + if (QIOerror(BATCHqp)) { + syswarn("cannot read %s", BATCHname); + ExitWithStats(1); + } + + /* Normal EOF -- we're done. */ + QIOclose(BATCHqp); + BATCHqp = NULL; + break; + } + + /* Ignore blank lines. */ + if (*Article == '\0') + continue; + + /* Split the line into possibly two fields. 
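	   Each batch line is a storage token or spool file name, optionally
	   followed by a space and the article's Message-ID.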
*/ + if (Article[0] == '/' + && Article[strlen(innconf->patharticles)] == '/' + && strncmp(Article, innconf->patharticles, strlen(innconf->patharticles)) == 0) + Article += strlen(innconf->patharticles) + 1; + if ((MessageID = strchr(Article, ' ')) != NULL) { + *MessageID++ = '\0'; + if (*MessageID != '<' + || (p = strrchr(MessageID, '>')) == NULL + || *++p != '\0') { + warn("ignoring line %s %s...", Article, MessageID); + continue; + } + } + + if (*Article == '\0') { + if (MessageID) + warn("empty file name for %s in %s", MessageID, BATCHname); + else + warn("empty file name, no message ID in %s", BATCHname); + /* We could do a history lookup. */ + continue; + } + + if (Purging && MessageID != NULL && !Expired(MessageID)) { + Requeue(Article, MessageID); + continue; + } + + /* Drop articles with a message ID longer than NNTP_MSGID_MAXLEN to + avoid overrunning buffers and throwing the server on the + receiving end a blow from behind. */ + if (MessageID != NULL && strlen(MessageID) > NNTP_MSGID_MAXLEN) { + warn("dropping article in %s: long message ID %s", BATCHname, + MessageID); + continue; + } + + art = article_open(Article, MessageID); + if (art == NULL) + continue; + + if (Purging) { + article_free(art); + Requeue(Article, MessageID); + continue; + } + + /* Get the Message-ID from the article if we need to. */ + if (MessageID == NULL) { + if ((MessageID = GetMessageID(art)) == NULL) { + warn(SKIPPING, Article, "no message ID"); + article_free(art); + continue; + } + } + if (GotInterrupt) + Interrupted(Article, MessageID); + + /* Offer the article. */ + if (CanStream) { + int lim; + int hash; + + hash = stidhash(MessageID); + if (stindex(MessageID, hash) >= 0) { /* skip duplicates in queue */ + if (Debug) + fprintf(stderr, "Skipping duplicate ID %s\n", + MessageID); + article_free(art); + continue; + } + /* This code tries to optimize by sending a burst of "check" + * commands before flushing the buffer. This should result + * in several being sent in one packet reducing the network + * overhead. + */ + if (DoCheck && (stnofail < STNC)) lim = STNBUF; + else lim = STNBUFL; + if (stnq >= lim) { /* need to empty a buffer */ + while (stnq >= STNBUFL) { /* or several */ + if (strlisten()) { + RequeueRestAndExit(Article, MessageID); + } + } + } + /* save new article in the buffer */ + i = stalloc(Article, MessageID, art, hash); + if (i < 0) { + article_free(art); + RequeueRestAndExit(Article, MessageID); + } + if (DoCheck && (stnofail < STNC)) { + if (check(i)) { + RequeueRestAndExit((char *)NULL, (char *)NULL); + } + } else { + STAToffered++ ; + if (takethis(i)) { + RequeueRestAndExit((char *)NULL, (char *)NULL); + } + } + /* check for need to resend any IDs */ + for (i = 0; i < STNBUF; i++) { + if ((stbuf[i].st_fname) && (stbuf[i].st_fname[0] != '\0')) { + if (stbuf[i].st_age++ > stnq) { + /* This should not happen but just in case ... 
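		       an entry has aged past the current queue depth without
		       any response, so resend the check a few more times
		       before requeueing the article to disk.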
*/ + if (stbuf[i].st_retry < STNRETRY) { + if (check(i)) /* resend check */ + RequeueRestAndExit((char *)NULL, (char *)NULL); + retries++; + stbuf[i].st_retry++; + stbuf[i].st_age = 0; + } else { /* requeue to disk for later */ + Requeue(stbuf[i].st_fname, stbuf[i].st_id); + strel(i); /* release entry */ + } + } + } + } + continue; /* next article */ + } + snprintf(buff, sizeof(buff), "ihave %s", MessageID); + if (!REMwrite(buff, (int)strlen(buff), false)) { + syswarn("cannot offer article"); + article_free(art); + RequeueRestAndExit(Article, MessageID); + } + STAToffered++; + if (Debug) + fprintf(stderr, "> %s\n", buff); + if (GotInterrupt) + Interrupted(Article, MessageID); + + /* Does he want it? */ + if (!REMread(buff, (int)sizeof buff)) { + syswarn("no reply to ihave"); + article_free(art); + RequeueRestAndExit(Article, MessageID); + } + if (GotInterrupt) + Interrupted(Article, MessageID); + if (Debug) + fprintf(stderr, "< %s", buff); + + /* Parse the reply. */ + switch (atoi(buff)) { + default: + warn("unknown reply to %s -- %s", Article, buff); + if (DoRequeue) + Requeue(Article, MessageID); + break; + case NNTP_BAD_COMMAND_VAL: + case NNTP_SYNTAX_VAL: + case NNTP_ACCESS_VAL: + /* The receiving server is likely confused...no point in continuing */ + syslog(L_FATAL, GOT_BADCOMMAND, REMhost, MessageID, REMclean(buff)); + RequeueRestAndExit(Article, MessageID); + /* NOTREACHED */ + case NNTP_AUTH_NEEDED_VAL: + case NNTP_RESENDIT_VAL: + case NNTP_GOODBYE_VAL: + /* Most likely out of space -- no point in continuing. */ + syslog(L_NOTICE, IHAVE_FAIL, REMhost, REMclean(buff)); + RequeueRestAndExit(Article, MessageID); + /* NOTREACHED */ + case NNTP_SENDIT_VAL: + if (!REMsendarticle(Article, MessageID, art)) + RequeueRestAndExit(Article, MessageID); + break; + case NNTP_HAVEIT_VAL: + STATrefused++; + break; +#if defined(NNTP_SENDIT_LATER) + case NNTP_SENDIT_LATER_VAL: + Requeue(Article, MessageID); + break; +#endif /* defined(NNTP_SENDIT_LATER) */ + } + + article_free(art); + } + if (CanStream) { /* need to wait for rest of ACKs */ + while (stnq > 0) { + if (strlisten()) { + RequeueRestAndExit((char *)NULL, (char *)NULL); + } + } + } + + if (BATCHfp != NULL) + /* We requeued something, so close the temp file. */ + CloseAndRename(); + else if (unlink(BATCHname) < 0 && errno != ENOENT) + syswarn("cannot remove %s", BATCHtemp); + ExitWithStats(0); + /* NOTREACHED */ + return 0; +} diff --git a/backends/map.c b/backends/map.c new file mode 100644 index 0000000..5d5bcb9 --- /dev/null +++ b/backends/map.c @@ -0,0 +1,99 @@ +/* $Id: map.c 6135 2003-01-19 01:15:40Z rra $ +** +*/ + +#include "config.h" +#include "clibrary.h" +#include + +#include "libinn.h" +#include "paths.h" + +#include "map.h" + + +typedef struct _PAIR { + char First; + char *Key; + char *Value; +} PAIR; + +static PAIR *MAPdata; +static PAIR *MAPend; + + +/* +** Free the map. +*/ +void +MAPfree(void) +{ + PAIR *mp; + + for (mp = MAPdata; mp < MAPend; mp++) { + free(mp->Key); + free(mp->Value); + } + free(MAPdata); + MAPdata = NULL; +} + + +/* +** Read the map file. +*/ +void +MAPread(const char *name) +{ + FILE *F; + int i; + PAIR *mp; + char *p; + char buff[BUFSIZ]; + + if (MAPdata != NULL) + MAPfree(); + + /* Open file, count lines. */ + if ((F = fopen(name, "r")) == NULL) { + fprintf(stderr, "Can't open %s, %s\n", name, strerror(errno)); + exit(1); + } + for (i = 0; fgets(buff, sizeof buff, F) != NULL; i++) + continue; + mp = MAPdata = xmalloc((i + 1) * sizeof(PAIR)); + + /* Read each line; ignore blank and comment lines. 
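       Each entry has the form key:value; the first character of the key is
       cached in First so MAPname() can reject most mismatches with a single
       character comparison.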
*/ + fseeko(F, 0, SEEK_SET); + while (fgets(buff, sizeof buff, F) != NULL) { + if ((p = strchr(buff, '\n')) != NULL) + *p = '\0'; + if (buff[0] == '\0' + || buff[0] == '#' + || (p = strchr(buff, ':')) == NULL) + continue; + *p++ = '\0'; + mp->First = buff[0]; + mp->Key = xstrdup(buff); + mp->Value = xstrdup(p); + mp++; + } + fclose(F); + MAPend = mp; +} + + +/* +** Look up a name in the map, return original value if not found. +*/ +char * +MAPname(char *p) +{ + PAIR *mp; + char c; + + for (c = *p, mp = MAPdata; mp < MAPend; mp++) + if (c == mp->First && strcmp(p, mp->Key) == 0) + return mp->Value; + return p; +} diff --git a/backends/map.h b/backends/map.h new file mode 100644 index 0000000..c6fd6e7 --- /dev/null +++ b/backends/map.h @@ -0,0 +1,7 @@ +/* $Id: map.h 5292 2002-03-10 08:59:54Z vinocur $ +** +*/ + +void MAPfree(void); /* free the map */ +void MAPread(const char *name); /* read the map file */ +char *MAPname(char *p); /* lookup in the map */ diff --git a/backends/mod-active.in b/backends/mod-active.in new file mode 100644 index 0000000..4360e0b --- /dev/null +++ b/backends/mod-active.in @@ -0,0 +1,115 @@ +#! /usr/bin/perl +# fixscript will replace this line with require innshellvars.pl + +# batch-active-update +# Author: David Lawrence + +# Reads a series of ctlinnd newgroup/rmgroup/changegroup commands, such as +# is output by checkgroups and actsync, and efficiently handles them all at +# once. Input can come from command line files or stdin, a la awk/sed. + +$oldact = $inn::active; # active file location +$oldact = $inn::active; # active file location (same; shut up, perl -w) +$newact = "$oldact.new$$"; # temporary name for new active file +$actime = "$oldact.times"; # active.times file +$pausemsg = 'batch active update, ok'; # message to be used for pausing? +$diff_flags = ''; # Flags for diff(1); default chosen if null. + +$0 =~ s#^.*/##; + +die "$0: must run as $inn::newsuser user" + unless $> == (getpwnam($inn::newsuser))[2]; + +$debug = -t STDOUT ? 1 : 0; + +$| = 1; # show output as it happens (for an rsh/ssh pipe) + +# Guess at best flags for a condensed diff listing. The +# checks for alternative operating systems is incomplete. 
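+# (GNU diff accepts -U0 for a zero-context unified listing; some vendor
+# diffs only understand -C0, -c0 or plain -c.)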
+unless ($diff_flags) { + if (`diff -v 2>&1` =~ /GNU/) { + $diff_flags = '-U0'; + } elsif ($^O =~ /^(dec_osf|solaris)$/) { + $diff_flags = '-C0'; + } elsif ($^O eq 'nextstep') { + $diff_flags = '-c0'; + } else { + $diff_flags = '-c'; + } +} + +print "reading list of groups to update\n" if $debug; + +$eval = "while () {\n"; +$eval .= " \$group = (split)[0];\n"; + +while (<>) { + if (/^\s*\S*ctlinnd newgroup (\S+) (\S)/) { + $toadd{$1} = $2; + } elsif (/^\s*\S*ctlinnd rmgroup (\S+)/) { + $eval .= " next if \$group eq '$1';\n"; + } elsif (/^\s*\S*ctlinnd changegroup (\S+) (\S)/) { + $eval .= " s/ \\S+\$/ $2/ if \$group eq '$1';\n"; + } +} + +$eval .= " delete \$toadd{\$group};\n"; +$eval .= " if (!print(NEWACT \$_)) {\n"; +$eval .= " die \"\$0: writing \$newact failed (\$!), aborting\\n\";\n"; +$eval .= " }\n"; +$eval .= "}\n"; + +&ctlinnd("pause $pausemsg"); + +open(OLDACT, "< $oldact") || die "$0: open $oldact: $!\n"; +open(NEWACT, "> $newact") || die "$0: open $newact: $!\n"; + +print "rewriting active file\n" if $debug; +eval $eval; +for (sort keys %toadd) { + $add = "$_ 0000000000 0000000001 $toadd{$_}\n"; + if (!print( NEWACT $add)) { + &ctlinnd("go $pausemsg"); + die "$0: writing $newact failed ($!), aborting\n"; + } +} + +close(OLDACT) || warn "$0: close $oldact: $!\n"; +close(NEWACT) || warn "$0: close $newact: $!\n"; + +if (!rename("$oldact", "$oldact.old")) { + warn "$0: rename $oldact $oldact.old: $!\n"; +} + +if (!rename("$newact", "$oldact")) { + die "$0: rename $newact $oldact: $!\n"; +} + +&ctlinnd("reload active 'updated from checkgroups'"); +system("diff $diff_flags $oldact.old $oldact"); +&ctlinnd("go $pausemsg"); + +print "updating $actime\n" if $debug; +if (open(TIMES, ">> $actime")) { + $time = time; + for (sort keys %toadd) { + print TIMES "$_ $time checkgroups-update\n" || last; + } + close(TIMES) || warn "$0: close $actime: $!\n"; +} else { + warn "$0: $actime not updated: $!\n"; +} + +exit 0; + +sub +ctlinnd + +{ + local($command) = @_; + + print "ctlinnd $command\n" if $debug; + if (system("$inn::newsbin/ctlinnd -s $command")) { + die "$0: \"$command\" failed, aborting\n"; + } +} diff --git a/backends/news2mail.in b/backends/news2mail.in new file mode 100644 index 0000000..5f22231 --- /dev/null +++ b/backends/news2mail.in @@ -0,0 +1,153 @@ +#! /usr/bin/perl +# fixscript will replace this line with require innshellvars.pl + +# news to mail channel backend +# +# INN gives us +# @token@ addrs +# for each article that needs to be mailed. We invoke sm on the +# localhost to get the actual article and stuff +# it down sendmail's throat. +# +# This program expect to find a file that maps listname to listaddrs, +# @prefix@/etc/news2mail.cf +# which must contain address mapping pairs such as +# +# big-red-ants@ucsd.edu big-red-ants-digest@ucsd.edu +# +# where the first token is the name fed to us from INN, and which is +# also placed in the To: header of the outgoing mail. It's probably +# the subscriber's list submittal address so that replies go to the +# right place. The second token is the actual address sendmail ships +# the article to. +# +# In the INN newsfeeds file, you need to have a channel feed: +# n2m!:!*:Tc,Ac,Wn*:@prefix@/bin/news2mail +# and a site for each of the various mailing lists you're feeding, +# such as +# big-red-ants@ucsd.edu:rec.pets.redants.*:Tm:n2m! +# +# Error handling is nearly nonexistent. +# +# - Brian Kantor, UCSD Aug 1998 + +require 5.006; + +use FileHandle; +use Sys::Syslog; +use strict; + +my $cfFile = $inn::pathetc . 
"/news2mail.cf" ; +my $sendmail = $inn::mta ; +my $sm = $inn::pathbin . "/sm" ; +my %maddr = (); + +# +# the syslog calls are here but don't work on my system +# +openlog('news2mail', 'pid', 'mail'); + +syslog('info', 'begin'); + +# +# load the list names and their mail addresses from cf file +# #comments and blank lines are ignored +# +unless (open CF, "< $cfFile") { + syslog('notice', 'CF open failed %m'); + die "bad CF"; + } + +while ( ) { + next if /^#|^\s+$/; + my ( $ln, $ma ) = split /\s+/; + $maddr{ $ln } = $ma; + } +close CF; + +# +# for each incoming line from the INN channel +# +while ( ) { + chomp; + + syslog('info', $_); + + my ($token, $lnames) = split /\s+/, $_, 2; + my @addrs = split /\s+/, $lnames; + + my @good = grep { defined $maddr{$_} } @addrs; + my @bad = grep { !defined $maddr{$_} } @addrs; + + if (! @good) { + syslog('notice', "unknown listname $_"); + next; + } + + if (@bad) { + syslog('info', 'skipping unknown lists: ', join(' ', @bad)); + } + mailto($token, $lnames, @maddr{@good}); + } + +syslog ("info", "end") ; + +exit 0; + +sub mailto { + my($t, $l, @a) = @_ ; + + my $sendmail = $inn::mta ; + $sendmail =~ s!\s*%s!! ; + my @command = (split (' ', $sendmail), '-ee', '-fnews', '-odq', @a); +# @command[0] = '/usr/local/bin/debug'; + + syslog('info', join(' ', @command)); + + unless (open(SM, '|-', @command)) { + syslog('notice', join(' ', '|', @command), 'failed!'); + die "bad $sendmail"; + } + + my $smgr = "$sm -q $t |"; + + unless (open(SMGR, $smgr)) { + syslog('notice', "$smgr failed!"); + die "bad $smgr"; + } + + # header + while ( ) { + chomp; + + # empty line signals end of header + if ( /^$/ ) { + print SM "To: $l\n\n"; + last; + } + + # + # skip unnecessary headers + # + next if /^NNTP-Posting-Date:/i; + next if /^NNTP-Posting-Host:/i; + next if /^X-Trace:/i; + next if /^Xref:/i; + next if /^Path:/i; + + # + # convert Newsgroups header into X-Newsgroups + # + s/^Newsgroups:/X-Newsgroups:/i; + + print SM "$_\n"; + } + + # body + while ( ) { + print SM $_; + } + + close(SMGR); + close(SM); + } diff --git a/backends/ninpaths.c b/backends/ninpaths.c new file mode 100644 index 0000000..95f397d --- /dev/null +++ b/backends/ninpaths.c @@ -0,0 +1,519 @@ +/* $Id: ninpaths.c 6362 2003-05-31 18:35:04Z rra $ +** +** New inpaths reporting program. +** +** Idea, data structures and part of code based on inpaths 2.5 +** by Brian Reid, Landon Curt Noll +** +** This version written by Olaf Titz, Feb. 1997. Public domain. +*/ + +#include "config.h" +#include "clibrary.h" +#include +#include + +#define VERSION "3.1.1" + +#define MAXFNAME 1024 /* max length of file name */ +#define MAXLINE 1024 /* max length of Path line */ +#define HASH_TBL 65536 /* hash table size (power of two) */ +#define MAXHOST 128 /* max length of host name */ +#define HOSTF "%127s" /* scanf format for host name */ +#define RECLINE 120 /* dump file line length softlimit */ + +/* structure used to tally the traffic between two hosts */ +struct trec { + struct trec *rlink; /* next in chain */ + struct nrec *linkid; /* pointer to... */ + long tally; /* count */ +}; + +/* structure to hold the information about a host */ +struct nrec { + struct nrec *link; /* next in chain */ + struct trec *rlink; /* start of trec chain */ + char *id; /* host name */ + long no; /* identificator for dump file */ + long sentto; /* tally of articles sent from here */ +}; + +struct nrec *hosthash[HASH_TBL]; + +time_t starttime; /* Start time */ +double atimes=0.0; /* Sum of articles times wrt. 
starttime */ +long total=0, /* Total articles processed */ + sites=0; /* Total sites known */ + +/* malloc and warn if out of mem */ +void * +wmalloc(size_t s) +{ + void *p=malloc(s); + if (!p) + fprintf(stderr, "warning: out of memory\n"); + return p; +} + +/* Hash function due to Glenn Fowler / Landon Curt Noll / Phong Vo */ +int +hash(const char *str) +{ + unsigned long val; + unsigned long c; + + for (val = 0; (c=(unsigned long)(*str)); ++str) { + val *= 16777619; /* magic */ + val ^= c; /* more magic */ + } + return (int)(val & (unsigned long)(HASH_TBL-1)); +} + +/* Look up a host in the hash table. Add if necessary. */ +struct nrec * +hhost(const char *n) +{ + struct nrec *h; + int i=hash(n); + + for (h=hosthash[i]; h; h=h->link) + if (!strcmp(n, h->id)) + return h; + /* not there - allocate */ + h=wmalloc(sizeof(struct nrec)); + if (!h) + return NULL; + h->id=strdup(n); + if (!h->id) { + free(h); return NULL; + } + h->link=hosthash[i]; + h->rlink=NULL; + h->no=h->sentto=0; + hosthash[i]=h; + sites++; + return h; +} + +/* Look up a tally record between hosts. Add if necessary. */ +struct trec * +tallyrec(struct nrec *r, struct nrec *h) +{ + struct trec *t; + for (t=r->rlink; t; t=t->rlink) + if (t->linkid==h) + return t; + t=wmalloc(sizeof(struct trec)); + if (!t) + return NULL; + t->rlink=r->rlink; + t->linkid=h; + t->tally=0; + r->rlink=t; + return t; +} + + +/* Dump file format: + "!!NINP" "\n" + followed by S-records, + "!!NLREC\n" + [3.0] + followed by max. ^2 L-records + [3.1] + followed by max. L-records + "!!NEND" "\n" + starttime, endtime, avgtime as UNIX date + the records are separated by space or \n + an S-record is "site count" + [3.0] + an L-record is "sitea!siteb!count" + [3.1] + an L-record is ":sitea" { "!siteb,count" }... + ",count" omitted if count==1 + where sitea and siteb are numbers of the S-records starting at 0 +*/ + +int +writedump(FILE *f) +{ + int i, j; + long n; + struct nrec *h; + struct trec *t; + + if (!total) { + return -1; + } + fprintf(f, "!!NINP " VERSION " %lu %lu %ld %ld %ld\n", + (unsigned long) starttime, (unsigned long) time(NULL), sites, + total, (long)(atimes/total)+starttime); + n=j=0; + /* write the S-records (hosts), numbering them in the process */ + for (i=0; ilink) { + h->no=n++; + j+=fprintf(f, "%s %ld", h->id, h->sentto); + if (j>RECLINE) { + j=0; + fprintf(f, "\n"); + } else { + fprintf(f, " "); + } + } + if (n!=sites) + fprintf(stderr, "internal error: sites=%ld, dumped=%ld\n", sites, n); + + fprintf(f, "\n!!NLREC\n"); + + n=j=0; + /* write the L-records (links) */ + for (i=0; ilink) + if ((t=h->rlink)) { + j+=fprintf(f, ":%ld", h->no); + for (; t; t=t->rlink) { + j+=fprintf(f, "!%ld", t->linkid->no); + if (t->tally>1) + j+=fprintf(f, ",%ld", t->tally); + n++; + } + if (j>RECLINE) { + j=0; + fprintf(f, "\n"); + } + } + fprintf(f, "\n!!NLEND %ld\n", n); + return 0; +} + +/* Write dump to a named file. Substitute %d in file name with system time. */ + +void +writedumpfile(const char *n) +{ + char buf[MAXFNAME]; + FILE *d; + + if (n[0]=='-' && n[1]=='\0') { + writedump(stdout); + return; + } + snprintf(buf, sizeof(buf), n, time(0)); + d=fopen(buf, "w"); + if (d) { + if (writedump(d)<0) + unlink(buf); + } else { + perror("writedumpfile: fopen"); + } +} + +/* Read a dump file. 
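   The grammar is documented above writedump(); a minimal 3.1 dump might
   look like this (hypothetical hosts and counts):
       !!NINP 3.1.1 855000000 855003600 3 12 855001800
       hosta 5 hostb 4 hostc 3
       !!NLREC
       :0!1,3!2 :1!2,2
       !!NLEND 3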
*/ + +int +readdump(FILE *f) +{ + int a, b; + long i, m, l; + unsigned long st, et, at; + long sit, tot; + struct nrec **n; + struct trec *t; + char c[MAXHOST]; + char v[16]; + + #define formerr(i) {\ + fprintf(stderr, "dump file format error #%d\n", (i)); return -1; } + + if (fscanf(f, "!!NINP %15s %lu %lu %ld %ld %lu\n", + v, &st, &et, &sit, &tot, &at)!=6) + formerr(0); + + n=calloc(sit, sizeof(struct nrec *)); + if (!n) { + fprintf(stderr, "error: out of memory\n"); + return -1; + } + for (i=0; isentto+=l; + } + if ((fscanf(f, HOSTF "\n", c)!=1) || + strcmp(c, "!!NLREC")) + formerr(2); + m=0; + if (!strncmp(v, "3.0", 3)) { + /* Read 3.0-format L-records */ + while (fscanf(f, "%d!%d!%ld ", &a, &b, &l)==3) { + t=tallyrec(n[a], n[b]); + if (!t) + return -1; + t->tally+=l; + ++m; + } + } else if (!strncmp(v, "3.1", 3)) { + /* Read L-records */ + while (fscanf(f, " :%d", &a)==1) { + while ((i=fscanf(f, "!%d,%ld", &b, &l))>0) { + t=tallyrec(n[a], n[b]); + if (i<2) + l=1; + if (!t) + return -1; + t->tally+=l; + ++m; + } + } + } else { + fprintf(stderr, "version %s ", v); + formerr(9); + } + if ((fscanf(f, "!!NLEND %ld\n", &i)!=1) + || (i!=m)) + formerr(3); +#ifdef DEBUG + fprintf(stderr, " dumped start %s total=%ld atimes=%ld (%ld)\n", + ctime(&st), tot, at, at-st); +#endif + /* Adjust the time average and total count */ + if ((unsigned long) starttime > st) { + atimes+=(double)total*(starttime-st); + starttime=st; + } + atimes+=(double)tot*(at-starttime); + total+=tot; +#ifdef DEBUG + fprintf(stderr, " current start %s total=%ld atimes=%.0f (%.0f)\n\n", + ctime(&starttime), total, atimes, atimes/total); +#endif + free(n); + return 0; +} + +/* Read dump from a file. */ + +int +readdumpfile(const char *n) +{ + FILE *d; + int i; + + if (n[0]=='-' && n[1]=='\0') + return readdump(stdin); + + d=fopen(n, "r"); + if (d) { + /* fprintf(stderr, "Reading dump file %s\n", n); */ + i=readdump(d); + fclose(d); + return i; + } else { + perror("readdumpfile: fopen"); + return -1; + } +} + + +/* Process a Path line. */ + +int +pathline(char *c) +{ + char *c2; + struct nrec *h, *r; + struct trec *t; + + r=NULL; + while (*c) { + for (c2=c; *c2 && *c2!='!'; c2++); + if (c2-c>MAXHOST-1) + /* looks broken, dont bother with rest */ + return 0; + while (*c2=='!') + *c2++='\0'; /* skip "!!" too */ + h=hhost(c); + if (!h) + return -1; + ++h->sentto; + if (r && r!=h) { + t=tallyrec(r, h); + if (!t) + return -1; + ++t->tally; + } + c=c2; + r=h; + } + return 0; +} + +/* Take Path lines from file (stdin used here). */ + +void +procpaths(FILE *f) +{ + char buf[MAXLINE]; + char *c, *ce; + int v=1; /* current line is valid */ + + while (fgets(buf, sizeof(buf), f)) { + c=buf; + if (!strncmp(c, "Path: ", 6)) + c+=6; + /* find end of line. Some broken newsreaders preload Path with + a name containing spaces. Chop off those entries. */ + for (ce=c; *ce && !CTYPE(isspace, *ce); ++ce); + if (!*ce) { + /* bogus line */ + v=0; + } else if (v) { + /* valid line */ + for (; ce>c && *ce!='!'; --ce); /* ignore last element */ + *ce='\0'; + if (pathline(c)<0) /* process it */ + /* If an out of memory condition occurs while reading + Path lines, stop reading and write the dump so far. + INN will restart a fresh ninpaths. */ + return; + /* update average age and grand total */ + atimes+=(time(0)-starttime); + ++total; + } else { + /* next line is valid */ + v=1; + } + } +} + +/* Output a report suitable for mailing. 
From inpaths 2.5 */ + +void +report(const char *hostname, int verbose) +{ + double avgAge; + int i, columns, needHost; + long nhosts=0, nlinks=0; + struct nrec *list, *relay; + struct trec *rlist; + char hostString[MAXHOST]; + time_t t0=time(0); + + if (!total) { + fprintf(stderr, "report: no traffic\n"); + return; + } + /* mark own site to not report it */ + list=hhost(hostname); + if (list) + list->id[0]='\0'; + + avgAge=((double)t0 - (atimes/total + (double)starttime)) /86400.0; + printf("ZCZC begin inhosts %s %s %d %ld %3.1f\n", + VERSION,hostname,verbose,total,avgAge); + for (i=0; iid[0] != 0 && list->rlink != NULL) { + if (verbose > 0 || (100*list->sentto > total)) + printf("%ld\t%s\n",list->sentto, list->id); + } + list = list->link; + } + } + printf("ZCZC end inhosts %s\n",hostname); + + printf("ZCZC begin inpaths %s %s %d %ld %3.1f\n", + VERSION,hostname,verbose,total,avgAge); + for (i=0; i 1 || (100*list->sentto > total)) { + if (list->id[0] != 0 && list->rlink != NULL) { + columns = 3+strlen(list->id); + snprintf(hostString,sizeof(hostString),"%s H ",list->id); + needHost = 1; + rlist = list->rlink; + while (rlist != NULL) { + if ( + (100*rlist->tally > total) + || ((verbose > 1)&&(5000*rlist->tally>total)) + ) { + if (needHost) printf("%s",hostString); + needHost = 0; + relay = rlist->linkid; + if (relay->id[0] != 0) { + if (columns > 70) { + printf("\n%s",hostString); + columns = 3+strlen(list->id); + } + printf("%ld Z %s U ", rlist->tally, relay->id); + columns += 9+strlen(relay->id); + } + } + rlist = rlist->rlink; + ++nlinks; + } + if (!needHost) printf("\n"); + } + } + list = list->link; + ++nhosts; + } + } + printf("ZCZC end inpaths %s\n",hostname); +#ifdef DEBUG + fprintf(stderr, "Processed %ld hosts, %ld links.\n", nhosts, nlinks); +#endif +} + +extern char *optarg; + +int +main(int argc, char *argv[]) +{ + int i; + int pf=0, vf=2; + char *df=NULL, *rf=NULL; + + for (i=0; i +#include +#include + +/* Needed on AIX 4.1 to get fd_set and friends. */ +#ifdef HAVE_SYS_SELECT_H +# include +#endif + +#include "inn/history.h" +#include "inn/innconf.h" +#include "inn/messages.h" +#include "libinn.h" +#include "nntp.h" +#include "paths.h" + +/* +** All information about a site we are connected to. +*/ +typedef struct _SITE { + char *Name; + int Rfd; + int Wfd; + char Buffer[BUFSIZ]; + char *bp; + int Count; +} SITE; + + +/* +** Global variables. +*/ +static struct iovec SITEvec[2]; +static char SITEv1[] = "\r\n"; +static char READER[] = "mode reader"; +static unsigned long STATgot; +static unsigned long STAToffered; +static unsigned long STATsent; +static unsigned long STATrejected; +static struct history *History; + + + +/* +** Read a line of input, with timeout. +*/ +static bool +SITEread(SITE *sp, char *start) +{ + char *p; + char *end; + struct timeval t; + fd_set rmask; + int i; + char c; + + for (p = start, end = &start[NNTP_STRLEN - 1]; ; ) { + if (sp->Count == 0) { + /* Fill the buffer. */ + Again: + FD_ZERO(&rmask); + FD_SET(sp->Rfd, &rmask); + t.tv_sec = DEFAULT_TIMEOUT; + t.tv_usec = 0; + i = select(sp->Rfd + 1, &rmask, NULL, NULL, &t); + if (i < 0) { + if (errno == EINTR) + goto Again; + return false; + } + if (i == 0 + || !FD_ISSET(sp->Rfd, &rmask) + || (sp->Count = read(sp->Rfd, sp->Buffer, sizeof sp->Buffer)) < 0) + return false; + if (sp->Count == 0) + return false; + sp->bp = sp->Buffer; + } + + /* Process next character. 
*/ + sp->Count--; + c = *sp->bp++; + if (c == '\n') + break; + if (p < end) + *p++ = c; + } + + /* If last two characters are \r\n, kill the \r as well as the \n. */ + if (p > start && p < end && p[-1] == '\r') + p--; + *p = '\0'; + return true; +} + + +/* +** Send a line to the server, adding \r\n. Don't need to do dot-escape +** since it's only for sending DATA to local site, and the data we got from +** the remote site already is escaped. +*/ +static bool +SITEwrite(SITE *sp, const char *p, int i) +{ + SITEvec[0].iov_base = (char *) p; + SITEvec[0].iov_len = i; + return xwritev(sp->Wfd, SITEvec, 2) >= 0; +} + + +static SITE * +SITEconnect(char *host) +{ + FILE *From; + FILE *To; + SITE *sp; + int i; + + /* Connect and identify ourselves. */ + if (host) + i = NNTPconnect(host, NNTP_PORT, &From, &To, (char *)NULL); + else { + host = innconf->server; + if (host == NULL) + die("no server specified and server not set in inn.conf"); + i = NNTPlocalopen(&From, &To, (char *)NULL); + } + if (i < 0) + sysdie("cannot connect to %s", host); + + if (NNTPsendpassword(host, From, To) < 0) + sysdie("cannot authenticate to %s", host); + + /* Build the structure. */ + sp = xmalloc(sizeof(SITE)); + sp->Name = host; + sp->Rfd = fileno(From); + sp->Wfd = fileno(To); + sp->bp = sp->Buffer; + sp->Count = 0; + return sp; +} + + +/* +** Send "quit" to a site, and get its reply. +*/ +static void +SITEquit(SITE *sp) +{ + char buff[NNTP_STRLEN]; + + SITEwrite(sp, "quit", 4); + SITEread(sp, buff); +} + + +static bool +HIShaveit(char *mesgid) +{ + return HIScheck(History, mesgid); +} + + +static void +Usage(const char *p) +{ + warn("%s", p); + fprintf(stderr, "Usage: nntpget" + " [ -d dist -n grps [-f file | -t time -u file]] host\n"); + exit(1); +} + + +int +main(int ac, char *av[]) +{ + char buff[NNTP_STRLEN]; + char mesgid[NNTP_STRLEN]; + char tbuff[SMBUF]; + char *msgidfile = NULL; + int msgidfd; + const char *Groups; + char *distributions; + char *Since; + char *path; + int i; + struct tm *gt; + struct stat Sb; + SITE *Remote; + SITE *Local = NULL; + FILE *F; + bool Offer; + bool Error; + bool Verbose = false; + char *Update; + char *p; + + /* First thing, set up our identity. */ + message_program_name = "nntpget"; + + /* Set defaults. */ + distributions = NULL; + Groups = NULL; + Since = NULL; + Offer = false; + Update = NULL; + if (!innconf_read(NULL)) + exit(1); + + umask(NEWSUMASK); + + /* Parse JCL. */ + while ((i = getopt(ac, av, "d:f:n:t:ovu:")) != EOF) + switch (i) { + default: + Usage("bad flag"); + /* NOTREACHED */ + case 'd': + distributions = optarg; + break; + case 'u': + Update = optarg; + /* FALLTHROUGH */ + case 'f': + if (Since) + Usage("only one of -f, -t, or -u may be given"); + if (stat(optarg, &Sb) < 0) + sysdie("cannot stat %s", optarg); + gt = gmtime(&Sb.st_mtime); + /* Y2K: NNTP Spec currently allows only two digit years. */ + snprintf(tbuff, sizeof(tbuff), "%02d%02d%02d %02d%02d%02d GMT", + gt->tm_year % 100, gt->tm_mon + 1, gt->tm_mday, + gt->tm_hour, gt->tm_min, gt->tm_sec); + Since = tbuff; + break; + case 'n': + Groups = optarg; + break; + case 'o': + /* Open the history file. 
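	       -o mode offers each article to the local server, so the
	       history file is consulted first to skip Message-IDs we
	       already have.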
*/ + path = concatpath(innconf->pathdb, _PATH_HISTORY); + History = HISopen(path, innconf->hismethod, HIS_RDONLY); + if (!History) + sysdie("cannot open history"); + free(path); + Offer = true; + break; + case 't': + if (Since) + Usage("only one of -t or -f may be given"); + Since = optarg; + break; + case 'v': + Verbose = true; + break; + } + ac -= optind; + av += optind; + if (ac != 1) + Usage("no host given"); + + /* Set up the scatter/gather vectors used by SITEwrite. */ + SITEvec[1].iov_base = SITEv1; + SITEvec[1].iov_len = strlen(SITEv1); + + /* Connect to the remote server. */ + if ((Remote = SITEconnect(av[0])) == NULL) + sysdie("cannot connect to %s", av[0]); + if (!SITEwrite(Remote, READER, (int)strlen(READER)) + || !SITEread(Remote, buff)) + sysdie("cannot start reading"); + + if (Since == NULL) { + F = stdin; + if (distributions || Groups) + Usage("no -d or -n flags allowed when reading stdin"); + } + else { + /* Ask the server for a list of what's new. */ + if (Groups == NULL) + Groups = "*"; + if (distributions) + snprintf(buff, sizeof(buff), "NEWNEWS %s %s <%s>", + Groups, Since, distributions); + else + snprintf(buff, sizeof(buff), "NEWNEWS %s %s", Groups, Since); + if (!SITEwrite(Remote, buff, (int)strlen(buff)) + || !SITEread(Remote, buff)) + sysdie("cannot start list"); + if (buff[0] != NNTP_CLASS_OK) { + SITEquit(Remote); + die("protocol error from %s, got %s", Remote->Name, buff); + } + + /* Create a temporary file. */ + msgidfile = concatpath(innconf->pathtmp, "nntpgetXXXXXX"); + msgidfd = mkstemp(msgidfile); + if (msgidfd < 0) + sysdie("cannot create a temporary file"); + F = fopen(msgidfile, "w+"); + if (F == NULL) + sysdie("cannot open %s", msgidfile); + + /* Read and store the Message-ID list. */ + for ( ; ; ) { + if (!SITEread(Remote, buff)) { + syswarn("cannot read from %s", Remote->Name); + fclose(F); + SITEquit(Remote); + exit(1); + } + if (strcmp(buff, ".") == 0) + break; + if (Offer && HIShaveit(buff)) + continue; + if (fprintf(F, "%s\n", buff) == EOF || ferror(F)) { + syswarn("cannot write %s", msgidfile); + fclose(F); + SITEquit(Remote); + exit(1); + } + } + if (fflush(F) == EOF) { + syswarn("cannot flush %s", msgidfile); + fclose(F); + SITEquit(Remote); + exit(1); + } + fseeko(F, 0, SEEK_SET); + } + + if (Offer) { + /* Connect to the local server. */ + if ((Local = SITEconnect((char *)NULL)) == NULL) { + syswarn("cannot connect to local server"); + fclose(F); + exit(1); + } + } + + /* Loop through the list of Message-ID's. */ + while (fgets(mesgid, sizeof mesgid, F) != NULL) { + STATgot++; + if ((p = strchr(mesgid, '\n')) != NULL) + *p = '\0'; + + if (Offer) { + /* See if the local server wants it. */ + STAToffered++; + snprintf(buff, sizeof(buff), "ihave %s", mesgid); + if (!SITEwrite(Local, buff, (int)strlen(buff)) + || !SITEread(Local, buff)) { + syswarn("cannot offer %s", mesgid); + break; + } + if (atoi(buff) != NNTP_SENDIT_VAL) + continue; + } + + /* Try to get the article. */ + snprintf(buff, sizeof(buff), "article %s", mesgid); + if (!SITEwrite(Remote, buff, (int)strlen(buff)) + || !SITEread(Remote, buff)) { + syswarn("cannot get %s", mesgid); + printf("%s\n", mesgid); + break; + } + if (atoi(buff) != NNTP_ARTICLE_FOLLOWS_VAL) { + if (Offer) { + SITEwrite(Local, ".", 1); + if (!SITEread(Local, buff)) { + syswarn("no reply after %s", mesgid); + break; + } + } + continue; + } + + if (Verbose) + notice("%s...", mesgid); + + /* Read each line in the article and write it. 
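	   The data from the remote side is already dot-escaped, so each
	   line is relayed (or printed) verbatim until the terminating "."
	   line.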
*/ + for (Error = false; ; ) { + if (!SITEread(Remote, buff)) { + syswarn("cannot read %s from %s", mesgid, Remote->Name); + Error = true; + break; + } + if (Offer) { + if (!SITEwrite(Local, buff, (int)strlen(buff))) { + syswarn("cannot send %s", mesgid); + Error = true; + break; + } + } + else + printf("%s\n", buff); + if (strcmp(buff, ".") == 0) + break; + } + if (Error) { + printf("%s\n", mesgid); + break; + } + STATsent++; + + /* How did the local server respond? */ + if (Offer) { + if (!SITEread(Local, buff)) { + syswarn("no reply after %s", mesgid); + printf("%s\n", mesgid); + break; + } + i = atoi(buff); + if (i == NNTP_TOOKIT_VAL) + continue; + if (i == NNTP_RESENDIT_VAL) { + printf("%s\n", mesgid); + break; + } + syswarn("%s to %s", buff, mesgid); + STATrejected++; + } + } + + /* Write rest of the list, close the input. */ + if (!feof(F)) + while (fgets(mesgid, sizeof mesgid, F) != NULL) { + if ((p = strchr(mesgid, '\n')) != NULL) + *p = '\0'; + printf("%s\n", mesgid); + STATgot++; + } + fclose(F); + + /* Remove our temp file. */ + if (msgidfile && unlink(msgidfile) < 0) + syswarn("cannot remove %s", msgidfile); + + /* All done. */ + SITEquit(Remote); + if (Offer) + SITEquit(Local); + + /* Update timestamp file? */ + if (Update) { + if ((F = fopen(Update, "w")) == NULL) + sysdie("cannot update %s", Update); + fprintf(F, "got %ld offered %ld sent %ld rejected %ld\n", + STATgot, STAToffered, STATsent, STATrejected); + if (ferror(F) || fclose(F) == EOF) + sysdie("cannot update %s", Update); + } + + exit(0); + /* NOTREACHED */ +} diff --git a/backends/nntpsend.in b/backends/nntpsend.in new file mode 100644 index 0000000..eb68718 --- /dev/null +++ b/backends/nntpsend.in @@ -0,0 +1,472 @@ +#! /bin/sh +# fixscript will replace this line with code to load innshellvars + +## $Revision: 5047 $ +## Send news via NNTP by running several innxmit processes in the background. +## Usage: +## nntpsend [-n][-p][-r][-s size][-S][-t timeout][-T limit][host fqdn]... +## -a Always have innxmit rewrite the batchfile +## -d debug mode, run innxmits with debug as well +## -D same as -d except innxmits are not debugged +## -p Run innxmit with -p to prune batch files +## -r innxmit, don't requeue on unexpected error code +## -s size limit the =n file to size bytes +## -c disable message-ID checking in streaming mode +## -t timeout innxmit timeout to make connection (def: 180) +## -T limit innxmit connection transmit time limit (def: forever) +## -P portnum port number to use +## -l innxmit, log rejected articles +## -N innxmit, disable streaming mode +## -n do not lock for nntpsend, do not sleep between sets +## -w delay wait delay seconds just before innxmit +## host fqdn send to host and qualified domain (def: nntpsend.ctl) +## If no "host fqdn" pairs appear on the command line, then ${CTLFILE} +## file is read. + +PROGNAME=`basename $0` +LOCK=${LOCKS}/LOCK.${PROGNAME} +CTLFILE=${PATHETC}/${PROGNAME}.ctl +LOG=${MOST_LOGS}/${PROGNAME}.log + +## Set defaults. +A_FLAG= +D_FLAG= +NO_LOG_FLAG= +P_FLAG= +R_FLAG= +S_FLAG= +C_FLAG= +L_FLAG= +S2_FLAG= +TRUNC_SIZE= +T_FLAG= +TIMELIMIT= +PP_FLAG= +NOLOCK= +W_SECONDS= + +## Parse JCL. 
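+## Options are walked by hand (shifting as we go) so a flag's argument can
+## be given either as a separate word (-s 1000000) or glued on (-s1000000).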
+MORETODO=true +while ${MORETODO} ; do + case X"$1" in + X-a) + A_FLAG="-a" + ;; + X-d) + D_FLAG="-d" + NO_LOG_FLAG="true" + ;; + X-D) + NO_LOG_FLAG="true" + ;; + X-l) + L_FLAG="-l" + ;; + X-p) + P_FLAG="-p" + ;; + X-r) + R_FLAG="-r" + ;; + X-S) + S_FLAG="-S" + ;; + X-N) + S2_FLAG="-s" + ;; + X-c) + C_FLAG="-c" + ;; + X-s) + if [ -z "$2" ] ; then + echo "${PROGNAME}: option requires an argument -- s" 1>&2 + exit 1 + fi + TRUNC_SIZE="$2" + shift + ;; + X-s*) + TRUNC_SIZE="`echo $1 | ${SED} -e 's/-s//'`" + ;; + X-t) + if [ -z "$2" ] ; then + echo "${PROGNAME}: option requires an argument -- t" 1>&2 + exit 1 + fi + T_FLAG="-t$2" + shift + ;; + X-t*) + T_FLAG="$1" + ;; + X-P) + if [ -z "$2" ] ; then + echo "${PROGNAME}: option requires an argument -- P" 1>&2 + exit 1 + fi + PP_FLAG="-P$2" + shift + ;; + X-P*) + PP_FLAG="$1" + ;; + X-T) + if [ -z "$2" ] ; then + echo "${PROGNAME}: option requires an argument -- T" 1>&2 + exit 1 + fi + TIMELIMIT="-T$2" + shift + ;; + X-T*) + TIMELIMIT="$1" + ;; + X-n) + NOLOCK=true + ;; + X-w) + if [ -z "$2" ] ; then + echo "${PROGNAME}: option requires an argument -- w" 1>&2 + exit 1 + fi + W_SECONDS="$2" + shift + ;; + X--) + shift + MORETODO=false + ;; + X-*) + echo "${PROGNAME}: illegal option -- $1" 1>&2 + exit 1 + ;; + *) + MORETODO=false + ;; + esac + ${MORETODO} && shift +done + +## grab the lock if not -n +NNTPLOCK=${LOCKS}/LOCK.nntpsend +if [ -z "${NOLOCK}" ]; then + shlock -p $$ -f ${NNTPLOCK} || { + # nothing to do + exit 0 + } +fi + +## Parse arguments; host/fqdn pairs. +INPUT=${TMPDIR}/nntpsend$$ +cp /dev/null ${INPUT} +while [ $# -gt 0 ]; do + if [ $# -lt 2 ]; then + echo "${PROGNAME}: Bad host/fqdn pair" 1>&2 + rm -f ${NNTPLOCK} + exit 1 + fi + echo "$1 $2" >>${INPUT} + shift + shift +done + +## If nothing specified on the command line, read the control file. +if [ ! -s ${INPUT} ] ; then + if [ ! -r ${CTLFILE} ]; then + echo "${PROGNAME}: cannot read ${CTLFILE}" + rm -f ${NNTPLOCK} + exit 1 + fi + ${SED} -e 's/#.*//' -e '/^$/d' -e 's/::\([^:]*\)$/:max:\1/' \ + -e 's/:/ /g' <${CTLFILE} >${INPUT} +fi + +## Go to where the action is. +if [ ! -d ${BATCH} ]; then + echo "${PROGNAME}: directory ${BATCH} not found" 1>&2 + rm -f ${NNTPLOCK} + exit 1 +fi +cd ${BATCH} + +## Set up log file. +umask 002 +if [ -z "${NO_LOG_FLAG}" ]; then + test ! -f ${LOG} && touch ${LOG} + chmod 0660 ${LOG} + exec >>${LOG} 2>&1 +fi +PARENTPID=$$ +echo "${PROGNAME}: [${PARENTPID}] start" + +## Set up environment. +export BATCH PROGNAME PARENTPID INNFLAGS + +## Loop over all sites. +cat ${INPUT} | while read SITE HOST SIZE_ARG FLAGS; do + ## Parse the input parameters. + if [ -z "${SITE}" -o -z "${HOST}" ] ; then + echo "Ignoring bad line: ${SITE} ${HOST} ${SIZE_ARG} ${FLAGS}" 1>&2 + continue + fi + + ## give up early if we cannot even lock it + ## + ## NOTE: This lock is not nntpsend's lock but rather the + ## lock that the parent shell of innxmit will use. + ## Later on the child will take the lock from us. + ## + LOCK="${LOCKS}/LOCK.${SITE}" + shlock -p $$ -f "${LOCK}" || continue + + ## Compute the specific parameters for this site. 
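+    ## SIZE_ARG is the third nntpsend.ctl field (or -s): "max" means no
+    ## size limit, a plain number is a truncation size for shrinkfile, and
+    ## "maxsize-truncsize" supplies both its -m and -s values.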
+ test "${SIZE_ARG}" = "max" && SIZE_ARG= + if [ -n "${TRUNC_SIZE}" ]; then + SIZE_ARG="${TRUNC_SIZE}" + fi + ## Parse the SIZE_ARG for either MaxSize-TruncSize or TruncSize + case "${SIZE_ARG}" in + *-*) MAXSIZE="`echo ${SIZE_ARG} | ${SED} -e 's/-.*//'`"; + SIZE="`echo ${SIZE_ARG} | ${SED} -e 's/^.*-//'`" ;; + *) MAXSIZE="${SIZE_ARG}"; + SIZE="${SIZE_ARG}" ;; + esac + D_PARAM= + R_PARAM= + S_PARAM= + S2_PARAM= + C_PARAM= + PP_PARAM= + L_PARAM= + TIMEOUT_PARAM= + TIMELIMIT_PARAM= + if [ -z "${FLAGS}" ]; then + MORETODO=false + else + MORETODO=true + set -- ${FLAGS} + fi + while ${MORETODO} ; do + case "X$1" in + X-a) + ;; + X-d) + D_PARAM="-d" + ;; + X-c) + C_PARAM="-c" + ;; + X-p) + P_PARAM="-p" + ;; + X-r) + R_PARAM="-r" + ;; + X-S) + S_PARAM="-S" + ;; + X-s) + S2_PARAM="-s" + ;; + X-l) + L_PARAM="-l" + ;; + X-t) + if [ -z "$2" ] ; then + echo "${PROGNAME}: option requires an argument -- t" 1>&2 + rm -f "${NNTPLOCK}" "${LOCK}" + exit 1 + fi + TIMEOUT_PARAM="-t$2" + shift + ;; + X-t*) + TIMEOUT_PARAM="$1" + ;; + X-P) + if [ -z "$2" ] ; then + echo "${PROGNAME}: option requires an argument -- P" 1>&2 + rm -f "${NNTPLOCK}" "${LOCK}" + exit 1 + fi + PP_PARAM="-P$2" + shift + ;; + X-P*) + PP_PARAM="$1" + ;; + X-T) + if [ -z "$2" ] ; then + echo "${PROGNAME}: option requires an argument -- T" 1>&2 + rm -f "${NNTPLOCK}" "${LOCK}" + exit 1 + fi + TIMELIMIT_PARAM="-T$2" + shift + ;; + X-T*) + TIMELIMIT_PARAM="$1" + ;; + X-w) + if [ -z "$2" ] ; then + echo "${PROGNAME}: option requires an argument -- w" 1>&2 + rm -f "${NNTPLOCK}" "${LOCK}" + exit 1 + fi + W_SECONDS="$2" + shift + ;; + *) + MORETODO=false + ;; + esac + ${MORETODO} && shift + done + if [ -z "${SIZE}" -o -n "${A_FLAG}" ]; then + # rewrite batch file if we do not have a size limit + INNFLAGS="-a" + else + # we have a size limit, let shrinkfile rewrite the file + INNFLAGS= + fi + if [ -n "${D_FLAG}" ]; then + INNFLAGS="${INNFLAGS} ${D_FLAG}" + else + test -n "${D_PARAM}" && INNFLAGS="${INNFLAGS} ${D_PARAM}" + fi + if [ -n "${C_FLAG}" ]; then + INNFLAGS="${INNFLAGS} ${C_FLAG}" + else + test -n "${C_PARAM}" && INNFLAGS="${INNFLAGS} ${C_PARAM}" + fi + if [ -n "${P_FLAG}" ]; then + INNFLAGS="${INNFLAGS} ${P_FLAG}" + else + test -n "${P_PARAM}" && INNFLAGS="${INNFLAGS} ${P_PARAM}" + fi + if [ -n "${L_FLAG}" ]; then + INNFLAGS="${INNFLAGS} ${L_FLAG}" + else + test -n "${L_PARAM}" && INNFLAGS="${INNFLAGS} ${L_PARAM}" + fi + if [ -n "${R_FLAG}" ]; then + INNFLAGS="${INNFLAGS} ${R_FLAG}" + else + test -n "${R_PARAM}" && INNFLAGS="${INNFLAGS} ${R_PARAM}" + fi + if [ -n "${S_FLAG}" ]; then + INNFLAGS="${INNFLAGS} ${S_FLAG}" + else + test -n "${S_PARAM}" && INNFLAGS="${INNFLAGS} ${S_PARAM}" + fi + if [ -n "${S2_FLAG}" ]; then + INNFLAGS="${INNFLAGS} ${S2_FLAG}" + else + test -n "${S2_PARAM}" && INNFLAGS="${INNFLAGS} ${S2_PARAM}" + fi + if [ -n "${T_FLAG}" ]; then + INNFLAGS="${INNFLAGS} ${T_FLAG}" + else + test -n "${TIMEOUT_PARAM}" && INNFLAGS="${INNFLAGS} ${TIMEOUT_PARAM}" + fi + if [ -n "${PP_FLAG}" ]; then + INNFLAGS="${INNFLAGS} ${PP_FLAG}" + else + test -n "${PP_PARAM}" && INNFLAGS="${INNFLAGS} ${PP_PARAM}" + fi + if [ -n "${TIMELIMIT}" ]; then + INNFLAGS="${INNFLAGS} ${TIMELIMIT}" + else + test -n "${TIMELIMIT_PARAM}" \ + && INNFLAGS="${INNFLAGS} ${TIMELIMIT_PARAM}" + fi + + ## Flush the buffers for the site now, rather than in the child. + ## This helps pace the number of ctlinnd commands because the + ## nntpsend process does not proceed until the site flush has + ## been completed. 
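+    ## The sequence is: move ${SITE} aside to ${SITE}.work, ask innd to
+    ## flush the site so it reopens its batch file, then append the flushed
+    ## work to ${SITE}=n for innxmit.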
+ ## + # carry old unfinished work over to this task + BATCHFILE="${SITE}=n" + if [ -f "${SITE}.work" ] ; then + cat ${SITE}.work >>"${BATCHFILE}" + rm -f "${SITE}.work" + fi + # form BATCHFILE to hold the work for this site + if [ -f "${SITE}" ]; then + mv "${SITE}" "${SITE}.work" + if ctlinnd -s -t30 flush ${SITE} ; then + cat ${SITE}.work >>"${BATCHFILE}" + rm -f ${SITE}.work + else + # flush failed, continue if we have any batchfile to work on + echo "${PROGNAME}: bad flush for ${HOST} via ${SITE}" + if [ -f "${BATCHFILE}" ]; then + echo "${PROGNAME}: trying ${HOST} via ${SITE} anyway" + else + echo "${PROGNAME}: skipping ${HOST} via ${SITE}" + rm -f ${LOCK} + continue + fi + fi + else + # nothing to work on, so flush and move on + ctlinnd -s -t30 flush ${SITE} + echo "${PROGNAME}: file ${BATCH}/${SITE} for ${HOST} not found" + if [ -f "${BATCHFILE}" ]; then + echo "${PROGNAME}: trying ${HOST} via ${SITE} anyway" + else + echo "${PROGNAME}: skipping ${HOST} via ${SITE}" + rm -f ${LOCK} + continue + fi + fi + + ## Start sending this site in the background. + export MAXSIZE SITE HOST PROGNAME PARENTPID SIZE TMPDIR LOCK BATCHFILE W_SECONDS + sh -c ' + # grab the lock from the parent + # + # This is safe because only the parent will have locked + # the site. We break the lock and reclaim it. + rm -f ${LOCK} + trap "rm -f ${LOCK} ; exit 1" 1 2 3 15 + shlock -p $$ -f ${LOCK} || { + WHY="`cat ${LOCK}`" + echo "${PROGNAME}: [${PARENTPID}:$$] ${SITE} locked ${WHY} `date`" + exit + } + # process the site BATCHFILE + if [ -f "${BATCHFILE}" ]; then + test -n "${SIZE}" && shrinkfile -m${MAXSIZE} -s${SIZE} -v ${BATCHFILE} + if [ -s ${BATCHFILE} ] ; then + if [ -n "${W_SECONDS}" ] ; then + echo "${PROGNAME}: [${PARENTPID}:$$] sleeping ${W_SECONDS} seconds before ${SITE}" + sleep "${W_SECONDS}" + fi + echo "${PROGNAME}: [${PARENTPID}:$$] begin ${SITE} `date`" + echo "${PROGNAME}: [${PARENTPID}:$$] innxmit ${INNFLAGS} ${HOST} ..." + eval innxmit ${INNFLAGS} ${HOST} ${BATCH}/${BATCHFILE} + echo "${PROGNAME}: [${PARENTPID}:$$] end ${SITE} `date`" + else + rm -f ${BATCHFILE} + fi + else + echo "${PROGNAME}: file ${BATCH}/${BATCHFILE} for ${HOST} not found" + fi + rm -f ${LOCK} + ' & +done + +## release the nntpsend lock and clean up before we wait on child processes +if [ -z "${NOLOCK}" ]; then + rm -f ${NNTPLOCK} +fi +rm -f ${INPUT} + +## wait for child processes to finish +wait + +## all done +echo "${PROGNAME}: [${PARENTPID}] stop" +exit 0 diff --git a/backends/overchan.c b/backends/overchan.c new file mode 100644 index 0000000..30bb028 --- /dev/null +++ b/backends/overchan.c @@ -0,0 +1,142 @@ +/* $Id: overchan.c 6135 2003-01-19 01:15:40Z rra $ +** +** Parse input to add to news overview database. +*/ + +#include "config.h" +#include "clibrary.h" +#include "portable/time.h" +#include +#include +#include + +#include "inn/innconf.h" +#include "inn/messages.h" +#include "inn/qio.h" +#include "libinn.h" +#include "ov.h" +#include "paths.h" + +unsigned int NumArts; +unsigned int StartTime; +unsigned int TotOvTime; + +/* + * Timer function (lifted from innd/timer.c). + * This function is designed to report the number of milliseconds since + * the first invocation. I wanted better resolution than time(), and + * something easier to work with than gettimeofday()'s struct timeval's. + */ + +static unsigned gettime(void) +{ + static int init = 0; + static struct timeval start_tv; + struct timeval tv; + + if (! 
init) { + gettimeofday(&start_tv, NULL); + init++; + } + gettimeofday(&tv, NULL); + return((tv.tv_sec - start_tv.tv_sec) * 1000 + (tv.tv_usec - start_tv.tv_usec) / 1000); +} + +/* +** Process the input. Data comes from innd in the form: +** @token@ data +*/ + +#define TEXT_TOKEN_LEN (2*sizeof(TOKEN)+2) +static void ProcessIncoming(QIOSTATE *qp) +{ + char *Data; + char *p; + TOKEN token; + unsigned int starttime, endtime; + time_t Time, Expires; + + for ( ; ; ) { + /* Read the first line of data. */ + if ((Data = QIOread(qp)) == NULL) { + if (QIOtoolong(qp)) { + warn("line too long"); + continue; + } + break; + } + + if (Data[0] != '@' || strlen(Data) < TEXT_TOKEN_LEN+2 + || Data[TEXT_TOKEN_LEN-1] != '@' || Data[TEXT_TOKEN_LEN] != ' ') { + warn("malformed token %s", Data); + continue; + } + token = TextToToken(Data); + Data += TEXT_TOKEN_LEN+1; /* skip over token and space */ + for (p = Data; !ISWHITE(*p) ;p++) ; + *p++ = '\0'; + Time = (time_t)atol(Data); + for (Data = p; !ISWHITE(*p) ;p++) ; + *p++ = '\0'; + Expires = (time_t)atol(Data); + Data = p; + NumArts++; + starttime = gettime(); + if (OVadd(token, Data, strlen(Data), Time, Expires) == OVADDFAILED) + syswarn("cannot write overview %s", Data); + endtime = gettime(); + TotOvTime += endtime - starttime; + } + QIOclose(qp); +} + + +int main(int ac, char *av[]) +{ + QIOSTATE *qp; + unsigned int now; + + /* First thing, set up our identity. */ + message_program_name = "overchan"; + + /* Log warnings and fatal errors to syslog unless we were given command + line arguments, since we're probably running under innd. */ + if (ac == 0) { + openlog("overchan", L_OPENLOG_FLAGS | LOG_PID, LOG_INN_PROG); + message_handlers_warn(1, message_log_syslog_err); + message_handlers_die(1, message_log_syslog_err); + message_handlers_notice(1, message_log_syslog_notice); + } + + /* Set defaults. */ + if (!innconf_read(NULL)) + exit(1); + umask(NEWSUMASK); + if (innconf->enableoverview && !innconf->useoverchan) + warn("overchan is running while innd is creating overview data (you" + " can ignore this message if you are running makehistory -F)"); + + ac -= 1; + av += 1; + + if (!OVopen(OV_WRITE)) + die("cannot open overview"); + + StartTime = gettime(); + if (ac == 0) + ProcessIncoming(QIOfdopen(STDIN_FILENO)); + else { + for ( ; *av; av++) + if (strcmp(*av, "-") == 0) + ProcessIncoming(QIOfdopen(STDIN_FILENO)); + else if ((qp = QIOopen(*av)) == NULL) + syswarn("cannot open %s", *av); + else + ProcessIncoming(qp); + } + OVclose(); + now = gettime(); + notice("timings %u arts %u of %u ms", NumArts, TotOvTime, now - StartTime); + exit(0); + /* NOTREACHED */ +} diff --git a/backends/send-ihave.in b/backends/send-ihave.in new file mode 100644 index 0000000..f1ba03c --- /dev/null +++ b/backends/send-ihave.in @@ -0,0 +1,95 @@ +#! /bin/sh +# fixscript will replace this line with code to load innshellvars + +## $Revision: 2674 $ +## SH script to send IHAVE batches out. + +PROGNAME=`basename $0` +LOG=${MOST_LOGS}/${PROGNAME}.log + +## How many Message-ID's per message. +PERMESSAGE=1000 + +## Go to where the action is, start logging +cd $BATCH +umask 002 +DEBUG="" +if [ "X$1" = X-d ] ; then + DEBUG="-d" + shift +else + test ! -f ${LOG} && touch ${LOG} + chmod 0660 ${LOG} + exec >>${LOG} 2>&1 +fi + +echo "${PROGNAME}: [$$] begin `date`" + +## List of sitename:hostname pairs to send to +if [ -n "$1" ] ; then + LIST="$*" +else + echo "${PROGNAME}: [$$] no sites specified" >&2 + exit 1 +fi + +## Do the work... 
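+## For each site: take its lock, fold any leftover ${SITE}.ihave.work into
+## the batch, flush the ihave feed, then post the Message-ID list as ihave
+## control messages in clumps of ${PERMESSAGE}.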
+for SITE in ${LIST}; do + case $SITE in + *:*) + HOST=`expr $SITE : '.*:\(.*\)'` + SITE=`expr $SITE : '\(.*\):.*'` + ;; + *) + HOST=$SITE + ;; + esac + BATCHFILE=${SITE}.ihave.batch + LOCK=${LOCKS}/LOCK.${SITE}.ihave + trap 'rm -f ${LOCK} ; exit 1' 1 2 3 15 + shlock -p $$ -f ${LOCK} || { + echo "${PROGNAME}: [$$] ${SITE}.ihave locked by `cat ${LOCK}`" + continue + } + + ## See if any data is ready for host. + if [ -f ${SITE}.ihave.work ] ; then + cat ${SITE}.ihave.work >>${BATCHFILE} + rm -f ${SITE}.ihave.work + fi + if [ ! -f ${SITE}.ihave -o ! -s ${SITE}.ihave ] ; then + if [ ! -f ${BATCHFILE} -o ! -s ${BATCHFILE} ] ; then + rm -f ${LOCK} + continue + fi + fi + mv ${SITE}.ihave ${SITE}.ihave.work + ctlinnd -s -t30 flush ${SITE}.ihave || continue + cat ${SITE}.ihave.work >>${BATCHFILE} + rm -f ${SITE}.ihave.work + if [ ! -s ${BATCHFILE} ] ; then + echo "${PROGNAME}: [$$] no articles for ${SITE}.ihave" + rm -f ${BATCHFILE} + continue + fi + + echo "${PROGNAME}: [$$] begin ${SITE}.ihave" + + ## Write out the batchfile as a control message, in clumps. + export SITE PERMESSAGE BATCHFILE + while test -s ${BATCHFILE} ; do + ( + echo Newsgroups: to.${SITE} + echo Control: ihave `innconfval pathhost` + echo Subject: cmsg ihave `innconfval pathhost` + echo '' + ${SED} -e ${PERMESSAGE}q <${BATCHFILE} + ) | ${INEWS} -h + ${SED} -e "1,${PERMESSAGE}d" <${BATCHFILE} >${BATCHFILE}.tmp + mv ${BATCHFILE}.tmp ${BATCHFILE} + done + echo "${PROGNAME}: [$$] end ${SITE}.ihave" + rm -f ${LOCK} +done + +echo "${PROGNAME}: [$$] end `date`" diff --git a/backends/send-nntp.in b/backends/send-nntp.in new file mode 100644 index 0000000..018eb6d --- /dev/null +++ b/backends/send-nntp.in @@ -0,0 +1,88 @@ +#! /bin/sh +# fixscript will replace this line with code to load innshellvars + +## $Revision: 4115 $ +## SH script to send NNTP news out. + +PROGNAME=`basename $0` +LOG=${MOST_LOGS}/${PROGNAME}.log + +## Go to where the action is, start logging +cd $BATCH +umask 002 +DEBUG="" +if [ "X$1" = X-d ] ; then + DEBUG="-d" + shift +else + test ! -f ${LOG} && touch ${LOG} + chmod 0660 ${LOG} + exec >>${LOG} 2>&1 +fi + +echo "${PROGNAME}: [$$] begin `date`" + +## List of sitename:hostname pairs to send to +if [ -n "$1" ] ; then + LIST="$*" +else + echo "${PROGNAME}: [$$] no sites specified" >&2 + exit 1 +fi + +## Do the work... +for SITE in ${LIST}; do + case $SITE in + *:*) + HOST=`expr $SITE : '.*:\(.*\)'` + SITE=`expr $SITE : '\(.*\):.*'` + ;; + *) + HOST=$SITE + ;; + esac + case $HOST in + *@*) + PORT=`expr $HOST : '\(.*\)@.*'` + HOST=`expr $HOST : '.*@\(.*\)'` + ;; + *) + PORT=119 + ;; + esac + BATCHFILE=${SITE}.nntp + LOCK=${LOCKS}/LOCK.${SITE} + trap 'rm -f ${LOCK} ; exit 1' 1 2 3 15 + shlock -p $$ -f ${LOCK} || { + echo "${PROGNAME}: [$$] ${SITE} locked by `cat ${LOCK}`" + continue + } + + ## See if any data is ready for host. + if [ -f ${SITE}.work ] ; then + cat ${SITE}.work >>${BATCHFILE} + rm -f ${SITE}.work + fi + if [ ! -f ${SITE} -o ! -s ${SITE} ] ; then + if [ ! -f ${BATCHFILE} -o ! -s ${BATCHFILE} ] ; then + rm -f ${LOCK} + continue + fi + fi + mv ${SITE} ${SITE}.work + ctlinnd -s -t30 flush ${SITE} || continue + cat ${SITE}.work >>${BATCHFILE} + rm -f ${SITE}.work + if [ ! 
-s ${BATCHFILE} ] ; then + echo "${PROGNAME}: [$$] no articles for ${SITE}" + rm -f ${BATCHFILE} + continue + fi + + echo "${PROGNAME}: [$$] begin ${SITE}" + time innxmit ${DEBUG} -P ${PORT} ${HOST} ${BATCH}/${BATCHFILE} + echo "${PROGNAME}: [$$] end ${SITE}" + rm -f ${LOCK} +done + +echo "${PROGNAME}: [$$] end `date`" diff --git a/backends/send-uucp.in b/backends/send-uucp.in new file mode 100644 index 0000000..8ba2937 --- /dev/null +++ b/backends/send-uucp.in @@ -0,0 +1,382 @@ +#!/usr/bin/perl -w +# fixscript will replace this line with code to load innshellvars + +############################################################################## +# send-uucp.pl create news batches from the outgoing files +# +# Author: Edvard Tuinder +# +# Copyright (C) 1994 Edvard Tuinder - ELM Consultancy B.V. +# Copyright (C) 1995-1997 Miquel van Smoorenburg - Cistron Internet Services +# +# Copyright (C) 2003 Marco d'Itri +# Nearly rewritten. Added syslog support, real errors checking and more. +# +# This program is free software; you can redistribute it and/or modify it +# under the terms of the GNU General Public License as published by the Free +# Software Foundation; either version 2 of the License, or (at your option) +# any later version. +############################################################################## + +use strict; +use Sys::Syslog; + +# for compatibility with earlier versions of INN +$inn::pathetc ||= '/etc/news'; +$inn::syslog_facility ||= 'news'; +$inn::uux ||= 'uux'; + +# some default values +my $MAXSIZE = 500000; +my $MAXJOBS = 200; + +my %UNBATCHER = ( + compress => 'cunbatch', + bzip2 => 'bunbatch', + gzip => 'gunbatch', +); + +my $UUX_FLAGS = '- -z -r -gd'; +my $BATCHER_FLAGS = ''; + +############################################################################## +my $config_file = $inn::pathetc . '/send-uucp.cf'; +my $lockfile = $inn::locks . 
'/LOCK.send-uucp'; + +openlog('send-uucp', 'pid', $inn::syslog_facility); + +my @sitelist; +if (@ARGV) { + foreach my $site (@ARGV) { + my @cfg = read_cf($config_file, $site); + if (not @cfg) { + logmsg("site $site not found in the configuration", 'err'); + next; + } + push @sitelist, @cfg; + } +} else { + @sitelist = read_cf($config_file, undef); +} + +if (not @sitelist) { + logmsg('nothing to do', 'debug'); + exit 0; +} + +chdir $inn::batch or logdie("Can't access $inn::batch: $!", 'crit'); + +shlock($lockfile); + +run_site($_) foreach @sitelist; +unlink $lockfile; +exit 0; + +# lint food +$inn::compress.$inn::locks.$inn::syslog_facility.$inn::have_uustat = 0 if 0; + +############################################################################## +sub read_cf { + my ($conf_file, $site_wanted) = @_; + + my $hour = (localtime time)[2]; + + my @sites; + open(CF, $conf_file) or logdie("cannot open $conf_file: $!", 'crit'); + while (<CF>) { + chop; + s/\s*\#.*$//; + next if /^$/; + + my ($sitespec, $compress, $size, $time) = split(/\s+/); + next if not $sitespec; + + my ($site, $host, $funnel) = split(/:/, $sitespec); + $host = $site if not $host; + $funnel = $site if not $funnel; + + $compress =~ s/_/ /g if $compress; + + if ($site_wanted) { + if ($site eq $site_wanted) { + push @sites, [$site, $host, $funnel, $compress, $size]; + last; + } + next; + } + + if ($time) { + foreach my $time (split(/,/, $time)) { + next if $time != $hour; + push @sites, [$site, $host, $funnel, $compress, $size]; + } + } else { + push @sites, [$site, $host, $funnel, $compress, $size]; + } + } + close CF; + return @sites; +} + +############################################################################## +# count number of jobs in the UUCP queue for a given site +sub count_jobs { + my ($site) = @_; + + return 0 if not $inn::have_uustat; + open(JOBS, "uustat -s $site 2> /dev/null |") or logdie("cannot fork: $!"); + my $count = grep(/ Executing rnews /, <JOBS>); + close JOBS; # ignore errors, uustat may fail + return $count; +} + +# select the rnews label appropriate for the compressor program used +sub unbatcher { + my ($compressor) = @_; + + $compressor =~ s%.*/%%; # Do not keep the complete path. + $compressor =~ s% .*%%; # Do not keep the optional parameters. + return $UNBATCHER{$compressor} || 'cunbatch'; +} + +############################################################################## +# batch articles for one site +sub run_site { + my ($cfg) = @_; + my ($site, $host, $funnel, $compress, $size) = @$cfg; + + logmsg("checking site $site", 'debug'); + my $maxjobs = ''; + if ($MAXJOBS) { + my $jobs = count_jobs($site); + if ($jobs >= $MAXJOBS) { + logmsg("too many jobs queued for $site"); + return; + } + $maxjobs = '-N ' . ($MAXJOBS - $jobs); + } + + $compress ||= $inn::compress; + $size ||= $MAXSIZE; + + # if a .work temp file left by a previous invocation exists, rename + # it to .work.tmp; we'll append it to the current batch file once it + # has been renamed and flushed.
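+    # Editorial sketch of the flow below, using the drinkel site from the
+    # EXAMPLE section of the man page (this comment is not from upstream):
+    #   1. a crashed earlier run left drinkel.work  -> stash it as drinkel.work.tmp
+    #   2. rename the live innd batch file drinkel  -> drinkel.work
+    #   3. ctlinnd flush drinkel (or its funnel) so innd reopens the batch
+    #   4. append drinkel.work.tmp back onto drinkel.work and hand the result
+    #      to batcher, which pipes each batch through the compressor (unless
+    #      'none') into uux for transmission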
+ if (-f "$site.work") { + rename("$site.work", "$site.work.tmp") + or logdie("cannot rename $site.work: $!", 'crit'); + } + + if (not -f $site and not -f "$site.work.tmp") { + logmsg("no batch file for site $site", 'err'); + return; + } + + rename($site, "$site.work") or logdie("cannot rename $site: $!", 'crit'); + logmsg("Flushing $funnel for site $site", 'debug'); + ctlinnd('-t120', 'flush', $funnel); + + # append the old .work temp file to the current batch file if needed + if (-f "$site.work.tmp") { + my $err = ''; + open(OUT, ">>$site.work") + or logdie("cannot open $site.work: $!", 'crit'); + open(IN, "$site.work.tmp") + or logdie("cannot open $site.work.tmp: $!", 'crit'); + print OUT while <IN>; + close IN; + close OUT or logdie("cannot close $site.work: $!"); + unlink "$site.work.tmp" + or logmsg("cannot delete $site.work.tmp: $!", 'err'); + } + + if (not -s "$site.work") { + logmsg("no articles for $site", 'debug'); + unlink "$site.work" or logmsg("cannot delete $site.work: $!", 'err'); + } else { + if ($compress eq 'none') { + system "batcher -b $size $maxjobs $BATCHER_FLAGS " + . "-p\"$inn::uux $UUX_FLAGS %s!rnews\" $host $site.work"; + } else { + system "batcher -b $size $maxjobs $BATCHER_FLAGS " + . "-p\"{ echo '#! " . unbatcher($compress) + . "' ; exec $compress; } | " + . "$inn::uux $UUX_FLAGS %s!rnews\" $host $site.work"; + } + logmsg("batched articles for $site", 'debug'); + } +} + +############################################################################## +sub logmsg { + my ($msg, $lvl) = @_; + + syslog($lvl || 'notice', '%s', $msg); +} + +sub logdie { + my ($msg, $lvl) = @_; + + logmsg($msg, $lvl || 'err'); + unlink $lockfile; + exit 1; +} + +sub ctlinnd { + my ($cmd, @args) = @_; + + my $st = system("$inn::newsbin/ctlinnd", '-s', $cmd, @args); + logdie('Cannot run ctlinnd: ' . $!) if $st == -1; + logdie('ctlinnd returned status ' . ($st & 255)) if $st > 0; +} + +sub shlock { + my $lockfile = shift; + + my $locktry = 0; + while ($locktry < 60) { + if (system("$inn::newsbin/shlock", '-p', $$, '-f', $lockfile) == 0) { + return 1; + } + $locktry++; + sleep 2; + } + + my $lockreason; + if (open(LOCKFILE, $lockfile)) { + $lockreason = 'held by ' . (<LOCKFILE> || '?'); + close LOCKFILE; + } else { + $lockreason = $!; + } + logdie("Cannot get lock $lockfile: $lockreason"); + return undef; +} + +__END__ + +=head1 NAME + +send-uucp - Send Usenet articles via UUCP + +=head1 SYNOPSIS + +B<send-uucp> [I<site> ...] + +=head1 DESCRIPTION + +The B<send-uucp> program processes batch files written by innd(8) to send +Usenet articles to UUCP sites. It reads a configuration file to control how +it behaves with various sites. Normally, it's run periodically out of cron +to put together batches and send them to remote UUCP sites. + +=head1 OPTIONS + +Any arguments provided to the program are interpreted as a list of sites +specified in F<send-uucp.cf> for which batches should be generated. If no +arguments are supplied then batches will be generated for all sites listed +in that configuration file. + +=head1 CONFIGURATION + +The sites to which articles are to be sent must be configured in the +configuration file F<send-uucp.cf>. Each site is specified with a line of +the form: + + site[:host[:funnel]] [compressor [maxsize [batchtime]]] + +=over 4 + +=item I<site> + +The news site name being configured. This must match a site name +from newsfeeds(5). + +=item I<host> + +The UUCP host name to which batches should be sent for this site. +If omitted, the news site name will be used as the UUCP host name.
+ +=item I<funnel> + +In the case of a site configured as a funnel, B<send-uucp> needs to flush +the channel (or exploder) being used as the target of the funnel instead of +flushing the site. This is the way to tell B<send-uucp> the name of the +channel or exploder to flush for this site. If not specified, it defaults to +flushing the site. + +=item I<compressor> + +The compression method to use for batches. This should be one of compress, +gzip or none. Arguments for the compression command may be specified by +using C<_> instead of spaces. For example, C<gzip_-9>. If the compressor is +omitted, the one configured in INN's innshellvars (C<$inn::compress>) is used. + +=item I<maxsize> + +The maximum size of a single batch before compression. The default value is +500,000 bytes. + +=item I<batchtime> + +A comma separated list of hours during which batches should be generated for +a given site. When B<send-uucp> runs, a site will only be processed if the +current hour matches one of the hours in I<batchtime>. The default is no +limitation on when to generate batches. + +=back + +Fields are separated by spaces and only the site name needs to be specified, +with defaults being used for unspecified values. If the first character on +a line is a C<#> then the rest of the line is ignored. + +=head1 EXAMPLE + +Here is an example send-uucp.cf configuration file: + + zoetermeer gzip 1048576 5,18,22 + hoofddorp gzip 1048576 5,18,22 + pa3ebv gzip 1048576 5,18,22 + drinkel gzip 1048576 5,6,18,20,22,0,2 + manhole compress 1048576 5,18,22 + owl compress 1048576 + able + pern::MYFUNNEL! + +This defines eight UUCP sites. The first four use gzip compression and the +last three use compress. The first six use a batch size of 1MB, and the +last site (able) uses the default of 500,000 bytes. The zoetermeer, +hoofddorp, pa3ebv, and manhole sites will only have batches generated for +them during the hours of 05:00, 18:00, and 22:00, and the drinkel site will +only have batches generated during those hours and 20:00, 00:00, and 02:00. +There are no restrictions on when batches will be generated for owl or able. + +The pern site is configured as a funnel into C<MYFUNNEL!>. B<send-uucp> will +issue C<ctlinnd flush MYFUNNEL!> instead of C<ctlinnd flush pern>. + +=head1 FILES + +=over 4 + +=item I<pathetc>/send-uucp.cf + +Configuration file specifying a list of sites to be processed. + +=back + +=head1 NOTES + +The usual flags used for a UUCP feed in the I<newsfeeds> file are C<Tf,Wnb>. + +=head1 SEE ALSO + +innd(8), newsfeeds(5), uucp(8) + +=head1 AUTHOR + +This program was originally written by Edvard Tuinder and then +maintained and extended by Miquel van Smoorenburg. +Marco d'Itri cleaned up the code for inclusion in INN. This +manual page was written by Mark Brown. + +=cut diff --git a/backends/sendinpaths.in b/backends/sendinpaths.in new file mode 100644 index 0000000..aedf451 --- /dev/null +++ b/backends/sendinpaths.in @@ -0,0 +1,44 @@ +#!/bin/sh +# fixscript will replace this line with code to load innshellvars +# +# Submit path statistics based on ninpaths +# $Id: sendinpaths.in 5854 2002-11-25 17:53:06Z rra $ + +# Assuming the ninpaths dump files are in ${MOST_LOGS}/path/inpaths.%d + +cd ${MOST_LOGS}/path +ME=`${NEWSBIN}/innconfval pathhost` +report=30 +keep=14 +TMP="" +defaddr="pathsurvey@top1000.org top1000@anthologeek.net" + +# Renice to give other processes priority, since this isn't too important. +renice 20 -p $$ > /dev/null + +# Make report from (up to) $report days of dumps +LOGS=`find . -name 'inpaths.*' ! -size 0 -mtime -$report -print` +if [ -z "$LOGS" ] ; then + echo "No data has been collected this month!" + exit 1 +fi + +# for check dumps +for i in $LOGS +do + ninpaths -u $i -r $ME > /dev/null 2>&1 + if test $? 
-eq 0; then : + TMP="$TMP -u $i" + fi +done + +if [ "$1" = "-n" ] ; then + ninpaths $TMP -r $ME +else + ninpaths $TMP -r $ME |\ + $MAILCMD -s "inpaths $ME" ${1:-$defaddr} + # remove dumps older than $keep days + find . -name 'inpaths.*' -mtime +$keep -exec rm '{}' \; +fi + +exit 0 diff --git a/backends/sendxbatches.in b/backends/sendxbatches.in new file mode 100644 index 0000000..ee8387b --- /dev/null +++ b/backends/sendxbatches.in @@ -0,0 +1,39 @@ +#! /bin/sh +# fixscript will replace this line with code to load innshellvars + +# $Id: sendxbatches.in 2674 1999-11-15 06:28:29Z rra $ +# By petri@ibr.cs.tu-bs.de with mods by libove@jerry.alf.dec.com +# +# Script to send xbatches for a site, wrapped around innxbatch +# Invocation: sendxbatches ... +# +## TODO: - we should check the amount of queued batches for the site, +## to prevent disk overflow due to unreachable sites. + +if [ $# -lt 3 ] +then + echo "usage: $0 " + exit 1 +fi + +LOCK=${LOCKS}/LOCK.sendxbatches +shlock -p $$ -f ${LOCK} +if [ $? -ne 0 ] +then + echo Locked by `cat ${LOCK}` + exit 1 +fi + +trap 'rm -f ${LOCK} ; exit 1' 1 2 3 15 +site="$1" +host="$2" +shift; shift + +ctlinnd -s flush "$site" +if [ $? -ne 0 ] +then + echo "ctlinnd flush $site failed." + exit 1 +fi +sleep 5 +$NEWSBIN/innxbatch -D -v "$host" $* diff --git a/backends/shlock.c b/backends/shlock.c new file mode 100644 index 0000000..bb4c503 --- /dev/null +++ b/backends/shlock.c @@ -0,0 +1,204 @@ +/* $Id: shlock.c 6124 2003-01-14 06:03:29Z rra $ +** +** Produce reliable locks for shell scripts, by Peter Honeyman as told +** to Rich $alz. +*/ + +#include "config.h" +#include "clibrary.h" +#include +#include +#include +#include + +#include "inn/messages.h" + + +static bool BinaryLock; + + +/* +** See if the process named in an existing lock still exists by +** sending it a null signal. +*/ +static bool +ValidLock(char *name, bool JustChecking) +{ + int fd; + int i; + pid_t pid; + char buff[BUFSIZ]; + + /* Open the file. */ + if ((fd = open(name, O_RDONLY)) < 0) { + if (JustChecking) + return false; + syswarn("cannot open %s", name); + return true; + } + + /* Read the PID that is written there. */ + if (BinaryLock) { + if (read(fd, (char *)&pid, sizeof pid) != sizeof pid) { + close(fd); + return false; + } + } + else { + if ((i = read(fd, buff, sizeof buff - 1)) <= 0) { + close(fd); + return false; + } + buff[i] = '\0'; + pid = (pid_t) atol(buff); + } + close(fd); + if (pid <= 0) + return false; + + /* Send the signal. */ + if (kill(pid, 0) < 0 && errno == ESRCH) + return false; + + /* Either the kill worked, or we're optimistic about the error code. */ + return true; +} + + +/* +** Unlink a file, print a message on error, and exit. +*/ +static void +UnlinkAndExit(char *name, int x) +{ + if (unlink(name) < 0) + syswarn("cannot unlink %s", name); + exit(x); +} + + +/* +** Print a usage message and exit. +*/ +static void +Usage(void) +{ + fprintf(stderr, "Usage: shlock [-u|-b] -f file -p pid\n"); + exit(1); +} + + +int +main(int ac, char *av[]) +{ + int i; + char *p; + int fd; + char tmp[BUFSIZ]; + char buff[BUFSIZ]; + char *name; + pid_t pid; + bool ok; + bool JustChecking; + + /* Establish our identity. */ + message_program_name = "shlock"; + + /* Set defaults. */ + pid = 0; + name = NULL; + JustChecking = false; + umask(NEWSUMASK); + + /* Parse JCL. 
*/ + while ((i = getopt(ac, av, "bcup:f:")) != EOF) + switch (i) { + default: + Usage(); + /* NOTREACHED */ + case 'b': + case 'u': + BinaryLock = true; + break; + case 'c': + JustChecking = true; + break; + case 'p': + pid = (pid_t) atol(optarg); + break; + case 'f': + name = optarg; + break; + } + ac -= optind; + av += optind; + if (ac || pid == 0 || name == NULL) + Usage(); + + /* Create the temp file in the same directory as the destination. */ + if ((p = strrchr(name, '/')) != NULL) { + *p = '\0'; + snprintf(tmp, sizeof(tmp), "%s/shlock%ld", name, (long)getpid()); + *p = '/'; + } + else + snprintf(tmp, sizeof(tmp), "shlock%ld", (long)getpid()); + + /* Loop until we can open the file. */ + while ((fd = open(tmp, O_RDWR | O_CREAT | O_EXCL, 0644)) < 0) + switch (errno) { + default: + /* Unknown error -- give up. */ + sysdie("cannot open %s", tmp); + case EEXIST: + /* If we can remove the old temporary, retry the open. */ + if (unlink(tmp) < 0) + sysdie("cannot unlink %s", tmp); + break; + } + + /* Write the process ID. */ + if (BinaryLock) + ok = write(fd, &pid, sizeof pid) == sizeof pid; + else { + snprintf(buff, sizeof(buff), "%ld\n", (long) pid); + i = strlen(buff); + ok = write(fd, buff, i) == i; + } + if (!ok) { + syswarn("cannot write PID to %s", tmp); + close(fd); + UnlinkAndExit(tmp, 1); + } + + close(fd); + + /* Handle the "-c" flag. */ + if (JustChecking) { + if (ValidLock(name, true)) + UnlinkAndExit(tmp, 1); + UnlinkAndExit(tmp, 0); + } + + /* Try to link the temporary to the lockfile. */ + while (link(tmp, name) < 0) + switch (errno) { + default: + /* Unknown error -- give up. */ + syswarn("cannot link %s to %s", tmp, name); + UnlinkAndExit(tmp, 1); + /* NOTREACHED */ + case EEXIST: + /* File exists; if lock is valid, give up. */ + if (ValidLock(name, false)) + UnlinkAndExit(tmp, 1); + if (unlink(name) < 0) { + syswarn("cannot unlink %s", name); + UnlinkAndExit(tmp, 1); + } + } + + UnlinkAndExit(tmp, 0); + /* NOTREACHED */ + return 1; +} diff --git a/backends/shrinkfile.c b/backends/shrinkfile.c new file mode 100644 index 0000000..7702ff2 --- /dev/null +++ b/backends/shrinkfile.c @@ -0,0 +1,390 @@ +/* $Id: shrinkfile.c 6135 2003-01-19 01:15:40Z rra $ +** +** Shrink files on line boundaries. +** +** Written by Landon Curt Noll , and placed in the +** public domain. Rewritten for INN by Rich Salz. +** +** Usage: +** shrinkfile [-n] [-s size [-m maxsize]] [-v] file... +** -n No writes, exit 0 if any file is too large, 1 otherwise +** -s size Truncation size (0 default); suffix may be k, m, +** or g to scale. Must not be larger than 2^31 - 1. +** -m maxsize Maximum size allowed before truncation. If maxsize +** <= size, then it is reset to size. Default == size. +** -v Print status line. +** +** Files will be shrunk an end of line boundary. In no case will the +** file be longer than size bytes if it was longer than maxsize bytes. +** If the first line is longer than the absolute value of size, the file +** will be truncated to zero length. +** +** The -n flag may be used to determine of any file is too large. No +** files will be altered in this mode. +*/ + +#include "config.h" +#include "clibrary.h" +#include +#include +#include +#include + +#include "inn/innconf.h" +#include "inn/messages.h" +#include "libinn.h" + +#define MAX_SIZE 0x7fffffffUL + + +/* +** Open a safe unique temporary file that will go away when closed. 
+*/ +static FILE * +OpenTemp(void) +{ + FILE *F; + char *filename; + int fd; + + filename = concatpath(innconf->pathtmp, "shrinkXXXXXX"); + fd = mkstemp(filename); + if (fd < 0) + sysdie("cannot create temporary file"); + F = fdopen(fd, "w+"); + if (F == NULL) + sysdie("cannot fdopen %s", filename); + unlink(filename); + free(filename); + return F; +} + + +/* +** Does file end with \n? Assume it does on I/O error, to avoid doing I/O. +*/ +static int +EndsWithNewline(FILE *F) +{ + int c; + + if (fseeko(F, 1, SEEK_END) < 0) { + syswarn("cannot seek to end of file"); + return true; + } + + /* return the actual character or EOF */ + if ((c = fgetc(F)) == EOF) { + if (ferror(F)) + syswarn("cannot read last byte"); + return true; + } + return c == '\n'; +} + + +/* +** Add a newline to location of a file. +*/ +static bool +AppendNewline(char *name) +{ + FILE *F; + + if ((F = xfopena(name)) == NULL) { + syswarn("cannot add newline"); + return false; + } + + if (fputc('\n', F) == EOF + || fflush(F) == EOF + || ferror(F) + || fclose(F) == EOF) { + syswarn("cannot add newline"); + return false; + } + + return true; +} + +/* +** Just check if it is too big +*/ +static bool +TooBig(FILE *F, off_t maxsize) +{ + struct stat Sb; + + /* Get the file's size. */ + if (fstat((int)fileno(F), &Sb) < 0) { + syswarn("cannot fstat"); + return false; + } + + /* return true if too large */ + return (maxsize > Sb.st_size ? false : true); +} + +/* +** This routine does all the work. +*/ +static bool +Process(FILE *F, char *name, off_t size, off_t maxsize, bool *Changedp) +{ + off_t len; + FILE *tmp; + struct stat Sb; + char buff[BUFSIZ + 1]; + int c; + size_t i; + bool err; + + /* Get the file's size. */ + if (fstat((int)fileno(F), &Sb) < 0) { + syswarn("cannot fstat"); + return false; + } + len = Sb.st_size; + + /* Process a zero size request. */ + if (size == 0 && len > maxsize) { + if (len > 0) { + fclose(F); + if ((F = fopen(name, "w")) == NULL) { + syswarn("cannot overwrite"); + return false; + } + fclose(F); + *Changedp = true; + } + return true; + } + + /* See if already small enough. */ + if (len <= maxsize) { + /* Newline already present? */ + if (EndsWithNewline(F)) { + fclose(F); + return true; + } + + /* No newline, add it if it fits. */ + if (len < size - 1) { + fclose(F); + *Changedp = true; + return AppendNewline(name); + } + } + else if (!EndsWithNewline(F)) { + if (!AppendNewline(name)) { + fclose(F); + return false; + } + } + + /* We now have a file that ends with a newline that is bigger than + * we want. Starting from {size} bytes from end, move forward + * until we get a newline. */ + if (fseeko(F, -size, SEEK_END) < 0) { + syswarn("cannot fseeko"); + fclose(F); + return false; + } + + while ((c = getc(F)) != '\n') + if (c == EOF) { + syswarn("cannot read"); + fclose(F); + return false; + } + + /* Copy rest of file to temp. */ + tmp = OpenTemp(); + err = false; + while ((i = fread(buff, 1, sizeof buff, F)) > 0) + if (fwrite(buff, 1, i, tmp) != i) { + err = true; + break; + } + if (err) { + syswarn("cannot copy to temporary file"); + fclose(F); + fclose(tmp); + return false; + } + + /* Now copy temp back to original file. 
*/ + fclose(F); + if ((F = fopen(name, "w")) == NULL) { + syswarn("cannot overwrite file"); + fclose(tmp); + return false; + } + fseeko(tmp, 0, SEEK_SET); + + while ((i = fread(buff, 1, sizeof buff, tmp)) > 0) + if (fwrite(buff, 1, i, F) != i) { + err = true; + break; + } + if (err) { + syswarn("cannot overwrite file"); + fclose(F); + fclose(tmp); + return false; + } + + fclose(F); + fclose(tmp); + *Changedp = true; + return true; +} + + +/* +** Convert size argument to numeric value. Return -1 on error. +*/ +static off_t +ParseSize(char *p) +{ + off_t scale; + unsigned long str_num; + char *q; + + /* Skip leading spaces */ + while (ISWHITE(*p)) + p++; + if (*p == '\0') + return -1; + + /* determine the scaling factor */ + q = &p[strlen(p) - 1]; + switch (*q) { + default: + return -1; + case '0': case '1': case '2': case '3': case '4': + case '5': case '6': case '7': case '8': case '9': + scale = 1; + break; + case 'k': case 'K': + scale = 1024; + *q = '\0'; + break; + case 'm': case 'M': + scale = 1024 * 1024; + *q = '\0'; + break; + case 'g': case 'G': + scale = 1024 * 1024 * 1024; + *q = '\0'; + break; + } + + /* Convert string to number. */ + if (sscanf(p, "%lud", &str_num) != 1) + return -1; + if (str_num > MAX_SIZE / scale) + die("size is too big"); + + return scale * str_num; +} + + +/* +** Print usage message and exit. +*/ +static void +Usage(void) +{ + fprintf(stderr, + "Usage: shrinkfile [-n] [ -m maxsize ] [-s size] [-v] file..."); + exit(1); +} + + +int +main(int ac, char *av[]) +{ + bool Changed; + bool Verbose; + bool no_op; + FILE *F; + char *p; + int i; + off_t size = 0; + off_t maxsize = 0; + + /* First thing, set up our identity. */ + message_program_name = "shrinkfile"; + + /* Set defaults. */ + Verbose = false; + no_op = false; + umask(NEWSUMASK); + + if (!innconf_read(NULL)) + exit(1); + + /* Parse JCL. */ + while ((i = getopt(ac, av, "m:s:vn")) != EOF) + switch (i) { + default: + Usage(); + /* NOTREACHED */ + case 'n': + no_op = true; + break; + case 'm': + if ((maxsize = ParseSize(optarg)) < 0) + Usage(); + break; + case 's': + if ((size = ParseSize(optarg)) < 0) + Usage(); + break; + case 'v': + Verbose = true; + break; + } + if (maxsize < size) { + maxsize = size; + } + ac -= optind; + av += optind; + if (ac == 0) + Usage(); + + while ((p = *av++) != NULL) { + if ((F = fopen(p, "r")) == NULL) { + syswarn("cannot open %s", p); + continue; + } + + /* -n (no_op) or normal processing */ + if (no_op) { + + /* check if too big and exit zero if it is */ + if (TooBig(F, maxsize)) { + if (Verbose) + notice("%s is too large", p); + exit(0); + /* NOTREACHED */ + } + + /* no -n, do some real work */ + } else { + Changed = false; + if (!Process(F, p, size, maxsize, &Changed)) + syswarn("cannot shrink %s", p); + else if (Verbose && Changed) + notice("shrunk %s", p); + } + } + if (no_op && Verbose) { + notice("did not find a file that was too large"); + } + + /* if -n, then exit non-zero to indicate no file too big */ + exit(no_op ? 1 : 0); + /* NOTREACHED */ +} diff --git a/configure b/configure new file mode 100755 index 0000000..c4cf14e --- /dev/null +++ b/configure @@ -0,0 +1,11907 @@ +#! /bin/sh + +# From configure.in Revision: 7466 +# libtool.m4 - Configure libtool for the host system. -*-Shell-script-*- +## Copyright 1996, 1997, 1998, 1999, 2000, 2001 +## Free Software Foundation, Inc. 
+## Originally by Gordon Matzigkeit , 1996 +## +## This program is free software; you can redistribute it and/or modify +## it under the terms of the GNU General Public License as published by +## the Free Software Foundation; either version 2 of the License, or +## (at your option) any later version. +## +## This program is distributed in the hope that it will be useful, but +## WITHOUT ANY WARRANTY; without even the implied warranty of +## MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU +## General Public License for more details. +## +## You should have received a copy of the GNU General Public License +## along with this program; if not, write to the Free Software +## Foundation, Inc., 59 Temple Place - Suite 330, Boston, MA 02111-1307, USA. +## +## As a special exception to the GNU General Public License, if you +## distribute this file as part of a program that contains a +## configuration script generated by Autoconf, you may include it under +## the same distribution terms that you use for the rest of that program. + +# serial 46 AC_PROG_LIBTOOL + + + + + +# AC_LIBTOOL_HEADER_ASSERT +# ------------------------ +# AC_LIBTOOL_HEADER_ASSERT + +# _LT_AC_CHECK_DLFCN +# -------------------- +# _LT_AC_CHECK_DLFCN + +# AC_LIBTOOL_SYS_GLOBAL_SYMBOL_PIPE +# --------------------------------- + # AC_LIBTOOL_SYS_GLOBAL_SYMBOL_PIPE + +# _LT_AC_LIBTOOL_SYS_PATH_SEPARATOR +# --------------------------------- +# _LT_AC_LIBTOOL_SYS_PATH_SEPARATOR + +# _LT_AC_PROG_ECHO_BACKSLASH +# -------------------------- +# Add some code to the start of the generated configure script which +# will find an echo command which doesn't interpret backslashes. +# _LT_AC_PROG_ECHO_BACKSLASH + +# _LT_AC_TRY_DLOPEN_SELF (ACTION-IF-TRUE, ACTION-IF-TRUE-W-USCORE, +# ACTION-IF-FALSE, ACTION-IF-CROSS-COMPILING) +# ------------------------------------------------------------------ +# _LT_AC_TRY_DLOPEN_SELF + +# AC_LIBTOOL_DLOPEN_SELF +# ------------------- +# AC_LIBTOOL_DLOPEN_SELF + +# _LT_AC_LTCONFIG_HACK + +# AC_LIBTOOL_DLOPEN - enable checks for dlopen support + + +# AC_LIBTOOL_WIN32_DLL - declare package support for building win32 dll's + + +# AC_ENABLE_SHARED - implement the --enable-shared flag +# Usage: AC_ENABLE_SHARED[(DEFAULT)] +# Where DEFAULT is either `yes' or `no'. If omitted, it defaults to +# `yes'. + + +# AC_DISABLE_SHARED - set the default shared flag to --disable-shared + + +# AC_ENABLE_STATIC - implement the --enable-static flag +# Usage: AC_ENABLE_STATIC[(DEFAULT)] +# Where DEFAULT is either `yes' or `no'. If omitted, it defaults to +# `yes'. + + +# AC_DISABLE_STATIC - set the default static flag to --disable-static + + + +# AC_ENABLE_FAST_INSTALL - implement the --enable-fast-install flag +# Usage: AC_ENABLE_FAST_INSTALL[(DEFAULT)] +# Where DEFAULT is either `yes' or `no'. If omitted, it defaults to +# `yes'. + + +# AC_DISABLE_FAST_INSTALL - set the default to --disable-fast-install + + +# AC_LIBTOOL_PICMODE - implement the --with-pic flag +# Usage: AC_LIBTOOL_PICMODE[(MODE)] +# Where MODE is either `yes' or `no'. If omitted, it defaults to +# `both'. + + + +# AC_PATH_TOOL_PREFIX - find a file program which can recognise shared library + + + +# AC_PATH_MAGIC - find a file program which can recognise a shared library + + + +# AC_PROG_LD - find the path to the GNU or non-GNU linker + + +# AC_PROG_LD_GNU - + + +# AC_PROG_LD_RELOAD_FLAG - find reload flag for linker +# -- PORTME Some linkers may need a different reload flag. 
+ + +# AC_DEPLIBS_CHECK_METHOD - how to check for library dependencies +# -- PORTME fill in with the dynamic library characteristics + + + +# AC_PROG_NM - find the path to a BSD-compatible name lister + + +# AC_CHECK_LIBM - check for math library + + +# AC_LIBLTDL_CONVENIENCE[(dir)] - sets LIBLTDL to the link flags for +# the libltdl convenience library and INCLTDL to the include flags for +# the libltdl header and adds --enable-ltdl-convenience to the +# configure arguments. Note that LIBLTDL and INCLTDL are not +# AC_SUBSTed, nor is AC_CONFIG_SUBDIRS called. If DIR is not +# provided, it is assumed to be `libltdl'. LIBLTDL will be prefixed +# with '${top_builddir}/' and INCLTDL will be prefixed with +# '${top_srcdir}/' (note the single quotes!). If your package is not +# flat and you're not using automake, define top_builddir and +# top_srcdir appropriately in the Makefiles. + + +# AC_LIBLTDL_INSTALLABLE[(dir)] - sets LIBLTDL to the link flags for +# the libltdl installable library and INCLTDL to the include flags for +# the libltdl header and adds --enable-ltdl-install to the configure +# arguments. Note that LIBLTDL and INCLTDL are not AC_SUBSTed, nor is +# AC_CONFIG_SUBDIRS called. If DIR is not provided and an installed +# libltdl is not found, it is assumed to be `libltdl'. LIBLTDL will +# be prefixed with '${top_builddir}/' and INCLTDL will be prefixed +# with '${top_srcdir}/' (note the single quotes!). If your package is +# not flat and you're not using automake, define top_builddir and +# top_srcdir appropriately in the Makefiles. +# In the future, this macro may have to be called after AC_PROG_LIBTOOL. + + +# old names + + + + + + + + +# This is just to silence aclocal about the macro not being used + +# Guess values for system-dependent variables and create Makefiles. +# Generated automatically using autoconf version 2.13 +# Copyright (C) 1992, 93, 94, 95, 96 Free Software Foundation, Inc. +# +# This configure script is free software; the Free Software Foundation +# gives unlimited permission to copy, distribute and modify it. + +# Defaults: +ac_help= +ac_default_prefix=/usr/local +# Any additions from configure.in: +ac_default_prefix=/usr/local/news +ac_help="$ac_help + --enable-libtool Use libtool for lib generation [default=no]" +ac_help="$ac_help + --enable-shared[=PKGS] build shared libraries [default=yes]" +ac_help="$ac_help + --enable-static[=PKGS] build static libraries [default=yes]" +ac_help="$ac_help + --enable-fast-install[=PKGS] optimize for fast installation [default=yes]" +ac_help="$ac_help + --with-gnu-ld assume the C compiler uses GNU ld [default=no]" + +# Find the correct PATH separator. Usually this is `:', but +# DJGPP uses `;' like DOS. +if test "X${PATH_SEPARATOR+set}" != Xset; then + UNAME=${UNAME-`uname 2>/dev/null`} + case X$UNAME in + *-DOS) lt_cv_sys_path_separator=';' ;; + *) lt_cv_sys_path_separator=':' ;; + esac + PATH_SEPARATOR=$lt_cv_sys_path_separator +fi + + +# Check that we are running under the correct shell. +SHELL=${CONFIG_SHELL-/bin/sh} + +case X$ECHO in +X*--fallback-echo) + # Remove one level of quotation (which was required for Make). + ECHO=`echo "$ECHO" | sed 's,\\\\\$\\$0,'$0','` + ;; +esac + +echo=${ECHO-echo} +if test "X$1" = X--no-reexec; then + # Discard the --no-reexec flag, and continue. + shift +elif test "X$1" = X--fallback-echo; then + # Avoid inline document here, it may be left over + : +elif test "X`($echo '\t') 2>/dev/null`" = 'X\t'; then + # Yippee, $echo works! + : +else + # Restart under the correct shell. 
+ exec $SHELL "$0" --no-reexec ${1+"$@"} +fi + +if test "X$1" = X--fallback-echo; then + # used as fallback echo + shift + cat </dev/null && + echo_test_string="`eval $cmd`" && + (test "X$echo_test_string" = "X$echo_test_string") 2>/dev/null + then + break + fi + done +fi + +if test "X`($echo '\t') 2>/dev/null`" = 'X\t' && + echo_testing_string=`($echo "$echo_test_string") 2>/dev/null` && + test "X$echo_testing_string" = "X$echo_test_string"; then + : +else + # The Solaris, AIX, and Digital Unix default echo programs unquote + # backslashes. This makes it impossible to quote backslashes using + # echo "$something" | sed 's/\\/\\\\/g' + # + # So, first we look for a working echo in the user's PATH. + + IFS="${IFS= }"; save_ifs="$IFS"; IFS=$PATH_SEPARATOR + for dir in $PATH /usr/ucb; do + if (test -f $dir/echo || test -f $dir/echo$ac_exeext) && + test "X`($dir/echo '\t') 2>/dev/null`" = 'X\t' && + echo_testing_string=`($dir/echo "$echo_test_string") 2>/dev/null` && + test "X$echo_testing_string" = "X$echo_test_string"; then + echo="$dir/echo" + break + fi + done + IFS="$save_ifs" + + if test "X$echo" = Xecho; then + # We didn't find a better echo, so look for alternatives. + if test "X`(print -r '\t') 2>/dev/null`" = 'X\t' && + echo_testing_string=`(print -r "$echo_test_string") 2>/dev/null` && + test "X$echo_testing_string" = "X$echo_test_string"; then + # This shell has a builtin print -r that does the trick. + echo='print -r' + elif (test -f /bin/ksh || test -f /bin/ksh$ac_exeext) && + test "X$CONFIG_SHELL" != X/bin/ksh; then + # If we have ksh, try running configure again with it. + ORIGINAL_CONFIG_SHELL=${CONFIG_SHELL-/bin/sh} + export ORIGINAL_CONFIG_SHELL + CONFIG_SHELL=/bin/ksh + export CONFIG_SHELL + exec $CONFIG_SHELL "$0" --no-reexec ${1+"$@"} + else + # Try using printf. + echo='printf %s\n' + if test "X`($echo '\t') 2>/dev/null`" = 'X\t' && + echo_testing_string=`($echo "$echo_test_string") 2>/dev/null` && + test "X$echo_testing_string" = "X$echo_test_string"; then + # Cool, printf works + : + elif echo_testing_string=`($ORIGINAL_CONFIG_SHELL "$0" --fallback-echo '\t') 2>/dev/null` && + test "X$echo_testing_string" = 'X\t' && + echo_testing_string=`($ORIGINAL_CONFIG_SHELL "$0" --fallback-echo "$echo_test_string") 2>/dev/null` && + test "X$echo_testing_string" = "X$echo_test_string"; then + CONFIG_SHELL=$ORIGINAL_CONFIG_SHELL + export CONFIG_SHELL + SHELL="$CONFIG_SHELL" + export SHELL + echo="$CONFIG_SHELL $0 --fallback-echo" + elif echo_testing_string=`($CONFIG_SHELL "$0" --fallback-echo '\t') 2>/dev/null` && + test "X$echo_testing_string" = 'X\t' && + echo_testing_string=`($CONFIG_SHELL "$0" --fallback-echo "$echo_test_string") 2>/dev/null` && + test "X$echo_testing_string" = "X$echo_test_string"; then + echo="$CONFIG_SHELL $0 --fallback-echo" + else + # maybe with a smaller string... + prev=: + + for cmd in 'echo test' 'sed 2q "$0"' 'sed 10q "$0"' 'sed 20q "$0"' 'sed 50q "$0"'; do + if (test "X$echo_test_string" = "X`eval $cmd`") 2>/dev/null + then + break + fi + prev="$cmd" + done + + if test "$prev" != 'sed 50q "$0"'; then + echo_test_string=`eval $prev` + export echo_test_string + exec ${ORIGINAL_CONFIG_SHELL-${CONFIG_SHELL-/bin/sh}} "$0" ${1+"$@"} + else + # Oops. We lost completely, so just stick with echo. + echo=echo + fi + fi + fi + fi +fi +fi + +# Copy echo and quote the copy suitably for passing to libtool from +# the Makefile, instead of quoting the original, which is used later. 
+ECHO=$echo +if test "X$ECHO" = "X$CONFIG_SHELL $0 --fallback-echo"; then + ECHO="$CONFIG_SHELL \\\$\$0 --fallback-echo" +fi + + +ac_help="$ac_help + --disable-libtool-lock avoid locking (might break parallel builds)" +ac_help="$ac_help + --with-pic try to use only PIC/non-PIC objects [default=use both]" +ac_help="$ac_help + --with-control-dir=PATH Path for control programs [PREFIX/bin/control]" +ac_help="$ac_help + --with-db-dir=PATH Path for news database files [PREFIX/db]" +ac_help="$ac_help + --with-doc-dir=PATH Path for news documentation [PREFIX/doc]" +ac_help="$ac_help + --with-etc-dir=PATH Path for news config files [PREFIX/etc]" +ac_help="$ac_help + --with-filter-dir=PATH Path for embedded filters [PREFIX/bin/filter]" +ac_help="$ac_help + --with-lib-dir=PATH Path for news library files [PREFIX/lib]" +ac_help="$ac_help + --with-log-dir=PATH Path for news logs [PREFIX/log]" +ac_help="$ac_help + --with-run-dir=PATH Path for news PID/runtime files [PREFIX/run]" +ac_help="$ac_help + --with-spool-dir=PATH Path for news storage [PREFIX/spool]" +ac_help="$ac_help + --with-tmp-dir=PATH Path for temporary files [PREFIX/tmp]" +ac_help="$ac_help + --with-syslog-facility=LOG_FAC Syslog facility [LOG_NEWS or LOG_LOCAL1]" +ac_help="$ac_help + --with-news-user=USER News user name [news]" +ac_help="$ac_help + --with-news-group=GROUP News group name [news]" +ac_help="$ac_help + --with-news-master=USER News master (address for reports) [usenet]" +ac_help="$ac_help + --with-news-umask=UMASK umask for news files [002]" +ac_help="$ac_help + --enable-setgid-inews Install inews setgid" +ac_help="$ac_help + --enable-uucp-rnews Install rnews setuid, group uucp" +ac_help="$ac_help + --with-log-compress=METHOD Log compression method [gzip]" +ac_help="$ac_help + --with-innd-port=PORT Additional low-numbered port for inndstart" +ac_help="$ac_help + --enable-ipv6 Enable IPv6 support" +ac_help="$ac_help + --with-max-sockets=N Maximum number of listening sockets in innd" +ac_help="$ac_help + --enable-tagged-hash Use tagged hash table for history" +ac_help="$ac_help + --enable-keywords Automatic keyword generation support" +ac_help="$ac_help + --enable-largefiles Support for files larger than 2GB [default=no]" +ac_help="$ac_help + --with-sendmail=PATH Path to sendmail" +ac_help="$ac_help + --with-kerberos=PATH Path to Kerberos v5 (for auth_krb5)" +ac_help="$ac_help + --with-perl Embedded Perl script support [default=no]" +ac_help="$ac_help + --with-python Embedded Python module support [default=no]" +ac_help="$ac_help + --with-berkeleydb[=PATH] Enable BerkeleyDB (for ovdb overview method)" +ac_help="$ac_help + --with-openssl=PATH Enable OpenSSL (for NNTP over SSL support)" +ac_help="$ac_help + --with-sasl=PATH Enable SASL (for imapfeed authentication)" + +# Initialize some variables set by options. +# The variables have the same names as the options, with +# dashes changed to underlines. 
+build=NONE +cache_file=./config.cache +exec_prefix=NONE +host=NONE +no_create= +nonopt=NONE +no_recursion= +prefix=NONE +program_prefix=NONE +program_suffix=NONE +program_transform_name=s,x,x, +silent= +site= +srcdir= +target=NONE +verbose= +x_includes=NONE +x_libraries=NONE +bindir='${exec_prefix}/bin' +sbindir='${exec_prefix}/sbin' +libexecdir='${exec_prefix}/libexec' +datadir='${prefix}/share' +sysconfdir='${prefix}/etc' +sharedstatedir='${prefix}/com' +localstatedir='${prefix}/var' +libdir='${exec_prefix}/lib' +includedir='${prefix}/include' +oldincludedir='/usr/include' +infodir='${prefix}/info' +mandir='${prefix}/man' + +# Initialize some other variables. +subdirs= +MFLAGS= MAKEFLAGS= +SHELL=${CONFIG_SHELL-/bin/sh} +# Maximum number of lines to put in a shell here document. +ac_max_here_lines=12 + +ac_prev= +for ac_option +do + + # If the previous option needs an argument, assign it. + if test -n "$ac_prev"; then + eval "$ac_prev=\$ac_option" + ac_prev= + continue + fi + + case "$ac_option" in + -*=*) ac_optarg=`echo "$ac_option" | sed 's/[-_a-zA-Z0-9]*=//'` ;; + *) ac_optarg= ;; + esac + + # Accept the important Cygnus configure options, so we can diagnose typos. + + case "$ac_option" in + + -bindir | --bindir | --bindi | --bind | --bin | --bi) + ac_prev=bindir ;; + -bindir=* | --bindir=* | --bindi=* | --bind=* | --bin=* | --bi=*) + bindir="$ac_optarg" ;; + + -build | --build | --buil | --bui | --bu) + ac_prev=build ;; + -build=* | --build=* | --buil=* | --bui=* | --bu=*) + build="$ac_optarg" ;; + + -cache-file | --cache-file | --cache-fil | --cache-fi \ + | --cache-f | --cache- | --cache | --cach | --cac | --ca | --c) + ac_prev=cache_file ;; + -cache-file=* | --cache-file=* | --cache-fil=* | --cache-fi=* \ + | --cache-f=* | --cache-=* | --cache=* | --cach=* | --cac=* | --ca=* | --c=*) + cache_file="$ac_optarg" ;; + + -datadir | --datadir | --datadi | --datad | --data | --dat | --da) + ac_prev=datadir ;; + -datadir=* | --datadir=* | --datadi=* | --datad=* | --data=* | --dat=* \ + | --da=*) + datadir="$ac_optarg" ;; + + -disable-* | --disable-*) + ac_feature=`echo $ac_option|sed -e 's/-*disable-//'` + # Reject names that are not valid shell variable names. + if test -n "`echo $ac_feature| sed 's/[-a-zA-Z0-9_]//g'`"; then + { echo "configure: error: $ac_feature: invalid feature name" 1>&2; exit 1; } + fi + ac_feature=`echo $ac_feature| sed 's/-/_/g'` + eval "enable_${ac_feature}=no" ;; + + -enable-* | --enable-*) + ac_feature=`echo $ac_option|sed -e 's/-*enable-//' -e 's/=.*//'` + # Reject names that are not valid shell variable names. + if test -n "`echo $ac_feature| sed 's/[-_a-zA-Z0-9]//g'`"; then + { echo "configure: error: $ac_feature: invalid feature name" 1>&2; exit 1; } + fi + ac_feature=`echo $ac_feature| sed 's/-/_/g'` + case "$ac_option" in + *=*) ;; + *) ac_optarg=yes ;; + esac + eval "enable_${ac_feature}='$ac_optarg'" ;; + + -exec-prefix | --exec_prefix | --exec-prefix | --exec-prefi \ + | --exec-pref | --exec-pre | --exec-pr | --exec-p | --exec- \ + | --exec | --exe | --ex) + ac_prev=exec_prefix ;; + -exec-prefix=* | --exec_prefix=* | --exec-prefix=* | --exec-prefi=* \ + | --exec-pref=* | --exec-pre=* | --exec-pr=* | --exec-p=* | --exec-=* \ + | --exec=* | --exe=* | --ex=*) + exec_prefix="$ac_optarg" ;; + + -gas | --gas | --ga | --g) + # Obsolete; use --with-gas. + with_gas=yes ;; + + -help | --help | --hel | --he) + # Omit some internal or obsolete options to make the list less imposing. + # This message is too long to be a string in the A/UX 3.1 sh. 
+ cat << EOF +Usage: configure [options] [host] +Options: [defaults in brackets after descriptions] +Configuration: + --cache-file=FILE cache test results in FILE + --help print this message + --no-create do not create output files + --quiet, --silent do not print \`checking...' messages + --version print the version of autoconf that created configure +Directory and file names: + --prefix=PREFIX install architecture-independent files in PREFIX + [$ac_default_prefix] + --exec-prefix=EPREFIX install architecture-dependent files in EPREFIX + [same as prefix] + --bindir=DIR user executables in DIR [EPREFIX/bin] + --sbindir=DIR system admin executables in DIR [EPREFIX/sbin] + --libexecdir=DIR program executables in DIR [EPREFIX/libexec] + --datadir=DIR read-only architecture-independent data in DIR + [PREFIX/share] + --sysconfdir=DIR read-only single-machine data in DIR [PREFIX/etc] + --sharedstatedir=DIR modifiable architecture-independent data in DIR + [PREFIX/com] + --localstatedir=DIR modifiable single-machine data in DIR [PREFIX/var] + --libdir=DIR object code libraries in DIR [EPREFIX/lib] + --includedir=DIR C header files in DIR [PREFIX/include] + --oldincludedir=DIR C header files for non-gcc in DIR [/usr/include] + --infodir=DIR info documentation in DIR [PREFIX/info] + --mandir=DIR man documentation in DIR [PREFIX/man] + --srcdir=DIR find the sources in DIR [configure dir or ..] + --program-prefix=PREFIX prepend PREFIX to installed program names + --program-suffix=SUFFIX append SUFFIX to installed program names + --program-transform-name=PROGRAM + run sed PROGRAM on installed program names +EOF + cat << EOF +Host type: + --build=BUILD configure for building on BUILD [BUILD=HOST] + --host=HOST configure for HOST [guessed] + --target=TARGET configure for TARGET [TARGET=HOST] +Features and packages: + --disable-FEATURE do not include FEATURE (same as --enable-FEATURE=no) + --enable-FEATURE[=ARG] include FEATURE [ARG=yes] + --with-PACKAGE[=ARG] use PACKAGE [ARG=yes] + --without-PACKAGE do not use PACKAGE (same as --with-PACKAGE=no) + --x-includes=DIR X include files are in DIR + --x-libraries=DIR X library files are in DIR +EOF + if test -n "$ac_help"; then + echo "--enable and --with options recognized:$ac_help" + fi + exit 0 ;; + + -host | --host | --hos | --ho) + ac_prev=host ;; + -host=* | --host=* | --hos=* | --ho=*) + host="$ac_optarg" ;; + + -includedir | --includedir | --includedi | --included | --include \ + | --includ | --inclu | --incl | --inc) + ac_prev=includedir ;; + -includedir=* | --includedir=* | --includedi=* | --included=* | --include=* \ + | --includ=* | --inclu=* | --incl=* | --inc=*) + includedir="$ac_optarg" ;; + + -infodir | --infodir | --infodi | --infod | --info | --inf) + ac_prev=infodir ;; + -infodir=* | --infodir=* | --infodi=* | --infod=* | --info=* | --inf=*) + infodir="$ac_optarg" ;; + + -libdir | --libdir | --libdi | --libd) + ac_prev=libdir ;; + -libdir=* | --libdir=* | --libdi=* | --libd=*) + libdir="$ac_optarg" ;; + + -libexecdir | --libexecdir | --libexecdi | --libexecd | --libexec \ + | --libexe | --libex | --libe) + ac_prev=libexecdir ;; + -libexecdir=* | --libexecdir=* | --libexecdi=* | --libexecd=* | --libexec=* \ + | --libexe=* | --libex=* | --libe=*) + libexecdir="$ac_optarg" ;; + + -localstatedir | --localstatedir | --localstatedi | --localstated \ + | --localstate | --localstat | --localsta | --localst \ + | --locals | --local | --loca | --loc | --lo) + ac_prev=localstatedir ;; + -localstatedir=* | --localstatedir=* | --localstatedi=* | 
--localstated=* \ + | --localstate=* | --localstat=* | --localsta=* | --localst=* \ + | --locals=* | --local=* | --loca=* | --loc=* | --lo=*) + localstatedir="$ac_optarg" ;; + + -mandir | --mandir | --mandi | --mand | --man | --ma | --m) + ac_prev=mandir ;; + -mandir=* | --mandir=* | --mandi=* | --mand=* | --man=* | --ma=* | --m=*) + mandir="$ac_optarg" ;; + + -nfp | --nfp | --nf) + # Obsolete; use --without-fp. + with_fp=no ;; + + -no-create | --no-create | --no-creat | --no-crea | --no-cre \ + | --no-cr | --no-c) + no_create=yes ;; + + -no-recursion | --no-recursion | --no-recursio | --no-recursi \ + | --no-recurs | --no-recur | --no-recu | --no-rec | --no-re | --no-r) + no_recursion=yes ;; + + -oldincludedir | --oldincludedir | --oldincludedi | --oldincluded \ + | --oldinclude | --oldinclud | --oldinclu | --oldincl | --oldinc \ + | --oldin | --oldi | --old | --ol | --o) + ac_prev=oldincludedir ;; + -oldincludedir=* | --oldincludedir=* | --oldincludedi=* | --oldincluded=* \ + | --oldinclude=* | --oldinclud=* | --oldinclu=* | --oldincl=* | --oldinc=* \ + | --oldin=* | --oldi=* | --old=* | --ol=* | --o=*) + oldincludedir="$ac_optarg" ;; + + -prefix | --prefix | --prefi | --pref | --pre | --pr | --p) + ac_prev=prefix ;; + -prefix=* | --prefix=* | --prefi=* | --pref=* | --pre=* | --pr=* | --p=*) + prefix="$ac_optarg" ;; + + -program-prefix | --program-prefix | --program-prefi | --program-pref \ + | --program-pre | --program-pr | --program-p) + ac_prev=program_prefix ;; + -program-prefix=* | --program-prefix=* | --program-prefi=* \ + | --program-pref=* | --program-pre=* | --program-pr=* | --program-p=*) + program_prefix="$ac_optarg" ;; + + -program-suffix | --program-suffix | --program-suffi | --program-suff \ + | --program-suf | --program-su | --program-s) + ac_prev=program_suffix ;; + -program-suffix=* | --program-suffix=* | --program-suffi=* \ + | --program-suff=* | --program-suf=* | --program-su=* | --program-s=*) + program_suffix="$ac_optarg" ;; + + -program-transform-name | --program-transform-name \ + | --program-transform-nam | --program-transform-na \ + | --program-transform-n | --program-transform- \ + | --program-transform | --program-transfor \ + | --program-transfo | --program-transf \ + | --program-trans | --program-tran \ + | --progr-tra | --program-tr | --program-t) + ac_prev=program_transform_name ;; + -program-transform-name=* | --program-transform-name=* \ + | --program-transform-nam=* | --program-transform-na=* \ + | --program-transform-n=* | --program-transform-=* \ + | --program-transform=* | --program-transfor=* \ + | --program-transfo=* | --program-transf=* \ + | --program-trans=* | --program-tran=* \ + | --progr-tra=* | --program-tr=* | --program-t=*) + program_transform_name="$ac_optarg" ;; + + -q | -quiet | --quiet | --quie | --qui | --qu | --q \ + | -silent | --silent | --silen | --sile | --sil) + silent=yes ;; + + -sbindir | --sbindir | --sbindi | --sbind | --sbin | --sbi | --sb) + ac_prev=sbindir ;; + -sbindir=* | --sbindir=* | --sbindi=* | --sbind=* | --sbin=* \ + | --sbi=* | --sb=*) + sbindir="$ac_optarg" ;; + + -sharedstatedir | --sharedstatedir | --sharedstatedi \ + | --sharedstated | --sharedstate | --sharedstat | --sharedsta \ + | --sharedst | --shareds | --shared | --share | --shar \ + | --sha | --sh) + ac_prev=sharedstatedir ;; + -sharedstatedir=* | --sharedstatedir=* | --sharedstatedi=* \ + | --sharedstated=* | --sharedstate=* | --sharedstat=* | --sharedsta=* \ + | --sharedst=* | --shareds=* | --shared=* | --share=* | --shar=* \ + | --sha=* | --sh=*) + 
sharedstatedir="$ac_optarg" ;; + + -site | --site | --sit) + ac_prev=site ;; + -site=* | --site=* | --sit=*) + site="$ac_optarg" ;; + + -srcdir | --srcdir | --srcdi | --srcd | --src | --sr) + ac_prev=srcdir ;; + -srcdir=* | --srcdir=* | --srcdi=* | --srcd=* | --src=* | --sr=*) + srcdir="$ac_optarg" ;; + + -sysconfdir | --sysconfdir | --sysconfdi | --sysconfd | --sysconf \ + | --syscon | --sysco | --sysc | --sys | --sy) + ac_prev=sysconfdir ;; + -sysconfdir=* | --sysconfdir=* | --sysconfdi=* | --sysconfd=* | --sysconf=* \ + | --syscon=* | --sysco=* | --sysc=* | --sys=* | --sy=*) + sysconfdir="$ac_optarg" ;; + + -target | --target | --targe | --targ | --tar | --ta | --t) + ac_prev=target ;; + -target=* | --target=* | --targe=* | --targ=* | --tar=* | --ta=* | --t=*) + target="$ac_optarg" ;; + + -v | -verbose | --verbose | --verbos | --verbo | --verb) + verbose=yes ;; + + -version | --version | --versio | --versi | --vers) + echo "configure generated by autoconf version 2.13" + exit 0 ;; + + -with-* | --with-*) + ac_package=`echo $ac_option|sed -e 's/-*with-//' -e 's/=.*//'` + # Reject names that are not valid shell variable names. + if test -n "`echo $ac_package| sed 's/[-_a-zA-Z0-9]//g'`"; then + { echo "configure: error: $ac_package: invalid package name" 1>&2; exit 1; } + fi + ac_package=`echo $ac_package| sed 's/-/_/g'` + case "$ac_option" in + *=*) ;; + *) ac_optarg=yes ;; + esac + eval "with_${ac_package}='$ac_optarg'" ;; + + -without-* | --without-*) + ac_package=`echo $ac_option|sed -e 's/-*without-//'` + # Reject names that are not valid shell variable names. + if test -n "`echo $ac_package| sed 's/[-a-zA-Z0-9_]//g'`"; then + { echo "configure: error: $ac_package: invalid package name" 1>&2; exit 1; } + fi + ac_package=`echo $ac_package| sed 's/-/_/g'` + eval "with_${ac_package}=no" ;; + + --x) + # Obsolete; use --with-x. + with_x=yes ;; + + -x-includes | --x-includes | --x-include | --x-includ | --x-inclu \ + | --x-incl | --x-inc | --x-in | --x-i) + ac_prev=x_includes ;; + -x-includes=* | --x-includes=* | --x-include=* | --x-includ=* | --x-inclu=* \ + | --x-incl=* | --x-inc=* | --x-in=* | --x-i=*) + x_includes="$ac_optarg" ;; + + -x-libraries | --x-libraries | --x-librarie | --x-librari \ + | --x-librar | --x-libra | --x-libr | --x-lib | --x-li | --x-l) + ac_prev=x_libraries ;; + -x-libraries=* | --x-libraries=* | --x-librarie=* | --x-librari=* \ + | --x-librar=* | --x-libra=* | --x-libr=* | --x-lib=* | --x-li=* | --x-l=*) + x_libraries="$ac_optarg" ;; + + -*) { echo "configure: error: $ac_option: invalid option; use --help to show usage" 1>&2; exit 1; } + ;; + + *) + if test -n "`echo $ac_option| sed 's/[-a-z0-9.]//g'`"; then + echo "configure: warning: $ac_option: invalid host type" 1>&2 + fi + if test "x$nonopt" != xNONE; then + { echo "configure: error: can only configure for one host and one target at a time" 1>&2; exit 1; } + fi + nonopt="$ac_option" + ;; + + esac +done + +if test -n "$ac_prev"; then + { echo "configure: error: missing argument to --`echo $ac_prev | sed 's/_/-/g'`" 1>&2; exit 1; } +fi + +trap 'rm -fr conftest* confdefs* core core.* *.core $ac_clean_files; exit 1' 1 2 15 + +# File descriptor usage: +# 0 standard input +# 1 file creation +# 2 errors and warnings +# 3 some systems may open it to /dev/tty +# 4 used on the Kubota Titan +# 6 checking for... 
messages and results +# 5 compiler messages saved in config.log +if test "$silent" = yes; then + exec 6>/dev/null +else + exec 6>&1 +fi +exec 5>./config.log + +echo "\ +This file contains any messages produced by compilers while +running configure, to aid debugging if configure makes a mistake. +" 1>&5 + +# Strip out --no-create and --no-recursion so they do not pile up. +# Also quote any args containing shell metacharacters. +ac_configure_args= +for ac_arg +do + case "$ac_arg" in + -no-create | --no-create | --no-creat | --no-crea | --no-cre \ + | --no-cr | --no-c) ;; + -no-recursion | --no-recursion | --no-recursio | --no-recursi \ + | --no-recurs | --no-recur | --no-recu | --no-rec | --no-re | --no-r) ;; + *" "*|*" "*|*[\[\]\~\#\$\^\&\*\(\)\{\}\\\|\;\<\>\?]*) + ac_configure_args="$ac_configure_args '$ac_arg'" ;; + *) ac_configure_args="$ac_configure_args $ac_arg" ;; + esac +done + +# NLS nuisances. +# Only set these to C if already set. These must not be set unconditionally +# because not all systems understand e.g. LANG=C (notably SCO). +# Fixing LC_MESSAGES prevents Solaris sh from translating var values in `set'! +# Non-C LC_CTYPE values break the ctype check. +if test "${LANG+set}" = set; then LANG=C; export LANG; fi +if test "${LC_ALL+set}" = set; then LC_ALL=C; export LC_ALL; fi +if test "${LC_MESSAGES+set}" = set; then LC_MESSAGES=C; export LC_MESSAGES; fi +if test "${LC_CTYPE+set}" = set; then LC_CTYPE=C; export LC_CTYPE; fi + +# confdefs.h avoids OS command line length limits that DEFS can exceed. +rm -rf conftest* confdefs.h +# AIX cpp loses on an empty file, so make sure it contains at least a newline. +echo > confdefs.h + +# A filename unique to this package, relative to the directory that +# configure is in, which we can look for to find out if srcdir is correct. +ac_unique_file=Makefile.global.in + +# Find the source files, if location was not specified. +if test -z "$srcdir"; then + ac_srcdir_defaulted=yes + # Try the directory containing this script, then its parent. + ac_prog=$0 + ac_confdir=`echo $ac_prog|sed 's%/[^/][^/]*$%%'` + test "x$ac_confdir" = "x$ac_prog" && ac_confdir=. + srcdir=$ac_confdir + if test ! -r $srcdir/$ac_unique_file; then + srcdir=.. + fi +else + ac_srcdir_defaulted=no +fi +if test ! -r $srcdir/$ac_unique_file; then + if test "$ac_srcdir_defaulted" = yes; then + { echo "configure: error: can not find sources in $ac_confdir or .." 1>&2; exit 1; } + else + { echo "configure: error: can not find sources in $srcdir" 1>&2; exit 1; } + fi +fi +srcdir=`echo "${srcdir}" | sed 's%\([^/]\)/*$%\1%'` + +# Prefer explicitly selected file to automatically selected ones. +if test -z "$CONFIG_SITE"; then + if test "x$prefix" != xNONE; then + CONFIG_SITE="$prefix/share/config.site $prefix/etc/config.site" + else + CONFIG_SITE="$ac_default_prefix/share/config.site $ac_default_prefix/etc/config.site" + fi +fi +for ac_site_file in $CONFIG_SITE; do + if test -r "$ac_site_file"; then + echo "loading site script $ac_site_file" + . "$ac_site_file" + fi +done + +if test -r "$cache_file"; then + echo "loading cache $cache_file" + . $cache_file +else + echo "creating cache $cache_file" + > $cache_file +fi + +ac_ext=c +# CFLAGS is not in ac_cpp because -g, -O, etc. are not valid cpp options. 
+ac_cpp='$CPP $CPPFLAGS' +ac_compile='${CC-cc} -c $CFLAGS $CPPFLAGS conftest.$ac_ext 1>&5' +ac_link='${CC-cc} -o conftest${ac_exeext} $CFLAGS $CPPFLAGS $LDFLAGS conftest.$ac_ext $LIBS 1>&5' +cross_compiling=$ac_cv_prog_cc_cross + +ac_exeext= +ac_objext=o +if (echo "testing\c"; echo 1,2,3) | grep c >/dev/null; then + # Stardent Vistra SVR4 grep lacks -e, says ghazi@caip.rutgers.edu. + if (echo -n testing; echo 1,2,3) | sed s/-n/xn/ | grep xn >/dev/null; then + ac_n= ac_c=' +' ac_t=' ' + else + ac_n=-n ac_c= ac_t= + fi +else + ac_n= ac_c='\c' ac_t= +fi + + +ac_aux_dir= +for ac_dir in support $srcdir/support; do + if test -f $ac_dir/install-sh; then + ac_aux_dir=$ac_dir + ac_install_sh="$ac_aux_dir/install-sh -c" + break + elif test -f $ac_dir/install.sh; then + ac_aux_dir=$ac_dir + ac_install_sh="$ac_aux_dir/install.sh -c" + break + fi +done +if test -z "$ac_aux_dir"; then + { echo "configure: error: can not find install-sh or install.sh in support $srcdir/support" 1>&2; exit 1; } +fi +ac_config_guess=$ac_aux_dir/config.guess +ac_config_sub=$ac_aux_dir/config.sub +ac_configure=$ac_aux_dir/configure # This should be Cygnus configure. + + + +test x"$prefix" = xNONE && prefix="$ac_default_prefix" + +builddir=`pwd` + + +if test x"$with_largefiles" != x ; then + { echo "configure: error: Use --enable-largefiles instead of --with-largefiles" 1>&2; exit 1; } +fi + + + +# Extract the first word of "gcc", so it can be a program name with args. +set dummy gcc; ac_word=$2 +echo $ac_n "checking for $ac_word""... $ac_c" 1>&6 +echo "configure:966: checking for $ac_word" >&5 +if eval "test \"`echo '$''{'ac_cv_prog_CC'+set}'`\" = set"; then + echo $ac_n "(cached) $ac_c" 1>&6 +else + if test -n "$CC"; then + ac_cv_prog_CC="$CC" # Let the user override the test. +else + IFS="${IFS= }"; ac_save_ifs="$IFS"; IFS=":" + ac_dummy="$PATH" + for ac_dir in $ac_dummy; do + test -z "$ac_dir" && ac_dir=. + if test -f $ac_dir/$ac_word; then + ac_cv_prog_CC="gcc" + break + fi + done + IFS="$ac_save_ifs" +fi +fi +CC="$ac_cv_prog_CC" +if test -n "$CC"; then + echo "$ac_t""$CC" 1>&6 +else + echo "$ac_t""no" 1>&6 +fi + +if test -z "$CC"; then + # Extract the first word of "cc", so it can be a program name with args. +set dummy cc; ac_word=$2 +echo $ac_n "checking for $ac_word""... $ac_c" 1>&6 +echo "configure:996: checking for $ac_word" >&5 +if eval "test \"`echo '$''{'ac_cv_prog_CC'+set}'`\" = set"; then + echo $ac_n "(cached) $ac_c" 1>&6 +else + if test -n "$CC"; then + ac_cv_prog_CC="$CC" # Let the user override the test. +else + IFS="${IFS= }"; ac_save_ifs="$IFS"; IFS=":" + ac_prog_rejected=no + ac_dummy="$PATH" + for ac_dir in $ac_dummy; do + test -z "$ac_dir" && ac_dir=. + if test -f $ac_dir/$ac_word; then + if test "$ac_dir/$ac_word" = "/usr/ucb/cc"; then + ac_prog_rejected=yes + continue + fi + ac_cv_prog_CC="cc" + break + fi + done + IFS="$ac_save_ifs" +if test $ac_prog_rejected = yes; then + # We found a bogon in the path, so make sure we never use it. + set dummy $ac_cv_prog_CC + shift + if test $# -gt 0; then + # We chose a different compiler from the bogus one. + # However, it has the same basename, so the bogon will be chosen + # first if we set CC to just the basename; use the full file name. 
+ shift + set dummy "$ac_dir/$ac_word" "$@" + shift + ac_cv_prog_CC="$@" + fi +fi +fi +fi +CC="$ac_cv_prog_CC" +if test -n "$CC"; then + echo "$ac_t""$CC" 1>&6 +else + echo "$ac_t""no" 1>&6 +fi + + if test -z "$CC"; then + case "`uname -s`" in + *win32* | *WIN32*) + # Extract the first word of "cl", so it can be a program name with args. +set dummy cl; ac_word=$2 +echo $ac_n "checking for $ac_word""... $ac_c" 1>&6 +echo "configure:1047: checking for $ac_word" >&5 +if eval "test \"`echo '$''{'ac_cv_prog_CC'+set}'`\" = set"; then + echo $ac_n "(cached) $ac_c" 1>&6 +else + if test -n "$CC"; then + ac_cv_prog_CC="$CC" # Let the user override the test. +else + IFS="${IFS= }"; ac_save_ifs="$IFS"; IFS=":" + ac_dummy="$PATH" + for ac_dir in $ac_dummy; do + test -z "$ac_dir" && ac_dir=. + if test -f $ac_dir/$ac_word; then + ac_cv_prog_CC="cl" + break + fi + done + IFS="$ac_save_ifs" +fi +fi +CC="$ac_cv_prog_CC" +if test -n "$CC"; then + echo "$ac_t""$CC" 1>&6 +else + echo "$ac_t""no" 1>&6 +fi + ;; + esac + fi + test -z "$CC" && { echo "configure: error: no acceptable cc found in \$PATH" 1>&2; exit 1; } +fi + +echo $ac_n "checking whether the C compiler ($CC $CFLAGS $LDFLAGS) works""... $ac_c" 1>&6 +echo "configure:1079: checking whether the C compiler ($CC $CFLAGS $LDFLAGS) works" >&5 + +ac_ext=c +# CFLAGS is not in ac_cpp because -g, -O, etc. are not valid cpp options. +ac_cpp='$CPP $CPPFLAGS' +ac_compile='${CC-cc} -c $CFLAGS $CPPFLAGS conftest.$ac_ext 1>&5' +ac_link='${CC-cc} -o conftest${ac_exeext} $CFLAGS $CPPFLAGS $LDFLAGS conftest.$ac_ext $LIBS 1>&5' +cross_compiling=$ac_cv_prog_cc_cross + +cat > conftest.$ac_ext << EOF + +#line 1090 "configure" +#include "confdefs.h" + +main(){return(0);} +EOF +if { (eval echo configure:1095: \"$ac_link\") 1>&5; (eval $ac_link) 2>&5; } && test -s conftest${ac_exeext}; then + ac_cv_prog_cc_works=yes + # If we can't run a trivial program, we are probably using a cross compiler. + if (./conftest; exit) 2>/dev/null; then + ac_cv_prog_cc_cross=no + else + ac_cv_prog_cc_cross=yes + fi +else + echo "configure: failed program was:" >&5 + cat conftest.$ac_ext >&5 + ac_cv_prog_cc_works=no +fi +rm -fr conftest* +ac_ext=c +# CFLAGS is not in ac_cpp because -g, -O, etc. are not valid cpp options. +ac_cpp='$CPP $CPPFLAGS' +ac_compile='${CC-cc} -c $CFLAGS $CPPFLAGS conftest.$ac_ext 1>&5' +ac_link='${CC-cc} -o conftest${ac_exeext} $CFLAGS $CPPFLAGS $LDFLAGS conftest.$ac_ext $LIBS 1>&5' +cross_compiling=$ac_cv_prog_cc_cross + +echo "$ac_t""$ac_cv_prog_cc_works" 1>&6 +if test $ac_cv_prog_cc_works = no; then + { echo "configure: error: installation or configuration problem: C compiler cannot create executables." 1>&2; exit 1; } +fi +echo $ac_n "checking whether the C compiler ($CC $CFLAGS $LDFLAGS) is a cross-compiler""... $ac_c" 1>&6 +echo "configure:1121: checking whether the C compiler ($CC $CFLAGS $LDFLAGS) is a cross-compiler" >&5 +echo "$ac_t""$ac_cv_prog_cc_cross" 1>&6 +cross_compiling=$ac_cv_prog_cc_cross + +echo $ac_n "checking whether we are using GNU C""... 
$ac_c" 1>&6 +echo "configure:1126: checking whether we are using GNU C" >&5 +if eval "test \"`echo '$''{'ac_cv_prog_gcc'+set}'`\" = set"; then + echo $ac_n "(cached) $ac_c" 1>&6 +else + cat > conftest.c <&5; (eval $ac_try) 2>&5; }; } | egrep yes >/dev/null 2>&1; then + ac_cv_prog_gcc=yes +else + ac_cv_prog_gcc=no +fi +fi + +echo "$ac_t""$ac_cv_prog_gcc" 1>&6 + +if test $ac_cv_prog_gcc = yes; then + GCC=yes +else + GCC= +fi + +ac_test_CFLAGS="${CFLAGS+set}" +ac_save_CFLAGS="$CFLAGS" +CFLAGS= +echo $ac_n "checking whether ${CC-cc} accepts -g""... $ac_c" 1>&6 +echo "configure:1154: checking whether ${CC-cc} accepts -g" >&5 +if eval "test \"`echo '$''{'ac_cv_prog_cc_g'+set}'`\" = set"; then + echo $ac_n "(cached) $ac_c" 1>&6 +else + echo 'void f(){}' > conftest.c +if test -z "`${CC-cc} -g -c conftest.c 2>&1`"; then + ac_cv_prog_cc_g=yes +else + ac_cv_prog_cc_g=no +fi +rm -f conftest* + +fi + +echo "$ac_t""$ac_cv_prog_cc_g" 1>&6 +if test "$ac_test_CFLAGS" = set; then + CFLAGS="$ac_save_CFLAGS" +elif test $ac_cv_prog_cc_g = yes; then + if test "$GCC" = yes; then + CFLAGS="-g -O2" + else + CFLAGS="-g" + fi +else + if test "$GCC" = yes; then + CFLAGS="-O2" + else + CFLAGS= + fi +fi + +echo $ac_n "checking how to run the C preprocessor""... $ac_c" 1>&6 +echo "configure:1186: checking how to run the C preprocessor" >&5 +# On Suns, sometimes $CPP names a directory. +if test -n "$CPP" && test -d "$CPP"; then + CPP= +fi +if test -z "$CPP"; then +if eval "test \"`echo '$''{'ac_cv_prog_CPP'+set}'`\" = set"; then + echo $ac_n "(cached) $ac_c" 1>&6 +else + # This must be in double quotes, not single quotes, because CPP may get + # substituted into the Makefile and "${CC-cc}" will confuse make. + CPP="${CC-cc} -E" + # On the NeXT, cc -E runs the code through the compiler's parser, + # not just through cpp. + cat > conftest.$ac_ext < +Syntax Error +EOF +ac_try="$ac_cpp conftest.$ac_ext >/dev/null 2>conftest.out" +{ (eval echo configure:1207: \"$ac_try\") 1>&5; (eval $ac_try) 2>&5; } +ac_err=`grep -v '^ *+' conftest.out | grep -v "^conftest.${ac_ext}\$"` +if test -z "$ac_err"; then + : +else + echo "$ac_err" >&5 + echo "configure: failed program was:" >&5 + cat conftest.$ac_ext >&5 + rm -rf conftest* + CPP="${CC-cc} -E -traditional-cpp" + cat > conftest.$ac_ext < +Syntax Error +EOF +ac_try="$ac_cpp conftest.$ac_ext >/dev/null 2>conftest.out" +{ (eval echo configure:1224: \"$ac_try\") 1>&5; (eval $ac_try) 2>&5; } +ac_err=`grep -v '^ *+' conftest.out | grep -v "^conftest.${ac_ext}\$"` +if test -z "$ac_err"; then + : +else + echo "$ac_err" >&5 + echo "configure: failed program was:" >&5 + cat conftest.$ac_ext >&5 + rm -rf conftest* + CPP="${CC-cc} -nologo -E" + cat > conftest.$ac_ext < +Syntax Error +EOF +ac_try="$ac_cpp conftest.$ac_ext >/dev/null 2>conftest.out" +{ (eval echo configure:1241: \"$ac_try\") 1>&5; (eval $ac_try) 2>&5; } +ac_err=`grep -v '^ *+' conftest.out | grep -v "^conftest.${ac_ext}\$"` +if test -z "$ac_err"; then + : +else + echo "$ac_err" >&5 + echo "configure: failed program was:" >&5 + cat conftest.$ac_ext >&5 + rm -rf conftest* + CPP=/lib/cpp +fi +rm -f conftest* +fi +rm -f conftest* +fi +rm -f conftest* + ac_cv_prog_CPP="$CPP" +fi + CPP="$ac_cv_prog_CPP" +else + ac_cv_prog_CPP="$CPP" +fi +echo "$ac_t""$CPP" 1>&6 + +echo $ac_n "checking for AIX""... 
$ac_c" 1>&6 +echo "configure:1266: checking for AIX" >&5 +cat > conftest.$ac_ext <&5 | + egrep "yes" >/dev/null 2>&1; then + rm -rf conftest* + echo "$ac_t""yes" 1>&6; cat >> confdefs.h <<\EOF +#define _ALL_SOURCE 1 +EOF + +else + rm -rf conftest* + echo "$ac_t""no" 1>&6 +fi +rm -f conftest* + + +echo $ac_n "checking for POSIXized ISC""... $ac_c" 1>&6 +echo "configure:1290: checking for POSIXized ISC" >&5 +if test -d /etc/conf/kconfig.d && + grep _POSIX_VERSION /usr/include/sys/unistd.h >/dev/null 2>&1 +then + echo "$ac_t""yes" 1>&6 + ISC=yes # If later tests want to check for ISC. + cat >> confdefs.h <<\EOF +#define _POSIX_SOURCE 1 +EOF + + if test "$GCC" = yes; then + CC="$CC -posix" + else + CC="$CC -Xp" + fi +else + echo "$ac_t""no" 1>&6 + ISC= +fi + +echo $ac_n "checking for object suffix""... $ac_c" 1>&6 +echo "configure:1311: checking for object suffix" >&5 +if eval "test \"`echo '$''{'ac_cv_objext'+set}'`\" = set"; then + echo $ac_n "(cached) $ac_c" 1>&6 +else + rm -f conftest* +echo 'int i = 1;' > conftest.$ac_ext +if { (eval echo configure:1317: \"$ac_compile\") 1>&5; (eval $ac_compile) 2>&5; }; then + for ac_file in conftest.*; do + case $ac_file in + *.c) ;; + *) ac_cv_objext=`echo $ac_file | sed -e s/conftest.//` ;; + esac + done +else + { echo "configure: error: installation or configuration problem; compiler does not work" 1>&2; exit 1; } +fi +rm -f conftest* +fi + +echo "$ac_t""$ac_cv_objext" 1>&6 +OBJEXT=$ac_cv_objext +ac_objext=$ac_cv_objext + + +echo $ac_n "checking if $CC supports -c -o file.$ac_objext""... $ac_c" 1>&6 +echo "configure:1336: checking if $CC supports -c -o file.$ac_objext" >&5 +if eval "test \"`echo '$''{'inn_cv_compiler_c_o'+set}'`\" = set"; then + echo $ac_n "(cached) $ac_c" 1>&6 +else + rm -f -r conftest 2>/dev/null +mkdir conftest +cd conftest +echo "int some_variable = 0;" > conftest.$ac_ext +mkdir out +# According to Tom Tromey, Ian Lance Taylor reported there are C compilers +# that will create temporary files in the current directory regardless of +# the output directory. Thus, making CWD read-only will cause this test +# to fail, enabling locking or at least warning the user not to do parallel +# builds. +chmod -w . +save_CFLAGS="$CFLAGS" +CFLAGS="$CFLAGS -o out/conftest2.$ac_objext" +compiler_c_o=no +if { (eval $ac_compile) 2> out/conftest.err; } \ + && test -s out/conftest2.$ac_objext; then + # The compiler can only warn and ignore the option if not recognized + # So say no if there are warnings + if test -s out/conftest.err; then + inn_cv_compiler_c_o=no + else + inn_cv_compiler_c_o=yes + fi +else + # Append any errors to the config.log. + cat out/conftest.err 1>&5 + inn_cv_compiler_c_o=no +fi +CFLAGS="$save_CFLAGS" +chmod u+w . +rm -f conftest* out/* +rmdir out +cd .. +rmdir conftest +rm -f -r conftest 2>/dev/null +fi + +compiler_c_o=$inn_cv_compiler_c_o +echo "$ac_t""$compiler_c_o" 1>&6 + +inn_use_libtool=no +# Check whether --enable-libtool or --disable-libtool was given. +if test "${enable_libtool+set}" = set; then + enableval="$enable_libtool" + if test "$enableval" = yes ; then + inn_use_libtool=yes + fi +fi + +if test x"$inn_use_libtool" = xyes ; then + # Find the correct PATH separator. Usually this is `:', but +# DJGPP uses `;' like DOS. +if test "X${PATH_SEPARATOR+set}" != Xset; then + UNAME=${UNAME-`uname 2>/dev/null`} + case X$UNAME in + *-DOS) lt_cv_sys_path_separator=';' ;; + *) lt_cv_sys_path_separator=':' ;; + esac + PATH_SEPARATOR=$lt_cv_sys_path_separator +fi + +echo $ac_n "checking for Cygwin environment""... 
$ac_c" 1>&6 +echo "configure:1402: checking for Cygwin environment" >&5 +if eval "test \"`echo '$''{'ac_cv_cygwin'+set}'`\" = set"; then + echo $ac_n "(cached) $ac_c" 1>&6 +else + cat > conftest.$ac_ext <&5; (eval $ac_compile) 2>&5; }; then + rm -rf conftest* + ac_cv_cygwin=yes +else + echo "configure: failed program was:" >&5 + cat conftest.$ac_ext >&5 + rm -rf conftest* + ac_cv_cygwin=no +fi +rm -f conftest* +rm -f conftest* +fi + +echo "$ac_t""$ac_cv_cygwin" 1>&6 +CYGWIN= +test "$ac_cv_cygwin" = yes && CYGWIN=yes +echo $ac_n "checking for mingw32 environment""... $ac_c" 1>&6 +echo "configure:1435: checking for mingw32 environment" >&5 +if eval "test \"`echo '$''{'ac_cv_mingw32'+set}'`\" = set"; then + echo $ac_n "(cached) $ac_c" 1>&6 +else + cat > conftest.$ac_ext <&5; (eval $ac_compile) 2>&5; }; then + rm -rf conftest* + ac_cv_mingw32=yes +else + echo "configure: failed program was:" >&5 + cat conftest.$ac_ext >&5 + rm -rf conftest* + ac_cv_mingw32=no +fi +rm -f conftest* +rm -f conftest* +fi + +echo "$ac_t""$ac_cv_mingw32" 1>&6 +MINGW32= +test "$ac_cv_mingw32" = yes && MINGW32=yes +# Check whether --enable-shared or --disable-shared was given. +if test "${enable_shared+set}" = set; then + enableval="$enable_shared" + p=${PACKAGE-default} +case $enableval in +yes) enable_shared=yes ;; +no) enable_shared=no ;; +*) + enable_shared=no + # Look at the argument we got. We use all the common list separators. + IFS="${IFS= }"; ac_save_ifs="$IFS"; IFS="${IFS}:," + for pkg in $enableval; do + if test "X$pkg" = "X$p"; then + enable_shared=yes + fi + done + IFS="$ac_save_ifs" + ;; +esac +else + enable_shared=yes +fi + +# Check whether --enable-static or --disable-static was given. +if test "${enable_static+set}" = set; then + enableval="$enable_static" + p=${PACKAGE-default} +case $enableval in +yes) enable_static=yes ;; +no) enable_static=no ;; +*) + enable_static=no + # Look at the argument we got. We use all the common list separators. + IFS="${IFS= }"; ac_save_ifs="$IFS"; IFS="${IFS}:," + for pkg in $enableval; do + if test "X$pkg" = "X$p"; then + enable_static=yes + fi + done + IFS="$ac_save_ifs" + ;; +esac +else + enable_static=yes +fi + +# Check whether --enable-fast-install or --disable-fast-install was given. +if test "${enable_fast_install+set}" = set; then + enableval="$enable_fast_install" + p=${PACKAGE-default} +case $enableval in +yes) enable_fast_install=yes ;; +no) enable_fast_install=no ;; +*) + enable_fast_install=no + # Look at the argument we got. We use all the common list separators. + IFS="${IFS= }"; ac_save_ifs="$IFS"; IFS="${IFS}:," + for pkg in $enableval; do + if test "X$pkg" = "X$p"; then + enable_fast_install=yes + fi + done + IFS="$ac_save_ifs" + ;; +esac +else + enable_fast_install=yes +fi + + +# Make sure we can run config.sub. +if ${CONFIG_SHELL-/bin/sh} $ac_config_sub sun4 >/dev/null 2>&1; then : +else { echo "configure: error: can not run $ac_config_sub" 1>&2; exit 1; } +fi + +echo $ac_n "checking host system type""... 
$ac_c" 1>&6 +echo "configure:1539: checking host system type" >&5 + +host_alias=$host +case "$host_alias" in +NONE) + case $nonopt in + NONE) + if host_alias=`${CONFIG_SHELL-/bin/sh} $ac_config_guess`; then : + else { echo "configure: error: can not guess host type; you must specify one" 1>&2; exit 1; } + fi ;; + *) host_alias=$nonopt ;; + esac ;; +esac + +host=`${CONFIG_SHELL-/bin/sh} $ac_config_sub $host_alias` +host_cpu=`echo $host | sed 's/^\([^-]*\)-\([^-]*\)-\(.*\)$/\1/'` +host_vendor=`echo $host | sed 's/^\([^-]*\)-\([^-]*\)-\(.*\)$/\2/'` +host_os=`echo $host | sed 's/^\([^-]*\)-\([^-]*\)-\(.*\)$/\3/'` +echo "$ac_t""$host" 1>&6 + +echo $ac_n "checking build system type""... $ac_c" 1>&6 +echo "configure:1560: checking build system type" >&5 + +build_alias=$build +case "$build_alias" in +NONE) + case $nonopt in + NONE) build_alias=$host_alias ;; + *) build_alias=$nonopt ;; + esac ;; +esac + +build=`${CONFIG_SHELL-/bin/sh} $ac_config_sub $build_alias` +build_cpu=`echo $build | sed 's/^\([^-]*\)-\([^-]*\)-\(.*\)$/\1/'` +build_vendor=`echo $build | sed 's/^\([^-]*\)-\([^-]*\)-\(.*\)$/\2/'` +build_os=`echo $build | sed 's/^\([^-]*\)-\([^-]*\)-\(.*\)$/\3/'` +echo "$ac_t""$build" 1>&6 + +# Check whether --with-gnu-ld or --without-gnu-ld was given. +if test "${with_gnu_ld+set}" = set; then + withval="$with_gnu_ld" + test "$withval" = no || with_gnu_ld=yes +else + with_gnu_ld=no +fi + +ac_prog=ld +if test "$GCC" = yes; then + # Check if gcc -print-prog-name=ld gives a path. + echo $ac_n "checking for ld used by GCC""... $ac_c" 1>&6 +echo "configure:1589: checking for ld used by GCC" >&5 + case $host in + *-*-mingw*) + # gcc leaves a trailing carriage return which upsets mingw + ac_prog=`($CC -print-prog-name=ld) 2>&5 | tr -d '\015'` ;; + *) + ac_prog=`($CC -print-prog-name=ld) 2>&5` ;; + esac + case $ac_prog in + # Accept absolute paths. + [\\/]* | [A-Za-z]:[\\/]*) + re_direlt='/[^/][^/]*/\.\./' + # Canonicalize the path of ld + ac_prog=`echo $ac_prog| sed 's%\\\\%/%g'` + while echo $ac_prog | grep "$re_direlt" > /dev/null 2>&1; do + ac_prog=`echo $ac_prog| sed "s%$re_direlt%/%"` + done + test -z "$LD" && LD="$ac_prog" + ;; + "") + # If it fails, then pretend we aren't using GCC. + ac_prog=ld + ;; + *) + # If it is relative, then search for the first ld in PATH. + with_gnu_ld=unknown + ;; + esac +elif test "$with_gnu_ld" = yes; then + echo $ac_n "checking for GNU ld""... $ac_c" 1>&6 +echo "configure:1619: checking for GNU ld" >&5 +else + echo $ac_n "checking for non-GNU ld""... $ac_c" 1>&6 +echo "configure:1622: checking for non-GNU ld" >&5 +fi +if eval "test \"`echo '$''{'lt_cv_path_LD'+set}'`\" = set"; then + echo $ac_n "(cached) $ac_c" 1>&6 +else + if test -z "$LD"; then + IFS="${IFS= }"; ac_save_ifs="$IFS"; IFS=$PATH_SEPARATOR + for ac_dir in $PATH; do + test -z "$ac_dir" && ac_dir=. + if test -f "$ac_dir/$ac_prog" || test -f "$ac_dir/$ac_prog$ac_exeext"; then + lt_cv_path_LD="$ac_dir/$ac_prog" + # Check to see if the program is GNU ld. I'd rather use --version, + # but apparently some GNU ld's only accept -v. + # Break only if it was the GNU/non-GNU ld that we prefer. + if "$lt_cv_path_LD" -v 2>&1 < /dev/null | egrep '(GNU|with BFD)' > /dev/null; then + test "$with_gnu_ld" != no && break + else + test "$with_gnu_ld" != yes && break + fi + fi + done + IFS="$ac_save_ifs" +else + lt_cv_path_LD="$LD" # Let the user override the test with a path. 
+fi +fi + +LD="$lt_cv_path_LD" +if test -n "$LD"; then + echo "$ac_t""$LD" 1>&6 +else + echo "$ac_t""no" 1>&6 +fi +test -z "$LD" && { echo "configure: error: no acceptable ld found in \$PATH" 1>&2; exit 1; } +echo $ac_n "checking if the linker ($LD) is GNU ld""... $ac_c" 1>&6 +echo "configure:1657: checking if the linker ($LD) is GNU ld" >&5 +if eval "test \"`echo '$''{'lt_cv_prog_gnu_ld'+set}'`\" = set"; then + echo $ac_n "(cached) $ac_c" 1>&6 +else + # I'd rather use --version here, but apparently some GNU ld's only accept -v. +if $LD -v 2>&1 &5; then + lt_cv_prog_gnu_ld=yes +else + lt_cv_prog_gnu_ld=no +fi +fi + +echo "$ac_t""$lt_cv_prog_gnu_ld" 1>&6 +with_gnu_ld=$lt_cv_prog_gnu_ld + + +echo $ac_n "checking for $LD option to reload object files""... $ac_c" 1>&6 +echo "configure:1674: checking for $LD option to reload object files" >&5 +if eval "test \"`echo '$''{'lt_cv_ld_reload_flag'+set}'`\" = set"; then + echo $ac_n "(cached) $ac_c" 1>&6 +else + lt_cv_ld_reload_flag='-r' +fi + +echo "$ac_t""$lt_cv_ld_reload_flag" 1>&6 +reload_flag=$lt_cv_ld_reload_flag +test -n "$reload_flag" && reload_flag=" $reload_flag" + +echo $ac_n "checking for BSD-compatible nm""... $ac_c" 1>&6 +echo "configure:1686: checking for BSD-compatible nm" >&5 +if eval "test \"`echo '$''{'lt_cv_path_NM'+set}'`\" = set"; then + echo $ac_n "(cached) $ac_c" 1>&6 +else + if test -n "$NM"; then + # Let the user override the test. + lt_cv_path_NM="$NM" +else + IFS="${IFS= }"; ac_save_ifs="$IFS"; IFS=$PATH_SEPARATOR + for ac_dir in $PATH /usr/ccs/bin /usr/ucb /bin; do + test -z "$ac_dir" && ac_dir=. + tmp_nm=$ac_dir/${ac_tool_prefix}nm + if test -f $tmp_nm || test -f $tmp_nm$ac_exeext ; then + # Check to see if the nm accepts a BSD-compat flag. + # Adding the `sed 1q' prevents false positives on HP-UX, which says: + # nm: unknown option "B" ignored + # Tru64's nm complains that /dev/null is an invalid object file + if ($tmp_nm -B /dev/null 2>&1 | sed '1q'; exit 0) | egrep '(/dev/null|Invalid file or object type)' >/dev/null; then + lt_cv_path_NM="$tmp_nm -B" + break + elif ($tmp_nm -p /dev/null 2>&1 | sed '1q'; exit 0) | egrep /dev/null >/dev/null; then + lt_cv_path_NM="$tmp_nm -p" + break + else + lt_cv_path_NM=${lt_cv_path_NM="$tmp_nm"} # keep the first match, but + continue # so that we can try to find one that supports BSD flags + fi + fi + done + IFS="$ac_save_ifs" + test -z "$lt_cv_path_NM" && lt_cv_path_NM=nm +fi +fi + +NM="$lt_cv_path_NM" +echo "$ac_t""$NM" 1>&6 + +echo $ac_n "checking whether ln -s works""... $ac_c" 1>&6 +echo "configure:1724: checking whether ln -s works" >&5 +if eval "test \"`echo '$''{'ac_cv_prog_LN_S'+set}'`\" = set"; then + echo $ac_n "(cached) $ac_c" 1>&6 +else + rm -f conftestdata +if ln -s X conftestdata 2>/dev/null +then + rm -f conftestdata + ac_cv_prog_LN_S="ln -s" +else + ac_cv_prog_LN_S=ln +fi +fi +LN_S="$ac_cv_prog_LN_S" +if test "$ac_cv_prog_LN_S" = "ln -s"; then + echo "$ac_t""yes" 1>&6 +else + echo "$ac_t""no" 1>&6 +fi + +echo $ac_n "checking how to recognise dependant libraries""... $ac_c" 1>&6 +echo "configure:1745: checking how to recognise dependant libraries" >&5 +if eval "test \"`echo '$''{'lt_cv_deplibs_check_method'+set}'`\" = set"; then + echo $ac_n "(cached) $ac_c" 1>&6 +else + lt_cv_file_magic_cmd='$MAGIC_CMD' +lt_cv_file_magic_test_file= +lt_cv_deplibs_check_method='unknown' +# Need to set the preceding variable on all platforms that support +# interlibrary dependencies. +# 'none' -- dependencies not supported. 
+# `unknown' -- same as none, but documents that we really don't know. +# 'pass_all' -- all dependencies passed with no checks. +# 'test_compile' -- check by making test program. +# 'file_magic [[regex]]' -- check by looking for files in library path +# which responds to the $file_magic_cmd with a given egrep regex. +# If you have `file' or equivalent on your system and you're not sure +# whether `pass_all' will *always* work, you probably want this one. + +case $host_os in +aix4* | aix5*) + lt_cv_deplibs_check_method=pass_all + ;; + +beos*) + lt_cv_deplibs_check_method=pass_all + ;; + +bsdi4*) + lt_cv_deplibs_check_method='file_magic ELF [0-9][0-9]*-bit [ML]SB (shared object|dynamic lib)' + lt_cv_file_magic_cmd='/usr/bin/file -L' + lt_cv_file_magic_test_file=/shlib/libc.so + ;; + +cygwin* | mingw* | pw32*) + lt_cv_deplibs_check_method='file_magic file format pei*-i386(.*architecture: i386)?' + lt_cv_file_magic_cmd='$OBJDUMP -f' + ;; + +darwin* | rhapsody*) + lt_cv_deplibs_check_method='file_magic Mach-O dynamically linked shared library' + lt_cv_file_magic_cmd='/usr/bin/file -L' + case "$host_os" in + rhapsody* | darwin1.[012]) + lt_cv_file_magic_test_file=`echo /System/Library/Frameworks/System.framework/Versions/*/System | head -1` + ;; + *) # Darwin 1.3 on + lt_cv_file_magic_test_file='/usr/lib/libSystem.dylib' + ;; + esac + ;; + +freebsd*) + if echo __ELF__ | $CC -E - | grep __ELF__ > /dev/null; then + case $host_cpu in + i*86 ) + # Not sure whether the presence of OpenBSD here was a mistake. + # Let's accept both of them until this is cleared up. + lt_cv_deplibs_check_method='file_magic (FreeBSD|OpenBSD)/i[3-9]86 (compact )?demand paged shared library' + lt_cv_file_magic_cmd=/usr/bin/file + lt_cv_file_magic_test_file=`echo /usr/lib/libc.so.*` + ;; + esac + else + lt_cv_deplibs_check_method=pass_all + fi + ;; + +gnu*) + lt_cv_deplibs_check_method=pass_all + ;; + +hpux10.20*|hpux11*) + lt_cv_deplibs_check_method='file_magic (s[0-9][0-9][0-9]|PA-RISC[0-9].[0-9]) shared library' + lt_cv_file_magic_cmd=/usr/bin/file + lt_cv_file_magic_test_file=/usr/lib/libc.sl + ;; + +irix5* | irix6*) + case $host_os in + irix5*) + # this will be overridden with pass_all, but let us keep it just in case + lt_cv_deplibs_check_method="file_magic ELF 32-bit MSB dynamic lib MIPS - version 1" + ;; + *) + case $LD in + *-32|*"-32 ") libmagic=32-bit;; + *-n32|*"-n32 ") libmagic=N32;; + *-64|*"-64 ") libmagic=64-bit;; + *) libmagic=never-match;; + esac + # this will be overridden with pass_all, but let us keep it just in case + lt_cv_deplibs_check_method="file_magic ELF ${libmagic} MSB mips-[1234] dynamic lib MIPS - version 1" + ;; + esac + lt_cv_file_magic_test_file=`echo /lib${libsuff}/libc.so*` + lt_cv_deplibs_check_method=pass_all + ;; + +# This must be Linux ELF. 
+linux-gnu*) + case $host_cpu in + alpha* | hppa* | i*86 | powerpc* | sparc* | ia64* ) + lt_cv_deplibs_check_method=pass_all ;; + *) + # glibc up to 2.1.1 does not perform some relocations on ARM + lt_cv_deplibs_check_method='file_magic ELF [0-9][0-9]*-bit [LM]SB (shared object|dynamic lib )' ;; + esac + lt_cv_file_magic_test_file=`echo /lib/libc.so* /lib/libc-*.so` + ;; + +netbsd*) + if echo __ELF__ | $CC -E - | grep __ELF__ > /dev/null; then + lt_cv_deplibs_check_method='match_pattern /lib[^/\.]+\.so\.[0-9]+\.[0-9]+$' + else + lt_cv_deplibs_check_method='match_pattern /lib[^/\.]+\.so$' + fi + ;; + +newos6*) + lt_cv_deplibs_check_method='file_magic ELF [0-9][0-9]*-bit [ML]SB (executable|dynamic lib)' + lt_cv_file_magic_cmd=/usr/bin/file + lt_cv_file_magic_test_file=/usr/lib/libnls.so + ;; + +openbsd*) + lt_cv_file_magic_cmd=/usr/bin/file + lt_cv_file_magic_test_file=`echo /usr/lib/libc.so.*` + if test -z "`echo __ELF__ | $CC -E - | grep __ELF__`" || test "$host_os-$host_cpu" = "openbsd2.8-powerpc"; then + lt_cv_deplibs_check_method='file_magic ELF [0-9][0-9]*-bit [LM]SB shared object' + else + lt_cv_deplibs_check_method='file_magic OpenBSD.* shared library' + fi + ;; + +osf3* | osf4* | osf5*) + # this will be overridden with pass_all, but let us keep it just in case + lt_cv_deplibs_check_method='file_magic COFF format alpha shared library' + lt_cv_file_magic_test_file=/shlib/libc.so + lt_cv_deplibs_check_method=pass_all + ;; + +sco3.2v5*) + lt_cv_deplibs_check_method=pass_all + ;; + +solaris*) + lt_cv_deplibs_check_method=pass_all + lt_cv_file_magic_test_file=/lib/libc.so + ;; + +sysv5uw[78]* | sysv4*uw2*) + lt_cv_deplibs_check_method=pass_all + ;; + +sysv4 | sysv4.2uw2* | sysv4.3* | sysv5*) + case $host_vendor in + motorola) + lt_cv_deplibs_check_method='file_magic ELF [0-9][0-9]*-bit [ML]SB (shared object|dynamic lib) M[0-9][0-9]* Version [0-9]' + lt_cv_file_magic_test_file=`echo /usr/lib/libc.so*` + ;; + ncr) + lt_cv_deplibs_check_method=pass_all + ;; + sequent) + lt_cv_file_magic_cmd='/bin/file' + lt_cv_deplibs_check_method='file_magic ELF [0-9][0-9]*-bit [LM]SB (shared object|dynamic lib )' + ;; + sni) + lt_cv_file_magic_cmd='/bin/file' + lt_cv_deplibs_check_method="file_magic ELF [0-9][0-9]*-bit [LM]SB dynamic lib" + lt_cv_file_magic_test_file=/lib/libc.so + ;; + esac + ;; +esac + +fi + +echo "$ac_t""$lt_cv_deplibs_check_method" 1>&6 +file_magic_cmd=$lt_cv_file_magic_cmd +deplibs_check_method=$lt_cv_deplibs_check_method + + + +echo $ac_n "checking for executable suffix""... $ac_c" 1>&6 +echo "configure:1930: checking for executable suffix" >&5 +if eval "test \"`echo '$''{'ac_cv_exeext'+set}'`\" = set"; then + echo $ac_n "(cached) $ac_c" 1>&6 +else + if test "$CYGWIN" = yes || test "$MINGW32" = yes; then + ac_cv_exeext=.exe +else + rm -f conftest* + echo 'int main () { return 0; }' > conftest.$ac_ext + ac_cv_exeext= + if { (eval echo configure:1940: \"$ac_link\") 1>&5; (eval $ac_link) 2>&5; }; then + for file in conftest.*; do + case $file in + *.$ac_ext | *.c | *.o | *.obj) ;; + *) ac_cv_exeext=`echo $file | sed -e s/conftest//` ;; + esac + done + else + { echo "configure: error: installation or configuration problem: compiler cannot create executables." 
1>&2; exit 1; } + fi + rm -f conftest* + test x"${ac_cv_exeext}" = x && ac_cv_exeext=no +fi +fi + +EXEEXT="" +test x"${ac_cv_exeext}" != xno && EXEEXT=${ac_cv_exeext} +echo "$ac_t""${ac_cv_exeext}" 1>&6 +ac_exeext=$EXEEXT + +if test $host != $build; then + ac_tool_prefix=${host_alias}- +else + ac_tool_prefix= +fi + + + + +# Check for command to grab the raw symbol name followed by C symbol from nm. +echo $ac_n "checking command to parse $NM output""... $ac_c" 1>&6 +echo "configure:1971: checking command to parse $NM output" >&5 +if eval "test \"`echo '$''{'lt_cv_sys_global_symbol_pipe'+set}'`\" = set"; then + echo $ac_n "(cached) $ac_c" 1>&6 +else + +# These are sane defaults that work on at least a few old systems. +# [They come from Ultrix. What could be older than Ultrix?!! ;)] + +# Character class describing NM global symbol codes. +symcode='[BCDEGRST]' + +# Regexp to match symbols that can be accessed directly from C. +sympat='\([_A-Za-z][_A-Za-z0-9]*\)' + +# Transform the above into a raw symbol and a C symbol. +symxfrm='\1 \2\3 \3' + +# Transform an extracted symbol line into a proper C declaration +lt_cv_global_symbol_to_cdecl="sed -n -e 's/^. .* \(.*\)$/extern char \1;/p'" + +# Transform an extracted symbol line into symbol name and symbol address +lt_cv_global_symbol_to_c_name_address="sed -n -e 's/^: \([^ ]*\) $/ {\\\"\1\\\", (lt_ptr) 0},/p' -e 's/^$symcode \([^ ]*\) \([^ ]*\)$/ {\"\2\", (lt_ptr) \&\2},/p'" + +# Define system-specific variables. +case $host_os in +aix*) + symcode='[BCDT]' + ;; +cygwin* | mingw* | pw32*) + symcode='[ABCDGISTW]' + ;; +hpux*) # Its linker distinguishes data from code symbols + lt_cv_global_symbol_to_cdecl="sed -n -e 's/^T .* \(.*\)$/extern char \1();/p' -e 's/^$symcode* .* \(.*\)$/extern char \1;/p'" + lt_cv_global_symbol_to_c_name_address="sed -n -e 's/^: \([^ ]*\) $/ {\\\"\1\\\", (lt_ptr) 0},/p' -e 's/^$symcode* \([^ ]*\) \([^ ]*\)$/ {\"\2\", (lt_ptr) \&\2},/p'" + ;; +irix*) + symcode='[BCDEGRST]' + ;; +solaris* | sysv5*) + symcode='[BDT]' + ;; +sysv4) + symcode='[DFNSTU]' + ;; +esac + +# Handle CRLF in mingw tool chain +opt_cr= +case $host_os in +mingw*) + opt_cr=`echo 'x\{0,1\}' | tr x '\015'` # option cr in regexp + ;; +esac + +# If we're using GNU nm, then use its standard symbol codes. +if $NM -V 2>&1 | egrep '(GNU|with BFD)' > /dev/null; then + symcode='[ABCDGISTW]' +fi + +# Try without a prefix undercore, then with it. +for ac_symprfx in "" "_"; do + + # Write the raw and C identifiers. +lt_cv_sys_global_symbol_pipe="sed -n -e 's/^.*[ ]\($symcode$symcode*\)[ ][ ]*\($ac_symprfx\)$sympat$opt_cr$/$symxfrm/p'" + + # Check to see that the pipe works correctly. + pipe_works=no + rm -f conftest* + cat > conftest.$ac_ext <&5; (eval $ac_compile) 2>&5; }; then + # Now try to grab the symbols. + nlist=conftest.nm + if { (eval echo configure:2054: \"$NM conftest.$ac_objext \| $lt_cv_sys_global_symbol_pipe \> $nlist\") 1>&5; (eval $NM conftest.$ac_objext \| $lt_cv_sys_global_symbol_pipe \> $nlist) 2>&5; } && test -s "$nlist"; then + # Try sorting and uniquifying the output. + if sort "$nlist" | uniq > "$nlist"T; then + mv -f "$nlist"T "$nlist" + else + rm -f "$nlist"T + fi + + # Make sure that we snagged all the symbols we need. + if egrep ' nm_test_var$' "$nlist" >/dev/null; then + if egrep ' nm_test_func$' "$nlist" >/dev/null; then + cat < conftest.$ac_ext +#ifdef __cplusplus +extern "C" { +#endif + +EOF + # Now generate the symbol file. 
+ eval "$lt_cv_global_symbol_to_cdecl"' < "$nlist" >> conftest.$ac_ext' + + cat <> conftest.$ac_ext +#if defined (__STDC__) && __STDC__ +# define lt_ptr void * +#else +# define lt_ptr char * +# define const +#endif + +/* The mapping between symbol names and symbols. */ +const struct { + const char *name; + lt_ptr address; +} +lt_preloaded_symbols[] = +{ +EOF + sed "s/^$symcode$symcode* \(.*\) \(.*\)$/ {\"\2\", (lt_ptr) \&\2},/" < "$nlist" >> conftest.$ac_ext + cat <<\EOF >> conftest.$ac_ext + {0, (lt_ptr) 0} +}; + +#ifdef __cplusplus +} +#endif +EOF + # Now try linking the two files. + mv conftest.$ac_objext conftstm.$ac_objext + save_LIBS="$LIBS" + save_CFLAGS="$CFLAGS" + LIBS="conftstm.$ac_objext" + CFLAGS="$CFLAGS$no_builtin_flag" + if { (eval echo configure:2105: \"$ac_link\") 1>&5; (eval $ac_link) 2>&5; } && test -s conftest; then + pipe_works=yes + fi + LIBS="$save_LIBS" + CFLAGS="$save_CFLAGS" + else + echo "cannot find nm_test_func in $nlist" >&5 + fi + else + echo "cannot find nm_test_var in $nlist" >&5 + fi + else + echo "cannot run $lt_cv_sys_global_symbol_pipe" >&5 + fi + else + echo "$progname: failed program was:" >&5 + cat conftest.$ac_ext >&5 + fi + rm -f conftest* conftst* + + # Do not use the global_symbol_pipe unless it works. + if test "$pipe_works" = yes; then + break + else + lt_cv_sys_global_symbol_pipe= + fi +done + +fi + +global_symbol_pipe="$lt_cv_sys_global_symbol_pipe" +if test -z "$lt_cv_sys_global_symbol_pipe"; then + global_symbol_to_cdecl= + global_symbol_to_c_name_address= +else + global_symbol_to_cdecl="$lt_cv_global_symbol_to_cdecl" + global_symbol_to_c_name_address="$lt_cv_global_symbol_to_c_name_address" +fi +if test -z "$global_symbol_pipe$global_symbol_to_cdec$global_symbol_to_c_name_address"; +then + echo "$ac_t""failed" 1>&6 +else + echo "$ac_t""ok" 1>&6 +fi + +for ac_hdr in dlfcn.h +do +ac_safe=`echo "$ac_hdr" | sed 'y%./+-%__p_%'` +echo $ac_n "checking for $ac_hdr""... $ac_c" 1>&6 +echo "configure:2154: checking for $ac_hdr" >&5 +if eval "test \"`echo '$''{'ac_cv_header_$ac_safe'+set}'`\" = set"; then + echo $ac_n "(cached) $ac_c" 1>&6 +else + cat > conftest.$ac_ext < +EOF +ac_try="$ac_cpp conftest.$ac_ext >/dev/null 2>conftest.out" +{ (eval echo configure:2164: \"$ac_try\") 1>&5; (eval $ac_try) 2>&5; } +ac_err=`grep -v '^ *+' conftest.out | grep -v "^conftest.${ac_ext}\$"` +if test -z "$ac_err"; then + rm -rf conftest* + eval "ac_cv_header_$ac_safe=yes" +else + echo "$ac_err" >&5 + echo "configure: failed program was:" >&5 + cat conftest.$ac_ext >&5 + rm -rf conftest* + eval "ac_cv_header_$ac_safe=no" +fi +rm -f conftest* +fi +if eval "test \"`echo '$ac_cv_header_'$ac_safe`\" = yes"; then + echo "$ac_t""yes" 1>&6 + ac_tr_hdr=HAVE_`echo $ac_hdr | sed 'y%abcdefghijklmnopqrstuvwxyz./-%ABCDEFGHIJKLMNOPQRSTUVWXYZ___%'` + cat >> confdefs.h <&6 +fi +done + + + + + +# Only perform the check for file, if the check method requires it +case $deplibs_check_method in +file_magic*) + if test "$file_magic_cmd" = '$MAGIC_CMD'; then + echo $ac_n "checking for ${ac_tool_prefix}file""... $ac_c" 1>&6 +echo "configure:2199: checking for ${ac_tool_prefix}file" >&5 +if eval "test \"`echo '$''{'lt_cv_path_MAGIC_CMD'+set}'`\" = set"; then + echo $ac_n "(cached) $ac_c" 1>&6 +else + case $MAGIC_CMD in + /*) + lt_cv_path_MAGIC_CMD="$MAGIC_CMD" # Let the user override the test with a path. + ;; + ?:/*) + lt_cv_path_MAGIC_CMD="$MAGIC_CMD" # Let the user override the test with a dos path. 
+ ;; + *) + ac_save_MAGIC_CMD="$MAGIC_CMD" + IFS="${IFS= }"; ac_save_ifs="$IFS"; IFS=":" + ac_dummy="/usr/bin:$PATH" + for ac_dir in $ac_dummy; do + test -z "$ac_dir" && ac_dir=. + if test -f $ac_dir/${ac_tool_prefix}file; then + lt_cv_path_MAGIC_CMD="$ac_dir/${ac_tool_prefix}file" + if test -n "$file_magic_test_file"; then + case $deplibs_check_method in + "file_magic "*) + file_magic_regex="`expr \"$deplibs_check_method\" : \"file_magic \(.*\)\"`" + MAGIC_CMD="$lt_cv_path_MAGIC_CMD" + if eval $file_magic_cmd \$file_magic_test_file 2> /dev/null | + egrep "$file_magic_regex" > /dev/null; then + : + else + cat <<EOF 1>&2 + +*** Warning: the command libtool uses to detect shared libraries, +*** $file_magic_cmd, produces output that libtool cannot recognize. +*** The result is that libtool may fail to recognize shared libraries +*** as such. This will affect the creation of libtool libraries that +*** depend on shared libraries, but programs linked with such libtool +*** libraries will work regardless of this problem. Nevertheless, you +*** may want to report the problem to your system manager and/or to +*** bug-libtool@gnu.org + +EOF + fi ;; + esac + fi + break + fi + done + IFS="$ac_save_ifs" + MAGIC_CMD="$ac_save_MAGIC_CMD" + ;; +esac +fi + +MAGIC_CMD="$lt_cv_path_MAGIC_CMD" +if test -n "$MAGIC_CMD"; then + echo "$ac_t""$MAGIC_CMD" 1>&6 +else + echo "$ac_t""no" 1>&6 +fi + +if test -z "$lt_cv_path_MAGIC_CMD"; then + if test -n "$ac_tool_prefix"; then + echo $ac_n "checking for file""... $ac_c" 1>&6 +echo "configure:2261: checking for file" >&5 +if eval "test \"`echo '$''{'lt_cv_path_MAGIC_CMD'+set}'`\" = set"; then + echo $ac_n "(cached) $ac_c" 1>&6 +else + case $MAGIC_CMD in + /*) + lt_cv_path_MAGIC_CMD="$MAGIC_CMD" # Let the user override the test with a path. + ;; + ?:/*) + lt_cv_path_MAGIC_CMD="$MAGIC_CMD" # Let the user override the test with a dos path. + ;; + *) + ac_save_MAGIC_CMD="$MAGIC_CMD" + IFS="${IFS= }"; ac_save_ifs="$IFS"; IFS=":" + ac_dummy="/usr/bin:$PATH" + for ac_dir in $ac_dummy; do + test -z "$ac_dir" && ac_dir=. + if test -f $ac_dir/file; then + lt_cv_path_MAGIC_CMD="$ac_dir/file" + if test -n "$file_magic_test_file"; then + case $deplibs_check_method in + "file_magic "*) + file_magic_regex="`expr \"$deplibs_check_method\" : \"file_magic \(.*\)\"`" + MAGIC_CMD="$lt_cv_path_MAGIC_CMD" + if eval $file_magic_cmd \$file_magic_test_file 2> /dev/null | + egrep "$file_magic_regex" > /dev/null; then + : + else + cat <<EOF 1>&2 + +*** Warning: the command libtool uses to detect shared libraries, +*** $file_magic_cmd, produces output that libtool cannot recognize. +*** The result is that libtool may fail to recognize shared libraries +*** as such. This will affect the creation of libtool libraries that +*** depend on shared libraries, but programs linked with such libtool +*** libraries will work regardless of this problem. Nevertheless, you +*** may want to report the problem to your system manager and/or to +*** bug-libtool@gnu.org + +EOF + fi ;; + esac + fi + break + fi + done + IFS="$ac_save_ifs" + MAGIC_CMD="$ac_save_MAGIC_CMD" + ;; +esac +fi + +MAGIC_CMD="$lt_cv_path_MAGIC_CMD" +if test -n "$MAGIC_CMD"; then + echo "$ac_t""$MAGIC_CMD" 1>&6 +else + echo "$ac_t""no" 1>&6 +fi + + else + MAGIC_CMD=: + fi +fi + + fi + ;; +esac + +# Extract the first word of "${ac_tool_prefix}ranlib", so it can be a program name with args. +set dummy ${ac_tool_prefix}ranlib; ac_word=$2 +echo $ac_n "checking for $ac_word""...
$ac_c" 1>&6 +echo "configure:2332: checking for $ac_word" >&5 +if eval "test \"`echo '$''{'ac_cv_prog_RANLIB'+set}'`\" = set"; then + echo $ac_n "(cached) $ac_c" 1>&6 +else + if test -n "$RANLIB"; then + ac_cv_prog_RANLIB="$RANLIB" # Let the user override the test. +else + IFS="${IFS= }"; ac_save_ifs="$IFS"; IFS=":" + ac_dummy="$PATH" + for ac_dir in $ac_dummy; do + test -z "$ac_dir" && ac_dir=. + if test -f $ac_dir/$ac_word; then + ac_cv_prog_RANLIB="${ac_tool_prefix}ranlib" + break + fi + done + IFS="$ac_save_ifs" +fi +fi +RANLIB="$ac_cv_prog_RANLIB" +if test -n "$RANLIB"; then + echo "$ac_t""$RANLIB" 1>&6 +else + echo "$ac_t""no" 1>&6 +fi + + +if test -z "$ac_cv_prog_RANLIB"; then +if test -n "$ac_tool_prefix"; then + # Extract the first word of "ranlib", so it can be a program name with args. +set dummy ranlib; ac_word=$2 +echo $ac_n "checking for $ac_word""... $ac_c" 1>&6 +echo "configure:2364: checking for $ac_word" >&5 +if eval "test \"`echo '$''{'ac_cv_prog_RANLIB'+set}'`\" = set"; then + echo $ac_n "(cached) $ac_c" 1>&6 +else + if test -n "$RANLIB"; then + ac_cv_prog_RANLIB="$RANLIB" # Let the user override the test. +else + IFS="${IFS= }"; ac_save_ifs="$IFS"; IFS=":" + ac_dummy="$PATH" + for ac_dir in $ac_dummy; do + test -z "$ac_dir" && ac_dir=. + if test -f $ac_dir/$ac_word; then + ac_cv_prog_RANLIB="ranlib" + break + fi + done + IFS="$ac_save_ifs" + test -z "$ac_cv_prog_RANLIB" && ac_cv_prog_RANLIB=":" +fi +fi +RANLIB="$ac_cv_prog_RANLIB" +if test -n "$RANLIB"; then + echo "$ac_t""$RANLIB" 1>&6 +else + echo "$ac_t""no" 1>&6 +fi + +else + RANLIB=":" +fi +fi + +# Extract the first word of "${ac_tool_prefix}strip", so it can be a program name with args. +set dummy ${ac_tool_prefix}strip; ac_word=$2 +echo $ac_n "checking for $ac_word""... $ac_c" 1>&6 +echo "configure:2399: checking for $ac_word" >&5 +if eval "test \"`echo '$''{'ac_cv_prog_STRIP'+set}'`\" = set"; then + echo $ac_n "(cached) $ac_c" 1>&6 +else + if test -n "$STRIP"; then + ac_cv_prog_STRIP="$STRIP" # Let the user override the test. +else + IFS="${IFS= }"; ac_save_ifs="$IFS"; IFS=":" + ac_dummy="$PATH" + for ac_dir in $ac_dummy; do + test -z "$ac_dir" && ac_dir=. + if test -f $ac_dir/$ac_word; then + ac_cv_prog_STRIP="${ac_tool_prefix}strip" + break + fi + done + IFS="$ac_save_ifs" +fi +fi +STRIP="$ac_cv_prog_STRIP" +if test -n "$STRIP"; then + echo "$ac_t""$STRIP" 1>&6 +else + echo "$ac_t""no" 1>&6 +fi + + +if test -z "$ac_cv_prog_STRIP"; then +if test -n "$ac_tool_prefix"; then + # Extract the first word of "strip", so it can be a program name with args. +set dummy strip; ac_word=$2 +echo $ac_n "checking for $ac_word""... $ac_c" 1>&6 +echo "configure:2431: checking for $ac_word" >&5 +if eval "test \"`echo '$''{'ac_cv_prog_STRIP'+set}'`\" = set"; then + echo $ac_n "(cached) $ac_c" 1>&6 +else + if test -n "$STRIP"; then + ac_cv_prog_STRIP="$STRIP" # Let the user override the test. +else + IFS="${IFS= }"; ac_save_ifs="$IFS"; IFS=":" + ac_dummy="$PATH" + for ac_dir in $ac_dummy; do + test -z "$ac_dir" && ac_dir=. + if test -f $ac_dir/$ac_word; then + ac_cv_prog_STRIP="strip" + break + fi + done + IFS="$ac_save_ifs" + test -z "$ac_cv_prog_STRIP" && ac_cv_prog_STRIP=":" +fi +fi +STRIP="$ac_cv_prog_STRIP" +if test -n "$STRIP"; then + echo "$ac_t""$STRIP" 1>&6 +else + echo "$ac_t""no" 1>&6 +fi + +else + STRIP=":" +fi +fi + + +enable_dlopen=no +enable_win32_dll=no + +# Check whether --enable-libtool-lock or --disable-libtool-lock was given. 
+if test "${enable_libtool_lock+set}" = set; then + enableval="$enable_libtool_lock" + : +fi + +test "x$enable_libtool_lock" != xno && enable_libtool_lock=yes + +# Some flags need to be propagated to the compiler or linker for good +# libtool support. +case $host in +*-*-irix6*) + # Find out which ABI we are using. + echo '#line 2480 "configure"' > conftest.$ac_ext + if { (eval echo configure:2481: \"$ac_compile\") 1>&5; (eval $ac_compile) 2>&5; }; then + case `/usr/bin/file conftest.$ac_objext` in + *32-bit*) + LD="${LD-ld} -32" + ;; + *N32*) + LD="${LD-ld} -n32" + ;; + *64-bit*) + LD="${LD-ld} -64" + ;; + esac + fi + rm -rf conftest* + ;; + +*-*-sco3.2v5*) + # On SCO OpenServer 5, we need -belf to get full-featured binaries. + SAVE_CFLAGS="$CFLAGS" + CFLAGS="$CFLAGS -belf" + echo $ac_n "checking whether the C compiler needs -belf""... $ac_c" 1>&6 +echo "configure:2502: checking whether the C compiler needs -belf" >&5 +if eval "test \"`echo '$''{'lt_cv_cc_needs_belf'+set}'`\" = set"; then + echo $ac_n "(cached) $ac_c" 1>&6 +else + + ac_ext=c +# CFLAGS is not in ac_cpp because -g, -O, etc. are not valid cpp options. +ac_cpp='$CPP $CPPFLAGS' +ac_compile='${CC-cc} -c $CFLAGS $CPPFLAGS conftest.$ac_ext 1>&5' +ac_link='${CC-cc} -o conftest${ac_exeext} $CFLAGS $CPPFLAGS $LDFLAGS conftest.$ac_ext $LIBS 1>&5' +cross_compiling=$ac_cv_prog_cc_cross + + cat > conftest.$ac_ext <&5; (eval $ac_link) 2>&5; } && test -s conftest${ac_exeext}; then + rm -rf conftest* + lt_cv_cc_needs_belf=yes +else + echo "configure: failed program was:" >&5 + cat conftest.$ac_ext >&5 + rm -rf conftest* + lt_cv_cc_needs_belf=no +fi +rm -f conftest* + ac_ext=c +# CFLAGS is not in ac_cpp because -g, -O, etc. are not valid cpp options. +ac_cpp='$CPP $CPPFLAGS' +ac_compile='${CC-cc} -c $CFLAGS $CPPFLAGS conftest.$ac_ext 1>&5' +ac_link='${CC-cc} -o conftest${ac_exeext} $CFLAGS $CPPFLAGS $LDFLAGS conftest.$ac_ext $LIBS 1>&5' +cross_compiling=$ac_cv_prog_cc_cross + +fi + +echo "$ac_t""$lt_cv_cc_needs_belf" 1>&6 + if test x"$lt_cv_cc_needs_belf" != x"yes"; then + # this is probably gcc 2.8.0, egcs 1.0 or newer; no need for -belf + CFLAGS="$SAVE_CFLAGS" + fi + ;; + + +esac + +# Sed substitution that helps us do robust quoting. It backslashifies +# metacharacters that are still active within double-quoted strings. +Xsed='sed -e s/^X//' +sed_quote_subst='s/\([\\"\\`$\\\\]\)/\\\1/g' + +# Same as above, but do not quote variable references. +double_quote_subst='s/\([\\"\\`\\\\]\)/\\\1/g' + +# Sed substitution to delay expansion of an escaped shell variable in a +# double_quote_subst'ed string. +delay_variable_subst='s/\\\\\\\\\\\$/\\\\\\$/g' + +# Constants: +rm="rm -f" + +# Global variables: +default_ofile=libtool +can_build_shared=yes + +# All known linkers require a `.a' archive for static linking (except M$VC, +# which needs '.lib'). 
+libext=a +ltmain="$ac_aux_dir/ltmain.sh" +ofile="$default_ofile" +with_gnu_ld="$lt_cv_prog_gnu_ld" +need_locks="$enable_libtool_lock" + +old_CC="$CC" +old_CFLAGS="$CFLAGS" + +# Set sane defaults for various variables +test -z "$AR" && AR=ar +test -z "$AR_FLAGS" && AR_FLAGS=cru +test -z "$AS" && AS=as +test -z "$CC" && CC=cc +test -z "$DLLTOOL" && DLLTOOL=dlltool +test -z "$LD" && LD=ld +test -z "$LN_S" && LN_S="ln -s" +test -z "$MAGIC_CMD" && MAGIC_CMD=file +test -z "$NM" && NM=nm +test -z "$OBJDUMP" && OBJDUMP=objdump +test -z "$RANLIB" && RANLIB=: +test -z "$STRIP" && STRIP=: +test -z "$ac_objext" && ac_objext=o + +if test x"$host" != x"$build"; then + ac_tool_prefix=${host_alias}- +else + ac_tool_prefix= +fi + +# Transform linux* to *-*-linux-gnu*, to support old configure scripts. +case $host_os in +linux-gnu*) ;; +linux*) host=`echo $host | sed 's/^\(.*-.*-linux\)\(.*\)$/\1-gnu\2/'` +esac + +case $host_os in +aix3*) + # AIX sometimes has problems with the GCC collect2 program. For some + # reason, if we set the COLLECT_NAMES environment variable, the problems + # vanish in a puff of smoke. + if test "X${COLLECT_NAMES+set}" != Xset; then + COLLECT_NAMES= + export COLLECT_NAMES + fi + ;; +esac + +# Determine commands to create old-style static archives. +old_archive_cmds='$AR $AR_FLAGS $oldlib$oldobjs$old_deplibs' +old_postinstall_cmds='chmod 644 $oldlib' +old_postuninstall_cmds= + +if test -n "$RANLIB"; then + case $host_os in + openbsd*) + old_postinstall_cmds="\$RANLIB -t \$oldlib~$old_postinstall_cmds" + ;; + *) + old_postinstall_cmds="\$RANLIB \$oldlib~$old_postinstall_cmds" + ;; + esac + old_archive_cmds="$old_archive_cmds~\$RANLIB \$oldlib" +fi + +# Allow CC to be a program name with arguments. +set dummy $CC +compiler="$2" + +## FIXME: this should be a separate macro +## +echo $ac_n "checking for objdir""... $ac_c" 1>&6 +echo "configure:2644: checking for objdir" >&5 +rm -f .libs 2>/dev/null +mkdir .libs 2>/dev/null +if test -d .libs; then + objdir=.libs +else + # MS-DOS does not allow filenames that begin with a dot. + objdir=_libs +fi +rmdir .libs 2>/dev/null +echo "$ac_t""$objdir" 1>&6 +## +## END FIXME + + +## FIXME: this should be a separate macro +## +# Check whether --with-pic or --without-pic was given. +if test "${with_pic+set}" = set; then + withval="$with_pic" + pic_mode="$withval" +else + pic_mode=default +fi + +test -z "$pic_mode" && pic_mode=default + +# We assume here that the value for lt_cv_prog_cc_pic will not be cached +# in isolation, and that seeing it set (from the cache) indicates that +# the associated values are set (in the cache) correctly too. +echo $ac_n "checking for $compiler option to produce PIC""... $ac_c" 1>&6 +echo "configure:2675: checking for $compiler option to produce PIC" >&5 +if eval "test \"`echo '$''{'lt_cv_prog_cc_pic'+set}'`\" = set"; then + echo $ac_n "(cached) $ac_c" 1>&6 +else + lt_cv_prog_cc_pic= + lt_cv_prog_cc_shlib= + lt_cv_prog_cc_wl= + lt_cv_prog_cc_static= + lt_cv_prog_cc_no_builtin= + lt_cv_prog_cc_can_build_shared=$can_build_shared + + if test "$GCC" = yes; then + lt_cv_prog_cc_wl='-Wl,' + lt_cv_prog_cc_static='-static' + + case $host_os in + aix*) + # Below there is a dirty hack to force normal static linking with -ldl + # The problem is because libdl dynamically linked with both libc and + # libC (AIX C++ library), which obviously doesn't included in libraries + # list by gcc. This cause undefined symbols with -static flags. 
+ # This hack allows C programs to be linked with "-static -ldl", but + # not sure about C++ programs. + lt_cv_prog_cc_static="$lt_cv_prog_cc_static ${lt_cv_prog_cc_wl}-lC" + ;; + amigaos*) + # FIXME: we need at least 68020 code to build shared libraries, but + # adding the `-m68020' flag to GCC prevents building anything better, + # like `-m68040'. + lt_cv_prog_cc_pic='-m68020 -resident32 -malways-restore-a4' + ;; + beos* | irix5* | irix6* | osf3* | osf4* | osf5*) + # PIC is the default for these OSes. + ;; + darwin* | rhapsody*) + # PIC is the default on this platform + # Common symbols not allowed in MH_DYLIB files + lt_cv_prog_cc_pic='-fno-common' + ;; + cygwin* | mingw* | pw32* | os2*) + # This hack is so that the source file can tell whether it is being + # built for inclusion in a dll (and should export symbols for example). + lt_cv_prog_cc_pic='-DDLL_EXPORT' + ;; + sysv4*MP*) + if test -d /usr/nec; then + lt_cv_prog_cc_pic=-Kconform_pic + fi + ;; + *) + lt_cv_prog_cc_pic='-fPIC' + ;; + esac + else + # PORTME Check for PIC flags for the system compiler. + case $host_os in + aix3* | aix4* | aix5*) + lt_cv_prog_cc_wl='-Wl,' + # All AIX code is PIC. + if test "$host_cpu" = ia64; then + # AIX 5 now supports IA64 processor + lt_cv_prog_cc_static='-Bstatic' + else + lt_cv_prog_cc_static='-bnso -bI:/lib/syscalls.exp' + fi + ;; + + hpux9* | hpux10* | hpux11*) + # Is there a better lt_cv_prog_cc_static that works with the bundled CC? + lt_cv_prog_cc_wl='-Wl,' + lt_cv_prog_cc_static="${lt_cv_prog_cc_wl}-a ${lt_cv_prog_cc_wl}archive" + lt_cv_prog_cc_pic='+Z' + ;; + + irix5* | irix6*) + lt_cv_prog_cc_wl='-Wl,' + lt_cv_prog_cc_static='-non_shared' + # PIC (with -KPIC) is the default. + ;; + + cygwin* | mingw* | pw32* | os2*) + # This hack is so that the source file can tell whether it is being + # built for inclusion in a dll (and should export symbols for example). + lt_cv_prog_cc_pic='-DDLL_EXPORT' + ;; + + newsos6) + lt_cv_prog_cc_pic='-KPIC' + lt_cv_prog_cc_static='-Bstatic' + ;; + + osf3* | osf4* | osf5*) + # All OSF/1 code is PIC. + lt_cv_prog_cc_wl='-Wl,' + lt_cv_prog_cc_static='-non_shared' + ;; + + sco3.2v5*) + lt_cv_prog_cc_pic='-Kpic' + lt_cv_prog_cc_static='-dn' + lt_cv_prog_cc_shlib='-belf' + ;; + + solaris*) + lt_cv_prog_cc_pic='-KPIC' + lt_cv_prog_cc_static='-Bstatic' + lt_cv_prog_cc_wl='-Wl,' + ;; + + sunos4*) + lt_cv_prog_cc_pic='-PIC' + lt_cv_prog_cc_static='-Bstatic' + lt_cv_prog_cc_wl='-Qoption ld ' + ;; + + sysv4 | sysv4.2uw2* | sysv4.3* | sysv5*) + lt_cv_prog_cc_pic='-KPIC' + lt_cv_prog_cc_static='-Bstatic' + if test "x$host_vendor" = xsni; then + lt_cv_prog_cc_wl='-LD' + else + lt_cv_prog_cc_wl='-Wl,' + fi + ;; + + uts4*) + lt_cv_prog_cc_pic='-pic' + lt_cv_prog_cc_static='-Bstatic' + ;; + + sysv4*MP*) + if test -d /usr/nec ;then + lt_cv_prog_cc_pic='-Kconform_pic' + lt_cv_prog_cc_static='-Bstatic' + fi + ;; + + *) + lt_cv_prog_cc_can_build_shared=no + ;; + esac + fi + +fi + +if test -z "$lt_cv_prog_cc_pic"; then + echo "$ac_t""none" 1>&6 +else + echo "$ac_t""$lt_cv_prog_cc_pic" 1>&6 + + # Check to make sure the pic_flag actually works. + echo $ac_n "checking if $compiler PIC flag $lt_cv_prog_cc_pic works""... 
$ac_c" 1>&6 +echo "configure:2827: checking if $compiler PIC flag $lt_cv_prog_cc_pic works" >&5 + if eval "test \"`echo '$''{'lt_cv_prog_cc_pic_works'+set}'`\" = set"; then + echo $ac_n "(cached) $ac_c" 1>&6 +else + save_CFLAGS="$CFLAGS" + CFLAGS="$CFLAGS $lt_cv_prog_cc_pic -DPIC" + cat > conftest.$ac_ext <&5; (eval $ac_compile) 2>&5; }; then + rm -rf conftest* + case $host_os in + hpux9* | hpux10* | hpux11*) + # On HP-UX, both CC and GCC only warn that PIC is supported... then + # they create non-PIC objects. So, if there were any warnings, we + # assume that PIC is not supported. + if test -s conftest.err; then + lt_cv_prog_cc_pic_works=no + else + lt_cv_prog_cc_pic_works=yes + fi + ;; + *) + lt_cv_prog_cc_pic_works=yes + ;; + esac + +else + echo "configure: failed program was:" >&5 + cat conftest.$ac_ext >&5 + rm -rf conftest* + lt_cv_prog_cc_pic_works=no + +fi +rm -f conftest* + CFLAGS="$save_CFLAGS" + +fi + + + if test "X$lt_cv_prog_cc_pic_works" = Xno; then + lt_cv_prog_cc_pic= + lt_cv_prog_cc_can_build_shared=no + else + lt_cv_prog_cc_pic=" $lt_cv_prog_cc_pic" + fi + + echo "$ac_t""$lt_cv_prog_cc_pic_works" 1>&6 +fi +## +## END FIXME + +# Check for any special shared library compilation flags. +if test -n "$lt_cv_prog_cc_shlib"; then + echo "configure: warning: \`$CC' requires \`$lt_cv_prog_cc_shlib' to build shared libraries" 1>&2 + if echo "$old_CC $old_CFLAGS " | egrep -e "[ ]$lt_cv_prog_cc_shlib[ ]" >/dev/null; then : + else + echo "configure: warning: add \`$lt_cv_prog_cc_shlib' to the CC or CFLAGS env variable and reconfigure" 1>&2 + lt_cv_prog_cc_can_build_shared=no + fi +fi + +## FIXME: this should be a separate macro +## +echo $ac_n "checking if $compiler static flag $lt_cv_prog_cc_static works""... $ac_c" 1>&6 +echo "configure:2897: checking if $compiler static flag $lt_cv_prog_cc_static works" >&5 +if eval "test \"`echo '$''{'lt_cv_prog_cc_static_works'+set}'`\" = set"; then + echo $ac_n "(cached) $ac_c" 1>&6 +else + lt_cv_prog_cc_static_works=no + save_LDFLAGS="$LDFLAGS" + LDFLAGS="$LDFLAGS $lt_cv_prog_cc_static" + cat > conftest.$ac_ext <&5; (eval $ac_link) 2>&5; } && test -s conftest${ac_exeext}; then + rm -rf conftest* + lt_cv_prog_cc_static_works=yes +else + echo "configure: failed program was:" >&5 + cat conftest.$ac_ext >&5 +fi +rm -f conftest* + LDFLAGS="$save_LDFLAGS" + +fi + + +# Belt *and* braces to stop my trousers falling down: +test "X$lt_cv_prog_cc_static_works" = Xno && lt_cv_prog_cc_static= +echo "$ac_t""$lt_cv_prog_cc_static_works" 1>&6 + +pic_flag="$lt_cv_prog_cc_pic" +special_shlib_compile_flags="$lt_cv_prog_cc_shlib" +wl="$lt_cv_prog_cc_wl" +link_static_flag="$lt_cv_prog_cc_static" +no_builtin_flag="$lt_cv_prog_cc_no_builtin" +can_build_shared="$lt_cv_prog_cc_can_build_shared" +## +## END FIXME + + +## FIXME: this should be a separate macro +## +# Check to see if options -o and -c are simultaneously supported by compiler +echo $ac_n "checking if $compiler supports -c -o file.$ac_objext""... $ac_c" 1>&6 +echo "configure:2943: checking if $compiler supports -c -o file.$ac_objext" >&5 +if eval "test \"`echo '$''{'lt_cv_compiler_c_o'+set}'`\" = set"; then + echo $ac_n "(cached) $ac_c" 1>&6 +else + +$rm -r conftest 2>/dev/null +mkdir conftest +cd conftest +echo "int some_variable = 0;" > conftest.$ac_ext +mkdir out +# According to Tom Tromey, Ian Lance Taylor reported there are C compilers +# that will create temporary files in the current directory regardless of +# the output directory. 
Thus, making CWD read-only will cause this test +# to fail, enabling locking or at least warning the user not to do parallel +# builds. +chmod -w . +save_CFLAGS="$CFLAGS" +CFLAGS="$CFLAGS -o out/conftest2.$ac_objext" +compiler_c_o=no +if { (eval echo configure:2962: \"$ac_compile\") 1>&5; (eval $ac_compile) 2>out/conftest.err; } && test -s out/conftest2.$ac_objext; then + # The compiler can only warn and ignore the option if not recognized + # So say no if there are warnings + if test -s out/conftest.err; then + lt_cv_compiler_c_o=no + else + lt_cv_compiler_c_o=yes + fi +else + # Append any errors to the config.log. + cat out/conftest.err 1>&5 + lt_cv_compiler_c_o=no +fi +CFLAGS="$save_CFLAGS" +chmod u+w . +$rm conftest* out/* +rmdir out +cd .. +rmdir conftest +$rm -r conftest 2>/dev/null + +fi + +compiler_c_o=$lt_cv_compiler_c_o +echo "$ac_t""$compiler_c_o" 1>&6 + +if test x"$compiler_c_o" = x"yes"; then + # Check to see if we can write to a .lo + echo $ac_n "checking if $compiler supports -c -o file.lo""... $ac_c" 1>&6 +echo "configure:2991: checking if $compiler supports -c -o file.lo" >&5 + if eval "test \"`echo '$''{'lt_cv_compiler_o_lo'+set}'`\" = set"; then + echo $ac_n "(cached) $ac_c" 1>&6 +else + + lt_cv_compiler_o_lo=no + save_CFLAGS="$CFLAGS" + CFLAGS="$CFLAGS -c -o conftest.lo" + save_objext="$ac_objext" + ac_objext=lo + cat > conftest.$ac_ext <&5; (eval $ac_compile) 2>&5; }; then + rm -rf conftest* + # The compiler can only warn and ignore the option if not recognized + # So say no if there are warnings + if test -s conftest.err; then + lt_cv_compiler_o_lo=no + else + lt_cv_compiler_o_lo=yes + fi + +else + echo "configure: failed program was:" >&5 + cat conftest.$ac_ext >&5 +fi +rm -f conftest* + ac_objext="$save_objext" + CFLAGS="$save_CFLAGS" + +fi + + compiler_o_lo=$lt_cv_compiler_o_lo + echo "$ac_t""$compiler_o_lo" 1>&6 +else + compiler_o_lo=no +fi +## +## END FIXME + +## FIXME: this should be a separate macro +## +# Check to see if we can do hard links to lock some files if needed +hard_links="nottested" +if test "$compiler_c_o" = no && test "$need_locks" != no; then + # do not overwrite the value of need_locks provided by the user + echo $ac_n "checking if we can lock with hard links""... $ac_c" 1>&6 +echo "configure:3044: checking if we can lock with hard links" >&5 + hard_links=yes + $rm conftest* + ln conftest.a conftest.b 2>/dev/null && hard_links=no + touch conftest.a + ln conftest.a conftest.b 2>&5 || hard_links=no + ln conftest.a conftest.b 2>/dev/null && hard_links=no + echo "$ac_t""$hard_links" 1>&6 + if test "$hard_links" = no; then + echo "configure: warning: \`$CC' does not support \`-c -o', so \`make -j' may be unsafe" 1>&2 + need_locks=warn + fi +else + need_locks=no +fi +## +## END FIXME + +## FIXME: this should be a separate macro +## +if test "$GCC" = yes; then + # Check to see if options -fno-rtti -fno-exceptions are supported by compiler + echo $ac_n "checking if $compiler supports -fno-rtti -fno-exceptions""... 
$ac_c" 1>&6 +echo "configure:3067: checking if $compiler supports -fno-rtti -fno-exceptions" >&5 + echo "int some_variable = 0;" > conftest.$ac_ext + save_CFLAGS="$CFLAGS" + CFLAGS="$CFLAGS -fno-rtti -fno-exceptions -c conftest.$ac_ext" + compiler_rtti_exceptions=no + cat > conftest.$ac_ext <&5; (eval $ac_compile) 2>&5; }; then + rm -rf conftest* + # The compiler can only warn and ignore the option if not recognized + # So say no if there are warnings + if test -s conftest.err; then + compiler_rtti_exceptions=no + else + compiler_rtti_exceptions=yes + fi + +else + echo "configure: failed program was:" >&5 + cat conftest.$ac_ext >&5 +fi +rm -f conftest* + CFLAGS="$save_CFLAGS" + echo "$ac_t""$compiler_rtti_exceptions" 1>&6 + + if test "$compiler_rtti_exceptions" = "yes"; then + no_builtin_flag=' -fno-builtin -fno-rtti -fno-exceptions' + else + no_builtin_flag=' -fno-builtin' + fi +fi +## +## END FIXME + +## FIXME: this should be a separate macro +## +# See if the linker supports building shared libraries. +echo $ac_n "checking whether the linker ($LD) supports shared libraries""... $ac_c" 1>&6 +echo "configure:3111: checking whether the linker ($LD) supports shared libraries" >&5 + +allow_undefined_flag= +no_undefined_flag= +need_lib_prefix=unknown +need_version=unknown +# when you set need_version to no, make sure it does not cause -set_version +# flags to be left without arguments +archive_cmds= +archive_expsym_cmds= +old_archive_from_new_cmds= +old_archive_from_expsyms_cmds= +export_dynamic_flag_spec= +whole_archive_flag_spec= +thread_safe_flag_spec= +hardcode_into_libs=no +hardcode_libdir_flag_spec= +hardcode_libdir_separator= +hardcode_direct=no +hardcode_minus_L=no +hardcode_shlibpath_var=unsupported +runpath_var= +link_all_deplibs=unknown +always_export_symbols=no +export_symbols_cmds='$NM $libobjs $convenience | $global_symbol_pipe | sed '\''s/.* //'\'' | sort | uniq > $export_symbols' +# include_expsyms should be a list of space-separated symbols to be *always* +# included in the symbol list +include_expsyms= +# exclude_expsyms can be an egrep regular expression of symbols to exclude +# it will be wrapped by ` (' and `)$', so one must not match beginning or +# end of line. Example: `a|bc|.*d.*' will exclude the symbols `a' and `bc', +# as well as any symbol that contains `d'. +exclude_expsyms="_GLOBAL_OFFSET_TABLE_" +# Although _GLOBAL_OFFSET_TABLE_ is a valid symbol C name, most a.out +# platforms (ab)use it in PIC code, but their linkers get confused if +# the symbol is explicitly referenced. Since portable code cannot +# rely on this symbol name, it's probably fine to never include it in +# preloaded symbol tables. +extract_expsyms_cmds= + +case $host_os in +cygwin* | mingw* | pw32*) + # FIXME: the MSVC++ port hasn't been tested in a loooong time + # When not using gcc, we currently assume that we are using + # Microsoft Visual C++. + if test "$GCC" != yes; then + with_gnu_ld=no + fi + ;; +openbsd*) + with_gnu_ld=no + ;; +esac + +ld_shlibs=yes +if test "$with_gnu_ld" = yes; then + # If archive_cmds runs LD, not CC, wlarc should be empty + wlarc='${wl}' + + # See if GNU ld supports shared libraries. + case $host_os in + aix3* | aix4* | aix5*) + # On AIX, the GNU linker is very broken + # Note:Check GNU linker on AIX 5-IA64 when/if it becomes available. + ld_shlibs=no + cat <&2 + +*** Warning: the GNU linker, at least up to release 2.9.1, is reported +*** to be unable to reliably create shared libraries on AIX. +*** Therefore, libtool is disabling shared libraries support. 
If you +*** really care for shared libraries, you may want to modify your PATH +*** so that a non-GNU linker is found, and then restart. + +EOF + ;; + + amigaos*) + archive_cmds='$rm $output_objdir/a2ixlibrary.data~$echo "#define NAME $libname" > $output_objdir/a2ixlibrary.data~$echo "#define LIBRARY_ID 1" >> $output_objdir/a2ixlibrary.data~$echo "#define VERSION $major" >> $output_objdir/a2ixlibrary.data~$echo "#define REVISION $revision" >> $output_objdir/a2ixlibrary.data~$AR $AR_FLAGS $lib $libobjs~$RANLIB $lib~(cd $output_objdir && a2ixlibrary -32)' + hardcode_libdir_flag_spec='-L$libdir' + hardcode_minus_L=yes + + # Samuel A. Falvo II reports + # that the semantics of dynamic libraries on AmigaOS, at least up + # to version 4, is to share data among multiple programs linked + # with the same dynamic library. Since this doesn't match the + # behavior of shared libraries on other platforms, we can use + # them. + ld_shlibs=no + ;; + + beos*) + if $LD --help 2>&1 | egrep ': supported targets:.* elf' > /dev/null; then + allow_undefined_flag=unsupported + # Joseph Beckenbach says some releases of gcc + # support --undefined. This deserves some investigation. FIXME + archive_cmds='$CC -nostart $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname -o $lib' + else + ld_shlibs=no + fi + ;; + + cygwin* | mingw* | pw32*) + # hardcode_libdir_flag_spec is actually meaningless, as there is + # no search path for DLLs. + hardcode_libdir_flag_spec='-L$libdir' + allow_undefined_flag=unsupported + always_export_symbols=yes + + extract_expsyms_cmds='test -f $output_objdir/impgen.c || \ + sed -e "/^# \/\* impgen\.c starts here \*\//,/^# \/\* impgen.c ends here \*\// { s/^# //;s/^# *$//; p; }" -e d < $''0 > $output_objdir/impgen.c~ + test -f $output_objdir/impgen.exe || (cd $output_objdir && \ + if test "x$HOST_CC" != "x" ; then $HOST_CC -o impgen impgen.c ; \ + else $CC -o impgen impgen.c ; fi)~ + $output_objdir/impgen $dir/$soroot > $output_objdir/$soname-def' + + old_archive_from_expsyms_cmds='$DLLTOOL --as=$AS --dllname $soname --def $output_objdir/$soname-def --output-lib $output_objdir/$newlib' + + # cygwin and mingw dlls have different entry points and sets of symbols + # to exclude. + # FIXME: what about values for MSVC? + dll_entry=__cygwin_dll_entry@12 + dll_exclude_symbols=DllMain@12,_cygwin_dll_entry@12,_cygwin_noncygwin_dll_entry@12~ + case $host_os in + mingw*) + # mingw values + dll_entry=_DllMainCRTStartup@12 + dll_exclude_symbols=DllMain@12,DllMainCRTStartup@12,DllEntryPoint@12~ + ;; + esac + + # mingw and cygwin differ, and it's simplest to just exclude the union + # of the two symbol sets. + dll_exclude_symbols=DllMain@12,_cygwin_dll_entry@12,_cygwin_noncygwin_dll_entry@12,DllMainCRTStartup@12,DllEntryPoint@12 + + # recent cygwin and mingw systems supply a stub DllMain which the user + # can override, but on older systems we have to supply one (in ltdll.c) + if test "x$lt_cv_need_dllmain" = "xyes"; then + ltdll_obj='$output_objdir/$soname-ltdll.'"$ac_objext " + ltdll_cmds='test -f $output_objdir/$soname-ltdll.c || sed -e "/^# \/\* ltdll\.c starts here \*\//,/^# \/\* ltdll.c ends here \*\// { s/^# //; p; }" -e d < $''0 > $output_objdir/$soname-ltdll.c~ + test -f $output_objdir/$soname-ltdll.$ac_objext || (cd $output_objdir && $CC -c $soname-ltdll.c)~' + else + ltdll_obj= + ltdll_cmds= + fi + + # Extract the symbol export list from an `--export-all' def file, + # then regenerate the def file from the symbol export list, so that + # the compiled dll only exports the symbol export list. 
+ # Be careful not to strip the DATA tag left be newer dlltools. + export_symbols_cmds="$ltdll_cmds"' + $DLLTOOL --export-all --exclude-symbols '$dll_exclude_symbols' --output-def $output_objdir/$soname-def '$ltdll_obj'$libobjs $convenience~ + sed -e "1,/EXPORTS/d" -e "s/ @ [0-9]*//" -e "s/ *;.*$//" < $output_objdir/$soname-def > $export_symbols' + + # If the export-symbols file already is a .def file (1st line + # is EXPORTS), use it as is. + # If DATA tags from a recent dlltool are present, honour them! + archive_expsym_cmds='if test "x`head -1 $export_symbols`" = xEXPORTS; then + cp $export_symbols $output_objdir/$soname-def; + else + echo EXPORTS > $output_objdir/$soname-def; + _lt_hint=1; + cat $export_symbols | while read symbol; do + set dummy \$symbol; + case \$# in + 2) echo " \$2 @ \$_lt_hint ; " >> $output_objdir/$soname-def;; + *) echo " \$2 @ \$_lt_hint \$3 ; " >> $output_objdir/$soname-def;; + esac; + _lt_hint=`expr 1 + \$_lt_hint`; + done; + fi~ + '"$ltdll_cmds"' + $CC -Wl,--base-file,$output_objdir/$soname-base '$lt_cv_cc_dll_switch' -Wl,-e,'$dll_entry' -o $output_objdir/$soname '$ltdll_obj'$libobjs $deplibs $compiler_flags~ + $DLLTOOL --as=$AS --dllname $soname --exclude-symbols '$dll_exclude_symbols' --def $output_objdir/$soname-def --base-file $output_objdir/$soname-base --output-exp $output_objdir/$soname-exp~ + $CC -Wl,--base-file,$output_objdir/$soname-base $output_objdir/$soname-exp '$lt_cv_cc_dll_switch' -Wl,-e,'$dll_entry' -o $output_objdir/$soname '$ltdll_obj'$libobjs $deplibs $compiler_flags~ + $DLLTOOL --as=$AS --dllname $soname --exclude-symbols '$dll_exclude_symbols' --def $output_objdir/$soname-def --base-file $output_objdir/$soname-base --output-exp $output_objdir/$soname-exp --output-lib $output_objdir/$libname.dll.a~ + $CC $output_objdir/$soname-exp '$lt_cv_cc_dll_switch' -Wl,-e,'$dll_entry' -o $output_objdir/$soname '$ltdll_obj'$libobjs $deplibs $compiler_flags' + ;; + + netbsd*) + if echo __ELF__ | $CC -E - | grep __ELF__ >/dev/null; then + archive_cmds='$LD -Bshareable $libobjs $deplibs $linker_flags -o $lib' + wlarc= + else + archive_cmds='$CC -shared -nodefaultlibs $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname -o $lib' + archive_expsym_cmds='$CC -shared -nodefaultlibs $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname ${wl}-retain-symbols-file $wl$export_symbols -o $lib' + fi + ;; + + solaris* | sysv5*) + if $LD -v 2>&1 | egrep 'BFD 2\.8' > /dev/null; then + ld_shlibs=no + cat <&2 + +*** Warning: The releases 2.8.* of the GNU linker cannot reliably +*** create shared libraries on Solaris systems. Therefore, libtool +*** is disabling shared libraries support. We urge you to upgrade GNU +*** binutils to release 2.9.1 or newer. Another option is to modify +*** your PATH or compiler configuration so that the native linker is +*** used, and then restart. 
+ +EOF + elif $LD --help 2>&1 | egrep ': supported targets:.* elf' > /dev/null; then + archive_cmds='$CC -shared $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname -o $lib' + archive_expsym_cmds='$CC -shared $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname ${wl}-retain-symbols-file $wl$export_symbols -o $lib' + else + ld_shlibs=no + fi + ;; + + sunos4*) + archive_cmds='$LD -assert pure-text -Bshareable -o $lib $libobjs $deplibs $linker_flags' + wlarc= + hardcode_direct=yes + hardcode_shlibpath_var=no + ;; + + *) + if $LD --help 2>&1 | egrep ': supported targets:.* elf' > /dev/null; then + archive_cmds='$CC -shared $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname -o $lib' + archive_expsym_cmds='$CC -shared $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname ${wl}-retain-symbols-file $wl$export_symbols -o $lib' + else + ld_shlibs=no + fi + ;; + esac + + if test "$ld_shlibs" = yes; then + runpath_var=LD_RUN_PATH + hardcode_libdir_flag_spec='${wl}--rpath ${wl}$libdir' + export_dynamic_flag_spec='${wl}--export-dynamic' + case $host_os in + cygwin* | mingw* | pw32*) + # dlltool doesn't understand --whole-archive et. al. + whole_archive_flag_spec= + ;; + *) + # ancient GNU ld didn't support --whole-archive et. al. + if $LD --help 2>&1 | egrep 'no-whole-archive' > /dev/null; then + whole_archive_flag_spec="$wlarc"'--whole-archive$convenience '"$wlarc"'--no-whole-archive' + else + whole_archive_flag_spec= + fi + ;; + esac + fi +else + # PORTME fill in a description of your system's linker (not GNU ld) + case $host_os in + aix3*) + allow_undefined_flag=unsupported + always_export_symbols=yes + archive_expsym_cmds='$LD -o $output_objdir/$soname $libobjs $deplibs $linker_flags -bE:$export_symbols -T512 -H512 -bM:SRE~$AR $AR_FLAGS $lib $output_objdir/$soname' + # Note: this linker hardcodes the directories in LIBPATH if there + # are no directories specified by -L. + hardcode_minus_L=yes + if test "$GCC" = yes && test -z "$link_static_flag"; then + # Neither direct hardcoding nor static linking is supported with a + # broken collect2. + hardcode_direct=unsupported + fi + ;; + + aix4* | aix5*) + if test "$host_cpu" = ia64; then + # On IA64, the linker does run time linking by default, so we don't + # have to do anything special. + aix_use_runtimelinking=no + exp_sym_flag='-Bexport' + no_entry_flag="" + else + aix_use_runtimelinking=no + + # Test if we are trying to use run time linking or normal + # AIX style linking. If -brtl is somewhere in LDFLAGS, we + # need to do runtime linking. + case $host_os in aix4.[23]|aix4.[23].*|aix5*) + for ld_flag in $LDFLAGS; do + if (test $ld_flag = "-brtl" || test $ld_flag = "-Wl,-brtl"); then + aix_use_runtimelinking=yes + break + fi + done + esac + + exp_sym_flag='-bexport' + no_entry_flag='-bnoentry' + fi + + # When large executables or shared objects are built, AIX ld can + # have problems creating the table of contents. If linking a library + # or program results in "error TOC overflow" add -mminimal-toc to + # CXXFLAGS/CFLAGS for g++/gcc. In the cases where that is not + # enough to fix the problem, add -Wl,-bbigtoc to LDFLAGS. 
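+ # (Illustrative sketch, not generated by libtool: assuming GNU make and
+ # the gcc/g++ flags named just above, the two TOC-overflow workarounds
+ # amount to something like
+ #     make CFLAGS="-mminimal-toc" LDFLAGS="-Wl,-bbigtoc"
+ # before retrying the failed link.)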
+ + hardcode_direct=yes + archive_cmds='' + hardcode_libdir_separator=':' + if test "$GCC" = yes; then + case $host_os in aix4.[012]|aix4.[012].*) + collect2name=`${CC} -print-prog-name=collect2` + if test -f "$collect2name" && \ + strings "$collect2name" | grep resolve_lib_name >/dev/null + then + # We have reworked collect2 + hardcode_direct=yes + else + # We have old collect2 + hardcode_direct=unsupported + # It fails to find uninstalled libraries when the uninstalled + # path is not listed in the libpath. Setting hardcode_minus_L + # to unsupported forces relinking + hardcode_minus_L=yes + hardcode_libdir_flag_spec='-L$libdir' + hardcode_libdir_separator= + fi + esac + + shared_flag='-shared' + else + # not using gcc + if test "$host_cpu" = ia64; then + shared_flag='${wl}-G' + else + if test "$aix_use_runtimelinking" = yes; then + shared_flag='${wl}-G' + else + shared_flag='${wl}-bM:SRE' + fi + fi + fi + + # It seems that -bexpall can do strange things, so it is better to + # generate a list of symbols to export. + always_export_symbols=yes + if test "$aix_use_runtimelinking" = yes; then + # Warning - without using the other runtime loading flags (-brtl), + # -berok will link without error, but may produce a broken library. + allow_undefined_flag='-berok' + hardcode_libdir_flag_spec='${wl}-blibpath:$libdir:/usr/lib:/lib' + archive_expsym_cmds="\$CC"' -o $output_objdir/$soname $libobjs $deplibs $compiler_flags `if test "x${allow_undefined_flag}" != "x"; then echo "${wl}${allow_undefined_flag}"; else :; fi` '"\${wl}$no_entry_flag \${wl}$exp_sym_flag:\$export_symbols $shared_flag" + else + if test "$host_cpu" = ia64; then + hardcode_libdir_flag_spec='${wl}-R $libdir:/usr/lib:/lib' + allow_undefined_flag="-z nodefs" + archive_expsym_cmds="\$CC $shared_flag"' -o $output_objdir/$soname ${wl}-h$soname $libobjs $deplibs $compiler_flags ${wl}${allow_undefined_flag} '"\${wl}$no_entry_flag \${wl}$exp_sym_flag:\$export_symbols" + else + hardcode_libdir_flag_spec='${wl}-bnolibpath ${wl}-blibpath:$libdir:/usr/lib:/lib' + # Warning - without using the other run time loading flags, + # -berok will link without error, but may produce a broken library. + allow_undefined_flag='${wl}-berok' + # This is a bit strange, but is similar to how AIX traditionally builds + # it's shared libraries. + archive_expsym_cmds="\$CC $shared_flag"' -o $output_objdir/$soname $libobjs $deplibs $compiler_flags ${allow_undefined_flag} '"\${wl}$no_entry_flag \${wl}$exp_sym_flag:\$export_symbols"' ~$AR -crlo $objdir/$libname$release.a $objdir/$soname' + fi + fi + ;; + + amigaos*) + archive_cmds='$rm $output_objdir/a2ixlibrary.data~$echo "#define NAME $libname" > $output_objdir/a2ixlibrary.data~$echo "#define LIBRARY_ID 1" >> $output_objdir/a2ixlibrary.data~$echo "#define VERSION $major" >> $output_objdir/a2ixlibrary.data~$echo "#define REVISION $revision" >> $output_objdir/a2ixlibrary.data~$AR $AR_FLAGS $lib $libobjs~$RANLIB $lib~(cd $output_objdir && a2ixlibrary -32)' + hardcode_libdir_flag_spec='-L$libdir' + hardcode_minus_L=yes + # see comment about different semantics on the GNU ld section + ld_shlibs=no + ;; + + cygwin* | mingw* | pw32*) + # When not using gcc, we currently assume that we are using + # Microsoft Visual C++. + # hardcode_libdir_flag_spec is actually meaningless, as there is + # no search path for DLLs. + hardcode_libdir_flag_spec=' ' + allow_undefined_flag=unsupported + # Tell ltmain to make .lib files, not .a files. + libext=lib + # FIXME: Setting linknames here is a bad hack. 
+ archive_cmds='$CC -o $lib $libobjs $compiler_flags `echo "$deplibs" | sed -e '\''s/ -lc$//'\''` -link -dll~linknames=' + # The linker will automatically build a .lib file if we build a DLL. + old_archive_from_new_cmds='true' + # FIXME: Should let the user specify the lib program. + old_archive_cmds='lib /OUT:$oldlib$oldobjs$old_deplibs' + fix_srcfile_path='`cygpath -w "$srcfile"`' + ;; + + darwin* | rhapsody*) + case "$host_os" in + rhapsody* | darwin1.[012]) + allow_undefined_flag='-undefined suppress' + ;; + *) # Darwin 1.3 on + allow_undefined_flag='-flat_namespace -undefined suppress' + ;; + esac + # FIXME: Relying on posixy $() will cause problems for + # cross-compilation, but unfortunately the echo tests do not + # yet detect zsh echo's removal of \ escapes. + archive_cmds='$nonopt $(test "x$module" = xyes && echo -bundle || echo -dynamiclib) $allow_undefined_flag -o $lib $libobjs $deplibs$linker_flags -install_name $rpath/$soname $verstring' + # We need to add '_' to the symbols in $export_symbols first + #archive_expsym_cmds="$archive_cmds"' && strip -s $export_symbols' + hardcode_direct=yes + hardcode_shlibpath_var=no + whole_archive_flag_spec='-all_load $convenience' + ;; + + freebsd1*) + ld_shlibs=no + ;; + + # FreeBSD 2.2.[012] allows us to include c++rt0.o to get C++ constructor + # support. Future versions do this automatically, but an explicit c++rt0.o + # does not break anything, and helps significantly (at the cost of a little + # extra space). + freebsd2.2*) + archive_cmds='$LD -Bshareable -o $lib $libobjs $deplibs $linker_flags /usr/lib/c++rt0.o' + hardcode_libdir_flag_spec='-R$libdir' + hardcode_direct=yes + hardcode_shlibpath_var=no + ;; + + # Unfortunately, older versions of FreeBSD 2 do not have this feature. + freebsd2*) + archive_cmds='$LD -Bshareable -o $lib $libobjs $deplibs $linker_flags' + hardcode_direct=yes + hardcode_minus_L=yes + hardcode_shlibpath_var=no + ;; + + # FreeBSD 3 and greater uses gcc -shared to do shared libraries. + freebsd*) + archive_cmds='$CC -shared -o $lib $libobjs $deplibs $compiler_flags' + hardcode_libdir_flag_spec='-R$libdir' + hardcode_direct=yes + hardcode_shlibpath_var=no + ;; + + hpux9* | hpux10* | hpux11*) + case $host_os in + hpux9*) archive_cmds='$rm $output_objdir/$soname~$LD -b +b $install_libdir -o $output_objdir/$soname $libobjs $deplibs $linker_flags~test $output_objdir/$soname = $lib || mv $output_objdir/$soname $lib' ;; + *) archive_cmds='$LD -b +h $soname +b $install_libdir -o $lib $libobjs $deplibs $linker_flags' ;; + esac + hardcode_libdir_flag_spec='${wl}+b ${wl}$libdir' + hardcode_libdir_separator=: + hardcode_direct=yes + hardcode_minus_L=yes # Not in the search PATH, but as the default + # location of the library. 
+ export_dynamic_flag_spec='${wl}-E' + ;; + + irix5* | irix6*) + if test "$GCC" = yes; then + archive_cmds='$CC -shared $libobjs $deplibs $compiler_flags ${wl}-soname ${wl}$soname `test -n "$verstring" && echo ${wl}-set_version ${wl}$verstring` ${wl}-update_registry ${wl}${output_objdir}/so_locations -o $lib' + else + archive_cmds='$LD -shared $libobjs $deplibs $linker_flags -soname $soname `test -n "$verstring" && echo -set_version $verstring` -update_registry ${output_objdir}/so_locations -o $lib' + fi + hardcode_libdir_flag_spec='${wl}-rpath ${wl}$libdir' + hardcode_libdir_separator=: + link_all_deplibs=yes + ;; + + netbsd*) + if echo __ELF__ | $CC -E - | grep __ELF__ >/dev/null; then + archive_cmds='$LD -Bshareable -o $lib $libobjs $deplibs $linker_flags' # a.out + else + archive_cmds='$LD -shared -o $lib $libobjs $deplibs $linker_flags' # ELF + fi + hardcode_libdir_flag_spec='-R$libdir' + hardcode_direct=yes + hardcode_shlibpath_var=no + ;; + + newsos6) + archive_cmds='$LD -G -h $soname -o $lib $libobjs $deplibs $linker_flags' + hardcode_direct=yes + hardcode_libdir_flag_spec='${wl}-rpath ${wl}$libdir' + hardcode_libdir_separator=: + hardcode_shlibpath_var=no + ;; + + openbsd*) + hardcode_direct=yes + hardcode_shlibpath_var=no + if test -z "`echo __ELF__ | $CC -E - | grep __ELF__`" || test "$host_os-$host_cpu" = "openbsd2.8-powerpc"; then + archive_cmds='$CC -shared $pic_flag -o $lib $libobjs $deplibs $linker_flags' + hardcode_libdir_flag_spec='${wl}-rpath,$libdir' + export_dynamic_flag_spec='${wl}-E' + else + case "$host_os" in + openbsd[01].* | openbsd2.[0-7] | openbsd2.[0-7].*) + archive_cmds='$LD -Bshareable -o $lib $libobjs $deplibs $linker_flags' + hardcode_libdir_flag_spec='-R$libdir' + ;; + *) + archive_cmds='$CC -shared $pic_flag -o $lib $libobjs $deplibs $linker_flags' + hardcode_libdir_flag_spec='${wl}-rpath,$libdir' + ;; + esac + fi + ;; + + os2*) + hardcode_libdir_flag_spec='-L$libdir' + hardcode_minus_L=yes + allow_undefined_flag=unsupported + archive_cmds='$echo "LIBRARY $libname INITINSTANCE" > $output_objdir/$libname.def~$echo "DESCRIPTION \"$libname\"" >> $output_objdir/$libname.def~$echo DATA >> $output_objdir/$libname.def~$echo " SINGLE NONSHARED" >> $output_objdir/$libname.def~$echo EXPORTS >> $output_objdir/$libname.def~emxexp $libobjs >> $output_objdir/$libname.def~$CC -Zdll -Zcrtdll -o $lib $libobjs $deplibs $compiler_flags $output_objdir/$libname.def' + old_archive_from_new_cmds='emximp -o $output_objdir/$libname.a $output_objdir/$libname.def' + ;; + + osf3*) + if test "$GCC" = yes; then + allow_undefined_flag=' ${wl}-expect_unresolved ${wl}\*' + archive_cmds='$CC -shared${allow_undefined_flag} $libobjs $deplibs $compiler_flags ${wl}-soname ${wl}$soname `test -n "$verstring" && echo ${wl}-set_version ${wl}$verstring` ${wl}-update_registry ${wl}${output_objdir}/so_locations -o $lib' + else + allow_undefined_flag=' -expect_unresolved \*' + archive_cmds='$LD -shared${allow_undefined_flag} $libobjs $deplibs $linker_flags -soname $soname `test -n "$verstring" && echo -set_version $verstring` -update_registry ${output_objdir}/so_locations -o $lib' + fi + hardcode_libdir_flag_spec='${wl}-rpath ${wl}$libdir' + hardcode_libdir_separator=: + ;; + + osf4* | osf5*) # as osf3* with the addition of -msym flag + if test "$GCC" = yes; then + allow_undefined_flag=' ${wl}-expect_unresolved ${wl}\*' + archive_cmds='$CC -shared${allow_undefined_flag} $libobjs $deplibs $compiler_flags ${wl}-msym ${wl}-soname ${wl}$soname `test -n "$verstring" && echo ${wl}-set_version 
${wl}$verstring` ${wl}-update_registry ${wl}${output_objdir}/so_locations -o $lib' + hardcode_libdir_flag_spec='${wl}-rpath ${wl}$libdir' + else + allow_undefined_flag=' -expect_unresolved \*' + archive_cmds='$LD -shared${allow_undefined_flag} $libobjs $deplibs $linker_flags -msym -soname $soname `test -n "$verstring" && echo -set_version $verstring` -update_registry ${output_objdir}/so_locations -o $lib' + archive_expsym_cmds='for i in `cat $export_symbols`; do printf "-exported_symbol " >> $lib.exp; echo "\$i" >> $lib.exp; done; echo "-hidden">> $lib.exp~ + $LD -shared${allow_undefined_flag} -input $lib.exp $linker_flags $libobjs $deplibs -soname $soname `test -n "$verstring" && echo -set_version $verstring` -update_registry ${objdir}/so_locations -o $lib~$rm $lib.exp' + + #Both c and cxx compiler support -rpath directly + hardcode_libdir_flag_spec='-rpath $libdir' + fi + hardcode_libdir_separator=: + ;; + + sco3.2v5*) + archive_cmds='$LD -G -h $soname -o $lib $libobjs $deplibs $linker_flags' + hardcode_shlibpath_var=no + runpath_var=LD_RUN_PATH + hardcode_runpath_var=yes + export_dynamic_flag_spec='${wl}-Bexport' + ;; + + solaris*) + # gcc --version < 3.0 without binutils cannot create self contained + # shared libraries reliably, requiring libgcc.a to resolve some of + # the object symbols generated in some cases. Libraries that use + # assert need libgcc.a to resolve __eprintf, for example. Linking + # a copy of libgcc.a into every shared library to guarantee resolving + # such symbols causes other problems: According to Tim Van Holder + # , C++ libraries end up with a separate + # (to the application) exception stack for one thing. + no_undefined_flag=' -z defs' + if test "$GCC" = yes; then + case `$CC --version 2>/dev/null` in + [12].*) + cat <&2 + +*** Warning: Releases of GCC earlier than version 3.0 cannot reliably +*** create self contained shared libraries on Solaris systems, without +*** introducing a dependency on libgcc.a. Therefore, libtool is disabling +*** -no-undefined support, which will at least allow you to build shared +*** libraries. However, you may find that when you link such libraries +*** into an application without using GCC, you have to manually add +*** \`gcc --print-libgcc-file-name\` to the link command. We urge you to +*** upgrade to a newer version of GCC. Another option is to rebuild your +*** current GCC to use the GNU linker from GNU binutils 2.9.1 or newer. + +EOF + no_undefined_flag= + ;; + esac + fi + # $CC -shared without GNU ld will not create a library from C++ + # object files and a static libstdc++, better avoid it by now + archive_cmds='$LD -G${allow_undefined_flag} -h $soname -o $lib $libobjs $deplibs $linker_flags' + archive_expsym_cmds='$echo "{ global:" > $lib.exp~cat $export_symbols | sed -e "s/\(.*\)/\1;/" >> $lib.exp~$echo "local: *; };" >> $lib.exp~ + $LD -G${allow_undefined_flag} -M $lib.exp -h $soname -o $lib $libobjs $deplibs $linker_flags~$rm $lib.exp' + hardcode_libdir_flag_spec='-R$libdir' + hardcode_shlibpath_var=no + case $host_os in + solaris2.[0-5] | solaris2.[0-5].*) ;; + *) # Supported since Solaris 2.6 (maybe 2.5.1?) + whole_archive_flag_spec='-z allextract$convenience -z defaultextract' ;; + esac + link_all_deplibs=yes + ;; + + sunos4*) + if test "x$host_vendor" = xsequent; then + # Use $CC to link under sequent, because it throws in some extra .o + # files that make .init and .fini sections work. 
+ archive_cmds='$CC -G ${wl}-h $soname -o $lib $libobjs $deplibs $compiler_flags' + else + archive_cmds='$LD -assert pure-text -Bstatic -o $lib $libobjs $deplibs $linker_flags' + fi + hardcode_libdir_flag_spec='-L$libdir' + hardcode_direct=yes + hardcode_minus_L=yes + hardcode_shlibpath_var=no + ;; + + sysv4) + if test "x$host_vendor" = xsno; then + archive_cmds='$LD -G -Bsymbolic -h $soname -o $lib $libobjs $deplibs $linker_flags' + hardcode_direct=yes # is this really true??? + else + archive_cmds='$LD -G -h $soname -o $lib $libobjs $deplibs $linker_flags' + hardcode_direct=no #Motorola manual says yes, but my tests say they lie + fi + runpath_var='LD_RUN_PATH' + hardcode_shlibpath_var=no + ;; + + sysv4.3*) + archive_cmds='$LD -G -h $soname -o $lib $libobjs $deplibs $linker_flags' + hardcode_shlibpath_var=no + export_dynamic_flag_spec='-Bexport' + ;; + + sysv5*) + no_undefined_flag=' -z text' + # $CC -shared without GNU ld will not create a library from C++ + # object files and a static libstdc++, better avoid it by now + archive_cmds='$LD -G${allow_undefined_flag} -h $soname -o $lib $libobjs $deplibs $linker_flags' + archive_expsym_cmds='$echo "{ global:" > $lib.exp~cat $export_symbols | sed -e "s/\(.*\)/\1;/" >> $lib.exp~$echo "local: *; };" >> $lib.exp~ + $LD -G${allow_undefined_flag} -M $lib.exp -h $soname -o $lib $libobjs $deplibs $linker_flags~$rm $lib.exp' + hardcode_libdir_flag_spec= + hardcode_shlibpath_var=no + runpath_var='LD_RUN_PATH' + ;; + + uts4*) + archive_cmds='$LD -G -h $soname -o $lib $libobjs $deplibs $linker_flags' + hardcode_libdir_flag_spec='-L$libdir' + hardcode_shlibpath_var=no + ;; + + dgux*) + archive_cmds='$LD -G -h $soname -o $lib $libobjs $deplibs $linker_flags' + hardcode_libdir_flag_spec='-L$libdir' + hardcode_shlibpath_var=no + ;; + + sysv4*MP*) + if test -d /usr/nec; then + archive_cmds='$LD -G -h $soname -o $lib $libobjs $deplibs $linker_flags' + hardcode_shlibpath_var=no + runpath_var=LD_RUN_PATH + hardcode_runpath_var=yes + ld_shlibs=yes + fi + ;; + + sysv4.2uw2*) + archive_cmds='$LD -G -o $lib $libobjs $deplibs $linker_flags' + hardcode_direct=yes + hardcode_minus_L=no + hardcode_shlibpath_var=no + hardcode_runpath_var=yes + runpath_var=LD_RUN_PATH + ;; + + sysv5uw7* | unixware7*) + no_undefined_flag='${wl}-z ${wl}text' + if test "$GCC" = yes; then + archive_cmds='$CC -shared ${wl}-h ${wl}$soname -o $lib $libobjs $deplibs $compiler_flags' + else + archive_cmds='$CC -G ${wl}-h ${wl}$soname -o $lib $libobjs $deplibs $compiler_flags' + fi + runpath_var='LD_RUN_PATH' + hardcode_shlibpath_var=no + ;; + + *) + ld_shlibs=no + ;; + esac +fi +echo "$ac_t""$ld_shlibs" 1>&6 +test "$ld_shlibs" = no && can_build_shared=no +## +## END FIXME + +## FIXME: this should be a separate macro +## +# Check hardcoding attributes. +echo $ac_n "checking how to hardcode library paths into programs""... $ac_c" 1>&6 +echo "configure:3799: checking how to hardcode library paths into programs" >&5 +hardcode_action= +if test -n "$hardcode_libdir_flag_spec" || \ + test -n "$runpath_var"; then + + # We can hardcode non-existant directories. + if test "$hardcode_direct" != no && + # If the only mechanism to avoid hardcoding is shlibpath_var, we + # have to relink, otherwise we might link with an installed library + # when we should be linking with a yet-to-be-installed one + ## test "$hardcode_shlibpath_var" != no && + test "$hardcode_minus_L" != no; then + # Linking always hardcodes the temporary library directory. 
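+ # (Illustrative note, not generated by libtool: "relink" means the
+ # objects are linked again at install time so the temporary build-tree
+ # path does not stay hardcoded; this is also why fast installation is
+ # disabled for the relink case later in this script.)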
+ hardcode_action=relink + else + # We can link without hardcoding, and we can hardcode nonexisting dirs. + hardcode_action=immediate + fi +else + # We cannot hardcode anything, or else we can only hardcode existing + # directories. + hardcode_action=unsupported +fi +echo "$ac_t""$hardcode_action" 1>&6 +## +## END FIXME + +## FIXME: this should be a separate macro +## +striplib= +old_striplib= +echo $ac_n "checking whether stripping libraries is possible""... $ac_c" 1>&6 +echo "configure:3831: checking whether stripping libraries is possible" >&5 +if test -n "$STRIP" && $STRIP -V 2>&1 | grep "GNU strip" >/dev/null; then + test -z "$old_striplib" && old_striplib="$STRIP --strip-debug" + test -z "$striplib" && striplib="$STRIP --strip-unneeded" + echo "$ac_t""yes" 1>&6 +else + echo "$ac_t""no" 1>&6 +fi +## +## END FIXME + +reload_cmds='$LD$reload_flag -o $output$reload_objs' +test -z "$deplibs_check_method" && deplibs_check_method=unknown + +## FIXME: this should be a separate macro +## +# PORTME Fill in your ld.so characteristics +echo $ac_n "checking dynamic linker characteristics""... $ac_c" 1>&6 +echo "configure:3849: checking dynamic linker characteristics" >&5 +library_names_spec= +libname_spec='lib$name' +soname_spec= +postinstall_cmds= +postuninstall_cmds= +finish_cmds= +finish_eval= +shlibpath_var= +shlibpath_overrides_runpath=unknown +version_type=none +dynamic_linker="$host_os ld.so" +sys_lib_dlsearch_path_spec="/lib /usr/lib" +sys_lib_search_path_spec="/lib /usr/lib /usr/local/lib" + +case $host_os in +aix3*) + version_type=linux + library_names_spec='${libname}${release}.so$versuffix $libname.a' + shlibpath_var=LIBPATH + + # AIX has no versioning support, so we append a major version to the name. + soname_spec='${libname}${release}.so$major' + ;; + +aix4* | aix5*) + version_type=linux + if test "$host_cpu" = ia64; then + # AIX 5 supports IA64 + library_names_spec='${libname}${release}.so$major ${libname}${release}.so$versuffix $libname.so' + shlibpath_var=LD_LIBRARY_PATH + else + # With GCC up to 2.95.x, collect2 would create an import file + # for dependence libraries. The import file would start with + # the line `#! .'. This would cause the generated library to + # depend on `.', always an invalid library. This was fixed in + # development snapshots of GCC prior to 3.0. + case $host_os in + aix4 | aix4.[01] | aix4.[01].*) + if { echo '#if __GNUC__ > 2 || (__GNUC__ == 2 && __GNUC_MINOR__ >= 97)' + echo ' yes ' + echo '#endif'; } | ${CC} -E - | grep yes > /dev/null; then + : + else + can_build_shared=no + fi + ;; + esac + # AIX (on Power*) has no versioning support, so currently we can + # not hardcode correct soname into executable. Probably we can + # add versioning support to collect2, so additional links can + # be useful in future. + if test "$aix_use_runtimelinking" = yes; then + # If using run time linking (on AIX 4.2 or later) use lib.so + # instead of lib.a to let people know that these are not + # typical AIX shared libraries. + library_names_spec='${libname}${release}.so$versuffix ${libname}${release}.so$major $libname.so' + else + # We preserve .a as extension for shared libraries through AIX4.2 + # and later when we are not doing run time linking. + library_names_spec='${libname}${release}.a $libname.a' + soname_spec='${libname}${release}.so$major' + fi + shlibpath_var=LIBPATH + fi + ;; + +amigaos*) + library_names_spec='$libname.ixlibrary $libname.a' + # Create ${libname}_ixlibrary.a entries in /sys/libs. 
+ finish_eval='for lib in `ls $libdir/*.ixlibrary 2>/dev/null`; do libname=`$echo "X$lib" | $Xsed -e '\''s%^.*/\([^/]*\)\.ixlibrary$%\1%'\''`; test $rm /sys/libs/${libname}_ixlibrary.a; $show "(cd /sys/libs && $LN_S $lib ${libname}_ixlibrary.a)"; (cd /sys/libs && $LN_S $lib ${libname}_ixlibrary.a) || exit 1; done' + ;; + +beos*) + library_names_spec='${libname}.so' + dynamic_linker="$host_os ld.so" + shlibpath_var=LIBRARY_PATH + ;; + +bsdi4*) + version_type=linux + need_version=no + library_names_spec='${libname}${release}.so$versuffix ${libname}${release}.so$major $libname.so' + soname_spec='${libname}${release}.so$major' + finish_cmds='PATH="\$PATH:/sbin" ldconfig $libdir' + shlibpath_var=LD_LIBRARY_PATH + sys_lib_search_path_spec="/shlib /usr/lib /usr/X11/lib /usr/contrib/lib /lib /usr/local/lib" + sys_lib_dlsearch_path_spec="/shlib /usr/lib /usr/local/lib" + export_dynamic_flag_spec=-rdynamic + # the default ld.so.conf also contains /usr/contrib/lib and + # /usr/X11R6/lib (/usr/X11 is a link to /usr/X11R6), but let us allow + # libtool to hard-code these into programs + ;; + +cygwin* | mingw* | pw32*) + version_type=windows + need_version=no + need_lib_prefix=no + case $GCC,$host_os in + yes,cygwin*) + library_names_spec='$libname.dll.a' + soname_spec='`echo ${libname} | sed -e 's/^lib/cyg/'``echo ${release} | sed -e 's/[.]/-/g'`${versuffix}.dll' + postinstall_cmds='dlpath=`bash 2>&1 -c '\''. $dir/${file}i;echo \$dlname'\''`~ + dldir=$destdir/`dirname \$dlpath`~ + test -d \$dldir || mkdir -p \$dldir~ + $install_prog .libs/$dlname \$dldir/$dlname' + postuninstall_cmds='dldll=`bash 2>&1 -c '\''. $file; echo \$dlname'\''`~ + dlpath=$dir/\$dldll~ + $rm \$dlpath' + ;; + yes,mingw*) + library_names_spec='${libname}`echo ${release} | sed -e 's/[.]/-/g'`${versuffix}.dll' + sys_lib_search_path_spec=`$CC -print-search-dirs | grep "^libraries:" | sed -e "s/^libraries://" -e "s/;/ /g"` + ;; + yes,pw32*) + library_names_spec='`echo ${libname} | sed -e 's/^lib/pw/'``echo ${release} | sed -e 's/./-/g'`${versuffix}.dll' + ;; + *) + library_names_spec='${libname}`echo ${release} | sed -e 's/[.]/-/g'`${versuffix}.dll $libname.lib' + ;; + esac + dynamic_linker='Win32 ld.exe' + # FIXME: first we should search . and the directory the executable is in + shlibpath_var=PATH + ;; + +darwin* | rhapsody*) + dynamic_linker="$host_os dyld" + version_type=darwin + need_lib_prefix=no + need_version=no + # FIXME: Relying on posixy $() will cause problems for + # cross-compilation, but unfortunately the echo tests do not + # yet detect zsh echo's removal of \ escapes. 
+ library_names_spec='${libname}${release}${versuffix}.$(test .$module = .yes && echo so || echo dylib) ${libname}${release}${major}.$(test .$module = .yes && echo so || echo dylib) ${libname}.$(test .$module = .yes && echo so || echo dylib)' + soname_spec='${libname}${release}${major}.$(test .$module = .yes && echo so || echo dylib)' + shlibpath_overrides_runpath=yes + shlibpath_var=DYLD_LIBRARY_PATH + ;; + +freebsd1*) + dynamic_linker=no + ;; + +freebsd*) + objformat=`test -x /usr/bin/objformat && /usr/bin/objformat || echo aout` + version_type=freebsd-$objformat + case $version_type in + freebsd-elf*) + library_names_spec='${libname}${release}.so$versuffix ${libname}${release}.so $libname.so' + need_version=no + need_lib_prefix=no + ;; + freebsd-*) + library_names_spec='${libname}${release}.so$versuffix $libname.so$versuffix' + need_version=yes + ;; + esac + shlibpath_var=LD_LIBRARY_PATH + case $host_os in + freebsd2*) + shlibpath_overrides_runpath=yes + ;; + *) + shlibpath_overrides_runpath=no + hardcode_into_libs=yes + ;; + esac + ;; + +gnu*) + version_type=linux + need_lib_prefix=no + need_version=no + library_names_spec='${libname}${release}.so$versuffix ${libname}${release}.so${major} ${libname}.so' + soname_spec='${libname}${release}.so$major' + shlibpath_var=LD_LIBRARY_PATH + hardcode_into_libs=yes + ;; + +hpux9* | hpux10* | hpux11*) + # Give a soname corresponding to the major version so that dld.sl refuses to + # link against other versions. + dynamic_linker="$host_os dld.sl" + version_type=sunos + need_lib_prefix=no + need_version=no + shlibpath_var=SHLIB_PATH + shlibpath_overrides_runpath=no # +s is required to enable SHLIB_PATH + library_names_spec='${libname}${release}.sl$versuffix ${libname}${release}.sl$major $libname.sl' + soname_spec='${libname}${release}.sl$major' + # HP-UX runs *really* slowly unless shared libraries are mode 555. + postinstall_cmds='chmod 555 $lib' + ;; + +irix5* | irix6*) + version_type=irix + need_lib_prefix=no + need_version=no + soname_spec='${libname}${release}.so$major' + library_names_spec='${libname}${release}.so$versuffix ${libname}${release}.so$major ${libname}${release}.so $libname.so' + case $host_os in + irix5*) + libsuff= shlibsuff= + ;; + *) + case $LD in # libtool.m4 will add one of these switches to LD + *-32|*"-32 ") libsuff= shlibsuff= libmagic=32-bit;; + *-n32|*"-n32 ") libsuff=32 shlibsuff=N32 libmagic=N32;; + *-64|*"-64 ") libsuff=64 shlibsuff=64 libmagic=64-bit;; + *) libsuff= shlibsuff= libmagic=never-match;; + esac + ;; + esac + shlibpath_var=LD_LIBRARY${shlibsuff}_PATH + shlibpath_overrides_runpath=no + sys_lib_search_path_spec="/usr/lib${libsuff} /lib${libsuff} /usr/local/lib${libsuff}" + sys_lib_dlsearch_path_spec="/usr/lib${libsuff} /lib${libsuff}" + ;; + +# No shared lib support for Linux oldld, aout, or coff. +linux-gnuoldld* | linux-gnuaout* | linux-gnucoff*) + dynamic_linker=no + ;; + +# This must be Linux ELF. +linux-gnu*) + version_type=linux + need_lib_prefix=no + need_version=no + library_names_spec='${libname}${release}.so$versuffix ${libname}${release}.so$major $libname.so' + soname_spec='${libname}${release}.so$major' + finish_cmds='PATH="\$PATH:/sbin" ldconfig -n $libdir' + shlibpath_var=LD_LIBRARY_PATH + shlibpath_overrides_runpath=no + # This implies no fast_install, which is unacceptable. + # Some rework will be needed to allow for fast_install + # before this can be enabled. 
+ hardcode_into_libs=yes + + # We used to test for /lib/ld.so.1 and disable shared libraries on + # powerpc, because MkLinux only supported shared libraries with the + # GNU dynamic linker. Since this was broken with cross compilers, + # most powerpc-linux boxes support dynamic linking these days and + # people can always --disable-shared, the test was removed, and we + # assume the GNU/Linux dynamic linker is in use. + dynamic_linker='GNU/Linux ld.so' + ;; + +netbsd*) + version_type=sunos + need_lib_prefix=no + need_version=no + if echo __ELF__ | $CC -E - | grep __ELF__ >/dev/null; then + library_names_spec='${libname}${release}.so$versuffix ${libname}.so$versuffix' + finish_cmds='PATH="\$PATH:/sbin" ldconfig -m $libdir' + dynamic_linker='NetBSD (a.out) ld.so' + else + library_names_spec='${libname}${release}.so$versuffix ${libname}${release}.so$major ${libname}${release}.so ${libname}.so' + soname_spec='${libname}${release}.so$major' + dynamic_linker='NetBSD ld.elf_so' + fi + shlibpath_var=LD_LIBRARY_PATH + shlibpath_overrides_runpath=yes + hardcode_into_libs=yes + ;; + +newsos6) + version_type=linux + library_names_spec='${libname}${release}.so$versuffix ${libname}${release}.so$major $libname.so' + shlibpath_var=LD_LIBRARY_PATH + shlibpath_overrides_runpath=yes + ;; + +openbsd*) + version_type=sunos + need_lib_prefix=no + need_version=no + if test -z "`echo __ELF__ | $CC -E - | grep __ELF__`" || test "$host_os-$host_cpu" = "openbsd2.8-powerpc"; then + case "$host_os" in + openbsd2.[89] | openbsd2.[89].*) + shlibpath_overrides_runpath=no + ;; + *) + shlibpath_overrides_runpath=yes + ;; + esac + else + shlibpath_overrides_runpath=yes + fi + library_names_spec='${libname}${release}.so$versuffix ${libname}.so$versuffix' + finish_cmds='PATH="\$PATH:/sbin" ldconfig -m $libdir' + shlibpath_var=LD_LIBRARY_PATH + ;; + +os2*) + libname_spec='$name' + need_lib_prefix=no + library_names_spec='$libname.dll $libname.a' + dynamic_linker='OS/2 ld.exe' + shlibpath_var=LIBPATH + ;; + +osf3* | osf4* | osf5*) + version_type=osf + need_version=no + soname_spec='${libname}${release}.so' + library_names_spec='${libname}${release}.so$versuffix ${libname}${release}.so $libname.so' + shlibpath_var=LD_LIBRARY_PATH + sys_lib_search_path_spec="/usr/shlib /usr/ccs/lib /usr/lib/cmplrs/cc /usr/lib /usr/local/lib /var/shlib" + sys_lib_dlsearch_path_spec="$sys_lib_search_path_spec" + ;; + +sco3.2v5*) + version_type=osf + soname_spec='${libname}${release}.so$major' + library_names_spec='${libname}${release}.so$versuffix ${libname}${release}.so$major $libname.so' + shlibpath_var=LD_LIBRARY_PATH + ;; + +solaris*) + version_type=linux + need_lib_prefix=no + need_version=no + library_names_spec='${libname}${release}.so$versuffix ${libname}${release}.so$major $libname.so' + soname_spec='${libname}${release}.so$major' + shlibpath_var=LD_LIBRARY_PATH + shlibpath_overrides_runpath=yes + hardcode_into_libs=yes + # ldd complains unless libraries are executable + postinstall_cmds='chmod +x $lib' + ;; + +sunos4*) + version_type=sunos + library_names_spec='${libname}${release}.so$versuffix ${libname}.so$versuffix' + finish_cmds='PATH="\$PATH:/usr/etc" ldconfig $libdir' + shlibpath_var=LD_LIBRARY_PATH + shlibpath_overrides_runpath=yes + if test "$with_gnu_ld" = yes; then + need_lib_prefix=no + fi + need_version=yes + ;; + +sysv4 | sysv4.2uw2* | sysv4.3* | sysv5*) + version_type=linux + library_names_spec='${libname}${release}.so$versuffix ${libname}${release}.so$major $libname.so' + soname_spec='${libname}${release}.so$major' + 
shlibpath_var=LD_LIBRARY_PATH + case $host_vendor in + sni) + shlibpath_overrides_runpath=no + ;; + motorola) + need_lib_prefix=no + need_version=no + shlibpath_overrides_runpath=no + sys_lib_search_path_spec='/lib /usr/lib /usr/ccs/lib' + ;; + esac + ;; + +uts4*) + version_type=linux + library_names_spec='${libname}${release}.so$versuffix ${libname}${release}.so$major $libname.so' + soname_spec='${libname}${release}.so$major' + shlibpath_var=LD_LIBRARY_PATH + ;; + +dgux*) + version_type=linux + need_lib_prefix=no + need_version=no + library_names_spec='${libname}${release}.so$versuffix ${libname}${release}.so$major $libname.so' + soname_spec='${libname}${release}.so$major' + shlibpath_var=LD_LIBRARY_PATH + ;; + +sysv4*MP*) + if test -d /usr/nec ;then + version_type=linux + library_names_spec='$libname.so.$versuffix $libname.so.$major $libname.so' + soname_spec='$libname.so.$major' + shlibpath_var=LD_LIBRARY_PATH + fi + ;; + +*) + dynamic_linker=no + ;; +esac +echo "$ac_t""$dynamic_linker" 1>&6 +test "$dynamic_linker" = no && can_build_shared=no +## +## END FIXME + +## FIXME: this should be a separate macro +## +# Report the final consequences. +echo $ac_n "checking if libtool supports shared libraries""... $ac_c" 1>&6 +echo "configure:4250: checking if libtool supports shared libraries" >&5 +echo "$ac_t""$can_build_shared" 1>&6 +## +## END FIXME + +## FIXME: this should be a separate macro +## +echo $ac_n "checking whether to build shared libraries""... $ac_c" 1>&6 +echo "configure:4258: checking whether to build shared libraries" >&5 +test "$can_build_shared" = "no" && enable_shared=no + +# On AIX, shared libraries and static libraries use the same namespace, and +# are all built from PIC. +case "$host_os" in +aix3*) + test "$enable_shared" = yes && enable_static=no + if test -n "$RANLIB"; then + archive_cmds="$archive_cmds~\$RANLIB \$lib" + postinstall_cmds='$RANLIB $lib' + fi + ;; + +aix4*) + if test "$host_cpu" != ia64 && test "$aix_use_runtimelinking" = no ; then + test "$enable_shared" = yes && enable_static=no + fi + ;; +esac +echo "$ac_t""$enable_shared" 1>&6 +## +## END FIXME + +## FIXME: this should be a separate macro +## +echo $ac_n "checking whether to build static libraries""... $ac_c" 1>&6 +echo "configure:4285: checking whether to build static libraries" >&5 +# Make sure either enable_shared or enable_static is yes. +test "$enable_shared" = yes || enable_static=yes +echo "$ac_t""$enable_static" 1>&6 +## +## END FIXME + +if test "$hardcode_action" = relink; then + # Fast installation is not supported + enable_fast_install=no +elif test "$shlibpath_overrides_runpath" = yes || + test "$enable_shared" = no; then + # Fast installation is not necessary + enable_fast_install=needless +fi + +variables_saved_for_relink="PATH $shlibpath_var $runpath_var" +if test "$GCC" = yes; then + variables_saved_for_relink="$variables_saved_for_relink GCC_EXEC_PREFIX COMPILER_PATH LIBRARY_PATH" +fi + +if test "x$enable_dlopen" != xyes; then + enable_dlopen=unknown + enable_dlopen_self=unknown + enable_dlopen_self_static=unknown +else + lt_cv_dlopen=no + lt_cv_dlopen_libs= + + case $host_os in + beos*) + lt_cv_dlopen="load_add_on" + lt_cv_dlopen_libs= + lt_cv_dlopen_self=yes + ;; + + cygwin* | mingw* | pw32*) + lt_cv_dlopen="LoadLibrary" + lt_cv_dlopen_libs= + ;; + + *) + echo $ac_n "checking for shl_load""... 
$ac_c" 1>&6 +echo "configure:4328: checking for shl_load" >&5 +if eval "test \"`echo '$''{'ac_cv_func_shl_load'+set}'`\" = set"; then + echo $ac_n "(cached) $ac_c" 1>&6 +else + cat > conftest.$ac_ext < +/* Override any gcc2 internal prototype to avoid an error. */ +/* We use char because int might match the return type of a gcc2 + builtin and then its argument prototype would still apply. */ +char shl_load(); + +int main() { + +/* The GNU C library defines this for functions which it implements + to always fail with ENOSYS. Some functions are actually named + something starting with __ and the normal name is an alias. */ +#if defined (__stub_shl_load) || defined (__stub___shl_load) +choke me +#else +shl_load(); +#endif + +; return 0; } +EOF +if { (eval echo configure:4356: \"$ac_link\") 1>&5; (eval $ac_link) 2>&5; } && test -s conftest${ac_exeext}; then + rm -rf conftest* + eval "ac_cv_func_shl_load=yes" +else + echo "configure: failed program was:" >&5 + cat conftest.$ac_ext >&5 + rm -rf conftest* + eval "ac_cv_func_shl_load=no" +fi +rm -f conftest* +fi + +if eval "test \"`echo '$ac_cv_func_'shl_load`\" = yes"; then + echo "$ac_t""yes" 1>&6 + lt_cv_dlopen="shl_load" +else + echo "$ac_t""no" 1>&6 +echo $ac_n "checking for shl_load in -ldld""... $ac_c" 1>&6 +echo "configure:4374: checking for shl_load in -ldld" >&5 +ac_lib_var=`echo dld'_'shl_load | sed 'y%./+-%__p_%'` +if eval "test \"`echo '$''{'ac_cv_lib_$ac_lib_var'+set}'`\" = set"; then + echo $ac_n "(cached) $ac_c" 1>&6 +else + ac_save_LIBS="$LIBS" +LIBS="-ldld $LIBS" +cat > conftest.$ac_ext <&5; (eval $ac_link) 2>&5; } && test -s conftest${ac_exeext}; then + rm -rf conftest* + eval "ac_cv_lib_$ac_lib_var=yes" +else + echo "configure: failed program was:" >&5 + cat conftest.$ac_ext >&5 + rm -rf conftest* + eval "ac_cv_lib_$ac_lib_var=no" +fi +rm -f conftest* +LIBS="$ac_save_LIBS" + +fi +if eval "test \"`echo '$ac_cv_lib_'$ac_lib_var`\" = yes"; then + echo "$ac_t""yes" 1>&6 + lt_cv_dlopen="shl_load" lt_cv_dlopen_libs="-dld" +else + echo "$ac_t""no" 1>&6 +echo $ac_n "checking for dlopen""... $ac_c" 1>&6 +echo "configure:4412: checking for dlopen" >&5 +if eval "test \"`echo '$''{'ac_cv_func_dlopen'+set}'`\" = set"; then + echo $ac_n "(cached) $ac_c" 1>&6 +else + cat > conftest.$ac_ext < +/* Override any gcc2 internal prototype to avoid an error. */ +/* We use char because int might match the return type of a gcc2 + builtin and then its argument prototype would still apply. */ +char dlopen(); + +int main() { + +/* The GNU C library defines this for functions which it implements + to always fail with ENOSYS. Some functions are actually named + something starting with __ and the normal name is an alias. */ +#if defined (__stub_dlopen) || defined (__stub___dlopen) +choke me +#else +dlopen(); +#endif + +; return 0; } +EOF +if { (eval echo configure:4440: \"$ac_link\") 1>&5; (eval $ac_link) 2>&5; } && test -s conftest${ac_exeext}; then + rm -rf conftest* + eval "ac_cv_func_dlopen=yes" +else + echo "configure: failed program was:" >&5 + cat conftest.$ac_ext >&5 + rm -rf conftest* + eval "ac_cv_func_dlopen=no" +fi +rm -f conftest* +fi + +if eval "test \"`echo '$ac_cv_func_'dlopen`\" = yes"; then + echo "$ac_t""yes" 1>&6 + lt_cv_dlopen="dlopen" +else + echo "$ac_t""no" 1>&6 +echo $ac_n "checking for dlopen in -ldl""... 
$ac_c" 1>&6 +echo "configure:4458: checking for dlopen in -ldl" >&5 +ac_lib_var=`echo dl'_'dlopen | sed 'y%./+-%__p_%'` +if eval "test \"`echo '$''{'ac_cv_lib_$ac_lib_var'+set}'`\" = set"; then + echo $ac_n "(cached) $ac_c" 1>&6 +else + ac_save_LIBS="$LIBS" +LIBS="-ldl $LIBS" +cat > conftest.$ac_ext <&5; (eval $ac_link) 2>&5; } && test -s conftest${ac_exeext}; then + rm -rf conftest* + eval "ac_cv_lib_$ac_lib_var=yes" +else + echo "configure: failed program was:" >&5 + cat conftest.$ac_ext >&5 + rm -rf conftest* + eval "ac_cv_lib_$ac_lib_var=no" +fi +rm -f conftest* +LIBS="$ac_save_LIBS" + +fi +if eval "test \"`echo '$ac_cv_lib_'$ac_lib_var`\" = yes"; then + echo "$ac_t""yes" 1>&6 + lt_cv_dlopen="dlopen" lt_cv_dlopen_libs="-ldl" +else + echo "$ac_t""no" 1>&6 +echo $ac_n "checking for dlopen in -lsvld""... $ac_c" 1>&6 +echo "configure:4496: checking for dlopen in -lsvld" >&5 +ac_lib_var=`echo svld'_'dlopen | sed 'y%./+-%__p_%'` +if eval "test \"`echo '$''{'ac_cv_lib_$ac_lib_var'+set}'`\" = set"; then + echo $ac_n "(cached) $ac_c" 1>&6 +else + ac_save_LIBS="$LIBS" +LIBS="-lsvld $LIBS" +cat > conftest.$ac_ext <&5; (eval $ac_link) 2>&5; } && test -s conftest${ac_exeext}; then + rm -rf conftest* + eval "ac_cv_lib_$ac_lib_var=yes" +else + echo "configure: failed program was:" >&5 + cat conftest.$ac_ext >&5 + rm -rf conftest* + eval "ac_cv_lib_$ac_lib_var=no" +fi +rm -f conftest* +LIBS="$ac_save_LIBS" + +fi +if eval "test \"`echo '$ac_cv_lib_'$ac_lib_var`\" = yes"; then + echo "$ac_t""yes" 1>&6 + lt_cv_dlopen="dlopen" lt_cv_dlopen_libs="-lsvld" +else + echo "$ac_t""no" 1>&6 +echo $ac_n "checking for dld_link in -ldld""... $ac_c" 1>&6 +echo "configure:4534: checking for dld_link in -ldld" >&5 +ac_lib_var=`echo dld'_'dld_link | sed 'y%./+-%__p_%'` +if eval "test \"`echo '$''{'ac_cv_lib_$ac_lib_var'+set}'`\" = set"; then + echo $ac_n "(cached) $ac_c" 1>&6 +else + ac_save_LIBS="$LIBS" +LIBS="-ldld $LIBS" +cat > conftest.$ac_ext <&5; (eval $ac_link) 2>&5; } && test -s conftest${ac_exeext}; then + rm -rf conftest* + eval "ac_cv_lib_$ac_lib_var=yes" +else + echo "configure: failed program was:" >&5 + cat conftest.$ac_ext >&5 + rm -rf conftest* + eval "ac_cv_lib_$ac_lib_var=no" +fi +rm -f conftest* +LIBS="$ac_save_LIBS" + +fi +if eval "test \"`echo '$ac_cv_lib_'$ac_lib_var`\" = yes"; then + echo "$ac_t""yes" 1>&6 + lt_cv_dlopen="dld_link" lt_cv_dlopen_libs="-dld" +else + echo "$ac_t""no" 1>&6 +fi + + +fi + + +fi + + +fi + + +fi + + +fi + + ;; + esac + + if test "x$lt_cv_dlopen" != xno; then + enable_dlopen=yes + else + enable_dlopen=no + fi + + case $lt_cv_dlopen in + dlopen) + save_CPPFLAGS="$CPPFLAGS" + test "x$ac_cv_header_dlfcn_h" = xyes && CPPFLAGS="$CPPFLAGS -DHAVE_DLFCN_H" + + save_LDFLAGS="$LDFLAGS" + eval LDFLAGS=\"\$LDFLAGS $export_dynamic_flag_spec\" + + save_LIBS="$LIBS" + LIBS="$lt_cv_dlopen_libs $LIBS" + + echo $ac_n "checking whether a program can dlopen itself""... 
$ac_c" 1>&6 +echo "configure:4609: checking whether a program can dlopen itself" >&5 +if eval "test \"`echo '$''{'lt_cv_dlopen_self'+set}'`\" = set"; then + echo $ac_n "(cached) $ac_c" 1>&6 +else + if test "$cross_compiling" = yes; then : + lt_cv_dlopen_self=cross +else + lt_dlunknown=0; lt_dlno_uscore=1; lt_dlneed_uscore=2 + lt_status=$lt_dlunknown + cat > conftest.$ac_ext < +#endif + +#include + +#ifdef RTLD_GLOBAL +# define LT_DLGLOBAL RTLD_GLOBAL +#else +# ifdef DL_GLOBAL +# define LT_DLGLOBAL DL_GLOBAL +# else +# define LT_DLGLOBAL 0 +# endif +#endif + +/* We may have to define LT_DLLAZY_OR_NOW in the command line if we + find out it does not work in some platform. */ +#ifndef LT_DLLAZY_OR_NOW +# ifdef RTLD_LAZY +# define LT_DLLAZY_OR_NOW RTLD_LAZY +# else +# ifdef DL_LAZY +# define LT_DLLAZY_OR_NOW DL_LAZY +# else +# ifdef RTLD_NOW +# define LT_DLLAZY_OR_NOW RTLD_NOW +# else +# ifdef DL_NOW +# define LT_DLLAZY_OR_NOW DL_NOW +# else +# define LT_DLLAZY_OR_NOW 0 +# endif +# endif +# endif +# endif +#endif + +#ifdef __cplusplus +extern "C" void exit (int); +#endif + +void fnord() { int i=42;} +int main () +{ + void *self = dlopen (0, LT_DLGLOBAL|LT_DLLAZY_OR_NOW); + int status = $lt_dlunknown; + + if (self) + { + if (dlsym (self,"fnord")) status = $lt_dlno_uscore; + else if (dlsym( self,"_fnord")) status = $lt_dlneed_uscore; + /* dlclose (self); */ + } + + exit (status); +} +EOF + if { (eval echo configure:4680: \"$ac_link\") 1>&5; (eval $ac_link) 2>&5; } && test -s conftest${ac_exeext} 2>/dev/null; then + (./conftest; exit; ) 2>/dev/null + lt_status=$? + case x$lt_status in + x$lt_dlno_uscore) lt_cv_dlopen_self=yes ;; + x$lt_dlneed_uscore) lt_cv_dlopen_self=yes ;; + x$lt_unknown|x*) lt_cv_dlopen_self=no ;; + esac + else : + # compilation failed + lt_cv_dlopen_self=no + fi +fi +rm -fr conftest* + + +fi + +echo "$ac_t""$lt_cv_dlopen_self" 1>&6 + + if test "x$lt_cv_dlopen_self" = xyes; then + LDFLAGS="$LDFLAGS $link_static_flag" + echo $ac_n "checking whether a statically linked program can dlopen itself""... $ac_c" 1>&6 +echo "configure:4703: checking whether a statically linked program can dlopen itself" >&5 +if eval "test \"`echo '$''{'lt_cv_dlopen_self_static'+set}'`\" = set"; then + echo $ac_n "(cached) $ac_c" 1>&6 +else + if test "$cross_compiling" = yes; then : + lt_cv_dlopen_self_static=cross +else + lt_dlunknown=0; lt_dlno_uscore=1; lt_dlneed_uscore=2 + lt_status=$lt_dlunknown + cat > conftest.$ac_ext < +#endif + +#include + +#ifdef RTLD_GLOBAL +# define LT_DLGLOBAL RTLD_GLOBAL +#else +# ifdef DL_GLOBAL +# define LT_DLGLOBAL DL_GLOBAL +# else +# define LT_DLGLOBAL 0 +# endif +#endif + +/* We may have to define LT_DLLAZY_OR_NOW in the command line if we + find out it does not work in some platform. 
*/ +#ifndef LT_DLLAZY_OR_NOW +# ifdef RTLD_LAZY +# define LT_DLLAZY_OR_NOW RTLD_LAZY +# else +# ifdef DL_LAZY +# define LT_DLLAZY_OR_NOW DL_LAZY +# else +# ifdef RTLD_NOW +# define LT_DLLAZY_OR_NOW RTLD_NOW +# else +# ifdef DL_NOW +# define LT_DLLAZY_OR_NOW DL_NOW +# else +# define LT_DLLAZY_OR_NOW 0 +# endif +# endif +# endif +# endif +#endif + +#ifdef __cplusplus +extern "C" void exit (int); +#endif + +void fnord() { int i=42;} +int main () +{ + void *self = dlopen (0, LT_DLGLOBAL|LT_DLLAZY_OR_NOW); + int status = $lt_dlunknown; + + if (self) + { + if (dlsym (self,"fnord")) status = $lt_dlno_uscore; + else if (dlsym( self,"_fnord")) status = $lt_dlneed_uscore; + /* dlclose (self); */ + } + + exit (status); +} +EOF + if { (eval echo configure:4774: \"$ac_link\") 1>&5; (eval $ac_link) 2>&5; } && test -s conftest${ac_exeext} 2>/dev/null; then + (./conftest; exit; ) 2>/dev/null + lt_status=$? + case x$lt_status in + x$lt_dlno_uscore) lt_cv_dlopen_self_static=yes ;; + x$lt_dlneed_uscore) lt_cv_dlopen_self_static=yes ;; + x$lt_unknown|x*) lt_cv_dlopen_self_static=no ;; + esac + else : + # compilation failed + lt_cv_dlopen_self_static=no + fi +fi +rm -fr conftest* + + +fi + +echo "$ac_t""$lt_cv_dlopen_self_static" 1>&6 + fi + + CPPFLAGS="$save_CPPFLAGS" + LDFLAGS="$save_LDFLAGS" + LIBS="$save_LIBS" + ;; + esac + + case $lt_cv_dlopen_self in + yes|no) enable_dlopen_self=$lt_cv_dlopen_self ;; + *) enable_dlopen_self=unknown ;; + esac + + case $lt_cv_dlopen_self_static in + yes|no) enable_dlopen_self_static=$lt_cv_dlopen_self_static ;; + *) enable_dlopen_self_static=unknown ;; + esac +fi + + +## FIXME: this should be a separate macro +## +if test "$enable_shared" = yes && test "$GCC" = yes; then + case $archive_cmds in + *'~'*) + # FIXME: we may have to deal with multi-command sequences. + ;; + '$CC '*) + # Test whether the compiler implicitly links with -lc since on some + # systems, -lgcc has to come before -lc. If gcc already passes -lc + # to ld, don't add -lc before -lgcc. + echo $ac_n "checking whether -lc should be explicitly linked in""... $ac_c" 1>&6 +echo "configure:4825: checking whether -lc should be explicitly linked in" >&5 + if eval "test \"`echo '$''{'lt_cv_archive_cmds_need_lc'+set}'`\" = set"; then + echo $ac_n "(cached) $ac_c" 1>&6 +else + $rm conftest* + echo 'static int dummy;' > conftest.$ac_ext + + if { (eval echo configure:4832: \"$ac_compile\") 1>&5; (eval $ac_compile) 2>&5; }; then + soname=conftest + lib=conftest + libobjs=conftest.$ac_objext + deplibs= + wl=$lt_cv_prog_cc_wl + compiler_flags=-v + linker_flags=-v + verstring= + output_objdir=. + libname=conftest + save_allow_undefined_flag=$allow_undefined_flag + allow_undefined_flag= + if { (eval echo configure:4845: \"$archive_cmds 2\>\&1 \| grep \" -lc \" \>/dev/null 2\>\&1\") 1>&5; (eval $archive_cmds 2\>\&1 \| grep \" -lc \" \>/dev/null 2\>\&1) 2>&5; } + then + lt_cv_archive_cmds_need_lc=no + else + lt_cv_archive_cmds_need_lc=yes + fi + allow_undefined_flag=$save_allow_undefined_flag + else + cat conftest.err 1>&5 + fi +fi + + echo "$ac_t""$lt_cv_archive_cmds_need_lc" 1>&6 + ;; + esac +fi +need_lc=${lt_cv_archive_cmds_need_lc-yes} +## +## END FIXME + +## FIXME: this should be a separate macro +## +# The second clause should only fire when bootstrapping the +# libtool distribution, otherwise you forgot to ship ltmain.sh +# with your package, and you will get complaints that there are +# no rules to generate ltmain.sh. 
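+# (Illustrative note, not part of the generated script: if ltmain.sh is
+# genuinely missing at this point, the usual recovery is the mechanism
+# described above, i.e. rerunning these cached tests by hand with
+#     ./config.status --recheck
+# assuming a config.status from an earlier configure run is present.)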
+if test -f "$ltmain"; then + : +else + # If there is no Makefile yet, we rely on a make rule to execute + # `config.status --recheck' to rerun these tests and create the + # libtool script then. + test -f Makefile && make "$ltmain" +fi + +if test -f "$ltmain"; then + trap "$rm \"${ofile}T\"; exit 1" 1 2 15 + $rm -f "${ofile}T" + + echo creating $ofile + + # Now quote all the things that may contain metacharacters while being + # careful not to overquote the AC_SUBSTed values. We take copies of the + # variables and quote the copies for generation of the libtool script. + for var in echo old_CC old_CFLAGS \ + AR AR_FLAGS CC LD LN_S NM SHELL \ + reload_flag reload_cmds wl \ + pic_flag link_static_flag no_builtin_flag export_dynamic_flag_spec \ + thread_safe_flag_spec whole_archive_flag_spec libname_spec \ + library_names_spec soname_spec \ + RANLIB old_archive_cmds old_archive_from_new_cmds old_postinstall_cmds \ + old_postuninstall_cmds archive_cmds archive_expsym_cmds postinstall_cmds \ + postuninstall_cmds extract_expsyms_cmds old_archive_from_expsyms_cmds \ + old_striplib striplib file_magic_cmd export_symbols_cmds \ + deplibs_check_method allow_undefined_flag no_undefined_flag \ + finish_cmds finish_eval global_symbol_pipe global_symbol_to_cdecl \ + global_symbol_to_c_name_address \ + hardcode_libdir_flag_spec hardcode_libdir_separator \ + sys_lib_search_path_spec sys_lib_dlsearch_path_spec \ + compiler_c_o compiler_o_lo need_locks exclude_expsyms include_expsyms; do + + case $var in + reload_cmds | old_archive_cmds | old_archive_from_new_cmds | \ + old_postinstall_cmds | old_postuninstall_cmds | \ + export_symbols_cmds | archive_cmds | archive_expsym_cmds | \ + extract_expsyms_cmds | old_archive_from_expsyms_cmds | \ + postinstall_cmds | postuninstall_cmds | \ + finish_cmds | sys_lib_search_path_spec | sys_lib_dlsearch_path_spec) + # Double-quote double-evaled strings. + eval "lt_$var=\\\"\`\$echo \"X\$$var\" | \$Xsed -e \"\$double_quote_subst\" -e \"\$sed_quote_subst\" -e \"\$delay_variable_subst\"\`\\\"" + ;; + *) + eval "lt_$var=\\\"\`\$echo \"X\$$var\" | \$Xsed -e \"\$sed_quote_subst\"\`\\\"" + ;; + esac + done + + cat <<__EOF__ > "${ofile}T" +#! $SHELL + +# `$echo "$ofile" | sed 's%^.*/%%'` - Provide generalized library-building support services. +# Generated automatically by $PROGRAM (GNU $PACKAGE $VERSION$TIMESTAMP) +# NOTE: Changes made to this file will be lost: look at ltmain.sh. +# +# Copyright (C) 1996-2000 Free Software Foundation, Inc. +# Originally by Gordon Matzigkeit , 1996 +# +# This program is free software; you can redistribute it and/or modify +# it under the terms of the GNU General Public License as published by +# the Free Software Foundation; either version 2 of the License, or +# (at your option) any later version. +# +# This program is distributed in the hope that it will be useful, but +# WITHOUT ANY WARRANTY; without even the implied warranty of +# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU +# General Public License for more details. +# +# You should have received a copy of the GNU General Public License +# along with this program; if not, write to the Free Software +# Foundation, Inc., 59 Temple Place - Suite 330, Boston, MA 02111-1307, USA. +# +# As a special exception to the GNU General Public License, if you +# distribute this file as part of a program that contains a +# configuration script generated by Autoconf, you may include it under +# the same distribution terms that you use for the rest of that program. 
+ +# Sed that helps us avoid accidentally triggering echo(1) options like -n. +Xsed="sed -e s/^X//" + +# The HP-UX ksh and POSIX shell print the target directory to stdout +# if CDPATH is set. +if test "X\${CDPATH+set}" = Xset; then CDPATH=:; export CDPATH; fi + +# ### BEGIN LIBTOOL CONFIG + +# Libtool was configured on host `(hostname || uname -n) 2>/dev/null | sed 1q`: + +# Shell to use when invoking shell scripts. +SHELL=$lt_SHELL + +# Whether or not to build shared libraries. +build_libtool_libs=$enable_shared + +# Whether or not to build static libraries. +build_old_libs=$enable_static + +# Whether or not to add -lc for building shared libraries. +build_libtool_need_lc=$need_lc + +# Whether or not to optimize for fast installation. +fast_install=$enable_fast_install + +# The host system. +host_alias=$host_alias +host=$host + +# An echo program that does not interpret backslashes. +echo=$lt_echo + +# The archiver. +AR=$lt_AR +AR_FLAGS=$lt_AR_FLAGS + +# The default C compiler. +CC=$lt_CC + +# Is the compiler the GNU C compiler? +with_gcc=$GCC + +# The linker used to build libraries. +LD=$lt_LD + +# Whether we need hard or soft links. +LN_S=$lt_LN_S + +# A BSD-compatible nm program. +NM=$lt_NM + +# A symbol stripping program +STRIP=$STRIP + +# Used to examine libraries when file_magic_cmd begins "file" +MAGIC_CMD=$MAGIC_CMD + +# Used on cygwin: DLL creation program. +DLLTOOL="$DLLTOOL" + +# Used on cygwin: object dumper. +OBJDUMP="$OBJDUMP" + +# Used on cygwin: assembler. +AS="$AS" + +# The name of the directory that contains temporary libtool files. +objdir=$objdir + +# How to create reloadable object files. +reload_flag=$lt_reload_flag +reload_cmds=$lt_reload_cmds + +# How to pass a linker flag through the compiler. +wl=$lt_wl + +# Object file suffix (normally "o"). +objext="$ac_objext" + +# Old archive suffix (normally "a"). +libext="$libext" + +# Executable file suffix (normally ""). +exeext="$exeext" + +# Additional compiler flags for building library objects. +pic_flag=$lt_pic_flag +pic_mode=$pic_mode + +# Does compiler simultaneously support -c and -o options? +compiler_c_o=$lt_compiler_c_o + +# Can we write directly to a .lo ? +compiler_o_lo=$lt_compiler_o_lo + +# Must we lock files when doing compilation ? +need_locks=$lt_need_locks + +# Do we need the lib prefix for modules? +need_lib_prefix=$need_lib_prefix + +# Do we need a version for libraries? +need_version=$need_version + +# Whether dlopen is supported. +dlopen_support=$enable_dlopen + +# Whether dlopen of programs is supported. +dlopen_self=$enable_dlopen_self + +# Whether dlopen of statically linked programs is supported. +dlopen_self_static=$enable_dlopen_self_static + +# Compiler flag to prevent dynamic linking. +link_static_flag=$lt_link_static_flag + +# Compiler flag to turn off builtin functions. +no_builtin_flag=$lt_no_builtin_flag + +# Compiler flag to allow reflexive dlopens. +export_dynamic_flag_spec=$lt_export_dynamic_flag_spec + +# Compiler flag to generate shared objects directly from archives. +whole_archive_flag_spec=$lt_whole_archive_flag_spec + +# Compiler flag to generate thread-safe objects. +thread_safe_flag_spec=$lt_thread_safe_flag_spec + +# Library versioning type. +version_type=$version_type + +# Format of library name prefix. +libname_spec=$lt_libname_spec + +# List of archive names. First name is the real one, the rest are links. +# The last name is the one that the linker finds with -lNAME. 
+library_names_spec=$lt_library_names_spec + +# The coded name of the library, if different from the real name. +soname_spec=$lt_soname_spec + +# Commands used to build and install an old-style archive. +RANLIB=$lt_RANLIB +old_archive_cmds=$lt_old_archive_cmds +old_postinstall_cmds=$lt_old_postinstall_cmds +old_postuninstall_cmds=$lt_old_postuninstall_cmds + +# Create an old-style archive from a shared archive. +old_archive_from_new_cmds=$lt_old_archive_from_new_cmds + +# Create a temporary old-style archive to link instead of a shared archive. +old_archive_from_expsyms_cmds=$lt_old_archive_from_expsyms_cmds + +# Commands used to build and install a shared archive. +archive_cmds=$lt_archive_cmds +archive_expsym_cmds=$lt_archive_expsym_cmds +postinstall_cmds=$lt_postinstall_cmds +postuninstall_cmds=$lt_postuninstall_cmds + +# Commands to strip libraries. +old_striplib=$lt_old_striplib +striplib=$lt_striplib + +# Method to check whether dependent libraries are shared objects. +deplibs_check_method=$lt_deplibs_check_method + +# Command to use when deplibs_check_method == file_magic. +file_magic_cmd=$lt_file_magic_cmd + +# Flag that allows shared libraries with undefined symbols to be built. +allow_undefined_flag=$lt_allow_undefined_flag + +# Flag that forces no undefined symbols. +no_undefined_flag=$lt_no_undefined_flag + +# Commands used to finish a libtool library installation in a directory. +finish_cmds=$lt_finish_cmds + +# Same as above, but a single script fragment to be evaled but not shown. +finish_eval=$lt_finish_eval + +# Take the output of nm and produce a listing of raw symbols and C names. +global_symbol_pipe=$lt_global_symbol_pipe + +# Transform the output of nm in a proper C declaration +global_symbol_to_cdecl=$lt_global_symbol_to_cdecl + +# Transform the output of nm in a C name address pair +global_symbol_to_c_name_address=$lt_global_symbol_to_c_name_address + +# This is the shared library runtime path variable. +runpath_var=$runpath_var + +# This is the shared library path variable. +shlibpath_var=$shlibpath_var + +# Is shlibpath searched before the hard-coded library search path? +shlibpath_overrides_runpath=$shlibpath_overrides_runpath + +# How to hardcode a shared library path into an executable. +hardcode_action=$hardcode_action + +# Whether we should hardcode library paths into libraries. +hardcode_into_libs=$hardcode_into_libs + +# Flag to hardcode \$libdir into a binary during linking. +# This must work even if \$libdir does not exist. +hardcode_libdir_flag_spec=$lt_hardcode_libdir_flag_spec + +# Whether we need a single -rpath flag with a separated argument. +hardcode_libdir_separator=$lt_hardcode_libdir_separator + +# Set to yes if using DIR/libNAME.so during linking hardcodes DIR into the +# resulting binary. +hardcode_direct=$hardcode_direct + +# Set to yes if using the -LDIR flag during linking hardcodes DIR into the +# resulting binary. +hardcode_minus_L=$hardcode_minus_L + +# Set to yes if using SHLIBPATH_VAR=DIR during linking hardcodes DIR into +# the resulting binary. +hardcode_shlibpath_var=$hardcode_shlibpath_var + +# Variables whose values should be saved in libtool wrapper scripts and +# restored at relink time. +variables_saved_for_relink="$variables_saved_for_relink" + +# Whether libtool must link a program against all its dependency libraries. 
+link_all_deplibs=$link_all_deplibs + +# Compile-time system search path for libraries +sys_lib_search_path_spec=$lt_sys_lib_search_path_spec + +# Run-time system search path for libraries +sys_lib_dlsearch_path_spec=$lt_sys_lib_dlsearch_path_spec + +# Fix the shell variable \$srcfile for the compiler. +fix_srcfile_path="$fix_srcfile_path" + +# Set to yes if exported symbols are required. +always_export_symbols=$always_export_symbols + +# The commands to list exported symbols. +export_symbols_cmds=$lt_export_symbols_cmds + +# The commands to extract the exported symbol list from a shared archive. +extract_expsyms_cmds=$lt_extract_expsyms_cmds + +# Symbols that should not be listed in the preloaded symbols. +exclude_expsyms=$lt_exclude_expsyms + +# Symbols that must always be exported. +include_expsyms=$lt_include_expsyms + +# ### END LIBTOOL CONFIG + +__EOF__ + + case $host_os in + aix3*) + cat <<\EOF >> "${ofile}T" + +# AIX sometimes has problems with the GCC collect2 program. For some +# reason, if we set the COLLECT_NAMES environment variable, the problems +# vanish in a puff of smoke. +if test "X${COLLECT_NAMES+set}" != Xset; then + COLLECT_NAMES= + export COLLECT_NAMES +fi +EOF + ;; + esac + + case $host_os in + cygwin* | mingw* | pw32* | os2*) + cat <<'EOF' >> "${ofile}T" + # This is a source program that is used to create dlls on Windows + # Don't remove nor modify the starting and closing comments +# /* ltdll.c starts here */ +# #define WIN32_LEAN_AND_MEAN +# #include +# #undef WIN32_LEAN_AND_MEAN +# #include +# +# #ifndef __CYGWIN__ +# # ifdef __CYGWIN32__ +# # define __CYGWIN__ __CYGWIN32__ +# # endif +# #endif +# +# #ifdef __cplusplus +# extern "C" { +# #endif +# BOOL APIENTRY DllMain (HINSTANCE hInst, DWORD reason, LPVOID reserved); +# #ifdef __cplusplus +# } +# #endif +# +# #ifdef __CYGWIN__ +# #include +# DECLARE_CYGWIN_DLL( DllMain ); +# #endif +# HINSTANCE __hDllInstance_base; +# +# BOOL APIENTRY +# DllMain (HINSTANCE hInst, DWORD reason, LPVOID reserved) +# { +# __hDllInstance_base = hInst; +# return TRUE; +# } +# /* ltdll.c ends here */ + # This is a source program that is used to create import libraries + # on Windows for dlls which lack them. Don't remove nor modify the + # starting and closing comments +# /* impgen.c starts here */ +# /* Copyright (C) 1999-2000 Free Software Foundation, Inc. +# +# This file is part of GNU libtool. +# +# This program is free software; you can redistribute it and/or modify +# it under the terms of the GNU General Public License as published by +# the Free Software Foundation; either version 2 of the License, or +# (at your option) any later version. +# +# This program is distributed in the hope that it will be useful, +# but WITHOUT ANY WARRANTY; without even the implied warranty of +# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the +# GNU General Public License for more details. +# +# You should have received a copy of the GNU General Public License +# along with this program; if not, write to the Free Software +# Foundation, Inc., 59 Temple Place - Suite 330, Boston, MA 02111-1307, USA. 
+# */ +# +# #include /* for printf() */ +# #include /* for open(), lseek(), read() */ +# #include /* for O_RDONLY, O_BINARY */ +# #include /* for strdup() */ +# +# /* O_BINARY isn't required (or even defined sometimes) under Unix */ +# #ifndef O_BINARY +# #define O_BINARY 0 +# #endif +# +# static unsigned int +# pe_get16 (fd, offset) +# int fd; +# int offset; +# { +# unsigned char b[2]; +# lseek (fd, offset, SEEK_SET); +# read (fd, b, 2); +# return b[0] + (b[1]<<8); +# } +# +# static unsigned int +# pe_get32 (fd, offset) +# int fd; +# int offset; +# { +# unsigned char b[4]; +# lseek (fd, offset, SEEK_SET); +# read (fd, b, 4); +# return b[0] + (b[1]<<8) + (b[2]<<16) + (b[3]<<24); +# } +# +# static unsigned int +# pe_as32 (ptr) +# void *ptr; +# { +# unsigned char *b = ptr; +# return b[0] + (b[1]<<8) + (b[2]<<16) + (b[3]<<24); +# } +# +# int +# main (argc, argv) +# int argc; +# char *argv[]; +# { +# int dll; +# unsigned long pe_header_offset, opthdr_ofs, num_entries, i; +# unsigned long export_rva, export_size, nsections, secptr, expptr; +# unsigned long name_rvas, nexp; +# unsigned char *expdata, *erva; +# char *filename, *dll_name; +# +# filename = argv[1]; +# +# dll = open(filename, O_RDONLY|O_BINARY); +# if (dll < 1) +# return 1; +# +# dll_name = filename; +# +# for (i=0; filename[i]; i++) +# if (filename[i] == '/' || filename[i] == '\\' || filename[i] == ':') +# dll_name = filename + i +1; +# +# pe_header_offset = pe_get32 (dll, 0x3c); +# opthdr_ofs = pe_header_offset + 4 + 20; +# num_entries = pe_get32 (dll, opthdr_ofs + 92); +# +# if (num_entries < 1) /* no exports */ +# return 1; +# +# export_rva = pe_get32 (dll, opthdr_ofs + 96); +# export_size = pe_get32 (dll, opthdr_ofs + 100); +# nsections = pe_get16 (dll, pe_header_offset + 4 +2); +# secptr = (pe_header_offset + 4 + 20 + +# pe_get16 (dll, pe_header_offset + 4 + 16)); +# +# expptr = 0; +# for (i = 0; i < nsections; i++) +# { +# char sname[8]; +# unsigned long secptr1 = secptr + 40 * i; +# unsigned long vaddr = pe_get32 (dll, secptr1 + 12); +# unsigned long vsize = pe_get32 (dll, secptr1 + 16); +# unsigned long fptr = pe_get32 (dll, secptr1 + 20); +# lseek(dll, secptr1, SEEK_SET); +# read(dll, sname, 8); +# if (vaddr <= export_rva && vaddr+vsize > export_rva) +# { +# expptr = fptr + (export_rva - vaddr); +# if (export_rva + export_size > vaddr + vsize) +# export_size = vsize - (export_rva - vaddr); +# break; +# } +# } +# +# expdata = (unsigned char*)malloc(export_size); +# lseek (dll, expptr, SEEK_SET); +# read (dll, expdata, export_size); +# erva = expdata - export_rva; +# +# nexp = pe_as32 (expdata+24); +# name_rvas = pe_as32 (expdata+32); +# +# printf ("EXPORTS\n"); +# for (i = 0; i> "${ofile}T" || (rm -f "${ofile}T"; exit 1) + + mv -f "${ofile}T" "$ofile" || \ + (rm -f "$ofile" && cp "${ofile}T" "$ofile" && rm -f "${ofile}T") + chmod +x "$ofile" +fi +## +## END FIXME + + + + + +# This can be used to rebuild libtool when needed +LIBTOOL_DEPS="$ac_aux_dir/ltmain.sh" + +# Always use our own libtool. +LIBTOOL='$(SHELL) $(top_builddir)/libtool' + +# Prevent multiple expansion + + + EXTLIB='la' + EXTOBJ='lo' + LIBTOOL='$(top)/libtool' + LIBTOOLCC='$(top)/libtool --mode=compile' + LIBTOOLLD='$(top)/libtool --mode=link' + CCOUTPUT='-c -o $@ $<' +else + +# Make sure we can run config.sub. +if ${CONFIG_SHELL-/bin/sh} $ac_config_sub sun4 >/dev/null 2>&1; then : +else { echo "configure: error: can not run $ac_config_sub" 1>&2; exit 1; } +fi + +echo $ac_n "checking host system type""... 
$ac_c" 1>&6 +echo "configure:5446: checking host system type" >&5 + +host_alias=$host +case "$host_alias" in +NONE) + case $nonopt in + NONE) + if host_alias=`${CONFIG_SHELL-/bin/sh} $ac_config_guess`; then : + else { echo "configure: error: can not guess host type; you must specify one" 1>&2; exit 1; } + fi ;; + *) host_alias=$nonopt ;; + esac ;; +esac + +host=`${CONFIG_SHELL-/bin/sh} $ac_config_sub $host_alias` +host_cpu=`echo $host | sed 's/^\([^-]*\)-\([^-]*\)-\(.*\)$/\1/'` +host_vendor=`echo $host | sed 's/^\([^-]*\)-\([^-]*\)-\(.*\)$/\2/'` +host_os=`echo $host | sed 's/^\([^-]*\)-\([^-]*\)-\(.*\)$/\3/'` +echo "$ac_t""$host" 1>&6 + + EXTLIB='a' + EXTOBJ='o' + LIBTOOL='' + LIBTOOLCC='' + LIBTOOLLD='' + if test x"$compiler_c_o" = xyes ; then + CCOUTPUT='-c -o $@ $<' + else + CCOUTPUT='-c $< && if test x"$(@F)" != x"$@" ; then mv $(@F) $@ ; fi' + fi + +fi + + + + + + + + +# Check whether --with-control-dir or --without-control-dir was given. +if test "${with_control_dir+set}" = set; then + withval="$with_control_dir" + CONTROLDIR=$with_control_dir +else + CONTROLDIR=$prefix/bin/control +fi + + +# Check whether --with-db-dir or --without-db-dir was given. +if test "${with_db_dir+set}" = set; then + withval="$with_db_dir" + DBDIR=$with_db_dir +else + DBDIR=$prefix/db +fi + + +# Check whether --with-doc-dir or --without-doc-dir was given. +if test "${with_doc_dir+set}" = set; then + withval="$with_doc_dir" + DOCDIR=$with_doc_dir +else + DOCDIR=$prefix/doc +fi + + +# Check whether --with-etc-dir or --without-etc-dir was given. +if test "${with_etc_dir+set}" = set; then + withval="$with_etc_dir" + ETCDIR=$with_etc_dir +else + ETCDIR=$prefix/etc +fi + + +# Check whether --with-filter-dir or --without-filter-dir was given. +if test "${with_filter_dir+set}" = set; then + withval="$with_filter_dir" + FILTERDIR=$with_filter_dir +else + FILTERDIR=$prefix/bin/filter +fi + + +# Check whether --with-lib-dir or --without-lib-dir was given. +if test "${with_lib_dir+set}" = set; then + withval="$with_lib_dir" + LIBDIR=$with_lib_dir +else + LIBDIR=$prefix/lib +fi + + +# Check whether --with-log-dir or --without-log-dir was given. +if test "${with_log_dir+set}" = set; then + withval="$with_log_dir" + LOGDIR=$with_log_dir +else + LOGDIR=$prefix/log +fi + + +# Check whether --with-run-dir or --without-run-dir was given. +if test "${with_run_dir+set}" = set; then + withval="$with_run_dir" + RUNDIR=$with_run_dir +else + RUNDIR=$prefix/run +fi + + +# Check whether --with-spool-dir or --without-spool-dir was given. +if test "${with_spool_dir+set}" = set; then + withval="$with_spool_dir" + SPOOLDIR=$with_spool_dir +else + SPOOLDIR=$prefix/spool +fi + + +# Check whether --with-tmp-dir or --without-tmp-dir was given. +if test "${with_tmp_dir+set}" = set; then + withval="$with_tmp_dir" + tmpdir=$with_tmp_dir +else + tmpdir=$prefix/tmp +fi + + + +# Check whether --with-syslog-facility or --without-syslog-facility was given. +if test "${with_syslog_facility+set}" = set; then + withval="$with_syslog_facility" + SYSLOG_FACILITY=$with_syslog_facility +else + SYSLOG_FACILITY=none +fi + + + + +# Check whether --with-news-user or --without-news-user was given. 
+if test "${with_news_user+set}" = set; then + withval="$with_news_user" + NEWSUSER=$with_news_user +else + NEWSUSER=news +fi + + +cat >> confdefs.h <> confdefs.h <> confdefs.h <&2; exit 1; } + fi + fi +fi + + + + + +cat >> confdefs.h <> confdefs.h <> confdefs.h <> confdefs.h <> confdefs.h <> confdefs.h <<\EOF +#define HAVE_INET6 1 +EOF + + fi +fi + + +# Check whether --with-max-sockets or --without-max-sockets was given. +if test "${with_max_sockets+set}" = set; then + withval="$with_max_sockets" + : +else + with_max_sockets=15 +fi + +cat >> confdefs.h <> confdefs.h <<\EOF +#define DO_TAGGED_HASH 1 +EOF + + else + DO_DBZ_TAGGED_HASH=DONT + fi +fi + + + +inn_enable_keywords=0 +# Check whether --enable-keywords or --disable-keywords was given. +if test "${enable_keywords+set}" = set; then + enableval="$enable_keywords" + if test x"$enableval" = xyes ; then + inn_enable_keywords=1 + fi +fi + +cat >> confdefs.h <&2; exit 1; } + fi ;; + no) inn_enable_largefiles=no ;; + *) { echo "configure: error: invalid argument to --enable-largefiles" 1>&2; exit 1; } ;; + esac +fi + + +# Check whether --with-sendmail or --without-sendmail was given. +if test "${with_sendmail+set}" = set; then + withval="$with_sendmail" + SENDMAIL=$with_sendmail +fi + + +# Check whether --with-kerberos or --without-kerberos was given. +if test "${with_kerberos+set}" = set; then + withval="$with_kerberos" + if test x"$with_kerberos" != xno ; then + KRB5_LDFLAGS="-L$with_kerberos/lib" + KRB5_INC="-I$with_kerberos/include" + fi +fi + + +# Check whether --with-perl or --without-perl was given. +if test "${with_perl+set}" = set; then + withval="$with_perl" + case "${withval}" in + yes) DO_PERL=DO + cat >> confdefs.h <<\EOF +#define DO_PERL 1 +EOF + + ;; + no) DO_PERL=DONT ;; + *) { echo "configure: error: invalid argument to --with-perl" 1>&2; exit 1; } ;; + esac +else + DO_PERL=DONT +fi + + +# Check whether --with-python or --without-python was given. +if test "${with_python+set}" = set; then + withval="$with_python" + case "${withval}" in + yes) DO_PYTHON=define + cat >> confdefs.h <<\EOF +#define DO_PYTHON 1 +EOF + + ;; + no) DO_PYTHON=DONT ;; + *) { echo "configure: error: invalid argument to --with-python" 1>&2; exit 1; } ;; + esac +else + DO_PYTHON=DONT +fi + + +HOSTNAME=`hostname 2> /dev/null || uname -n` + + +if test $ac_cv_prog_gcc = yes; then + echo $ac_n "checking whether ${CC-cc} needs -traditional""... $ac_c" 1>&6 +echo "configure:5848: checking whether ${CC-cc} needs -traditional" >&5 +if eval "test \"`echo '$''{'ac_cv_prog_gcc_traditional'+set}'`\" = set"; then + echo $ac_n "(cached) $ac_c" 1>&6 +else + ac_pattern="Autoconf.*'x'" + cat > conftest.$ac_ext < +Autoconf TIOCGETP +EOF +if (eval "$ac_cpp conftest.$ac_ext") 2>&5 | + egrep "$ac_pattern" >/dev/null 2>&1; then + rm -rf conftest* + ac_cv_prog_gcc_traditional=yes +else + rm -rf conftest* + ac_cv_prog_gcc_traditional=no +fi +rm -f conftest* + + + if test $ac_cv_prog_gcc_traditional = no; then + cat > conftest.$ac_ext < +Autoconf TCGETA +EOF +if (eval "$ac_cpp conftest.$ac_ext") 2>&5 | + egrep "$ac_pattern" >/dev/null 2>&1; then + rm -rf conftest* + ac_cv_prog_gcc_traditional=yes +fi +rm -f conftest* + + fi +fi + +echo "$ac_t""$ac_cv_prog_gcc_traditional" 1>&6 + if test $ac_cv_prog_gcc_traditional = yes; then + CC="$CC -traditional" + fi +fi + +# Extract the first word of "flex", so it can be a program name with args. +set dummy flex; ac_word=$2 +echo $ac_n "checking for $ac_word""... 
$ac_c" 1>&6 +echo "configure:5896: checking for $ac_word" >&5 +if eval "test \"`echo '$''{'ac_cv_prog_LEX'+set}'`\" = set"; then + echo $ac_n "(cached) $ac_c" 1>&6 +else + if test -n "$LEX"; then + ac_cv_prog_LEX="$LEX" # Let the user override the test. +else + IFS="${IFS= }"; ac_save_ifs="$IFS"; IFS=":" + ac_dummy="$PATH" + for ac_dir in $ac_dummy; do + test -z "$ac_dir" && ac_dir=. + if test -f $ac_dir/$ac_word; then + ac_cv_prog_LEX="flex" + break + fi + done + IFS="$ac_save_ifs" + test -z "$ac_cv_prog_LEX" && ac_cv_prog_LEX="lex" +fi +fi +LEX="$ac_cv_prog_LEX" +if test -n "$LEX"; then + echo "$ac_t""$LEX" 1>&6 +else + echo "$ac_t""no" 1>&6 +fi + +if test -z "$LEXLIB" +then + case "$LEX" in + flex*) ac_lib=fl ;; + *) ac_lib=l ;; + esac + echo $ac_n "checking for yywrap in -l$ac_lib""... $ac_c" 1>&6 +echo "configure:5930: checking for yywrap in -l$ac_lib" >&5 +ac_lib_var=`echo $ac_lib'_'yywrap | sed 'y%./+-%__p_%'` +if eval "test \"`echo '$''{'ac_cv_lib_$ac_lib_var'+set}'`\" = set"; then + echo $ac_n "(cached) $ac_c" 1>&6 +else + ac_save_LIBS="$LIBS" +LIBS="-l$ac_lib $LIBS" +cat > conftest.$ac_ext <&5; (eval $ac_link) 2>&5; } && test -s conftest${ac_exeext}; then + rm -rf conftest* + eval "ac_cv_lib_$ac_lib_var=yes" +else + echo "configure: failed program was:" >&5 + cat conftest.$ac_ext >&5 + rm -rf conftest* + eval "ac_cv_lib_$ac_lib_var=no" +fi +rm -f conftest* +LIBS="$ac_save_LIBS" + +fi +if eval "test \"`echo '$ac_cv_lib_'$ac_lib_var`\" = yes"; then + echo "$ac_t""yes" 1>&6 + LEXLIB="-l$ac_lib" +else + echo "$ac_t""no" 1>&6 +fi + +fi + +echo $ac_n "checking whether ${MAKE-make} sets \${MAKE}""... $ac_c" 1>&6 +echo "configure:5972: checking whether ${MAKE-make} sets \${MAKE}" >&5 +set dummy ${MAKE-make}; ac_make=`echo "$2" | sed 'y%./+-%__p_%'` +if eval "test \"`echo '$''{'ac_cv_prog_make_${ac_make}_set'+set}'`\" = set"; then + echo $ac_n "(cached) $ac_c" 1>&6 +else + cat > conftestmake <<\EOF +all: + @echo 'ac_maketemp="${MAKE}"' +EOF +# GNU make sometimes prints "make[1]: Entering...", which would confuse us. +eval `${MAKE-make} -f conftestmake 2>/dev/null | grep temp=` +if test -n "$ac_maketemp"; then + eval ac_cv_prog_make_${ac_make}_set=yes +else + eval ac_cv_prog_make_${ac_make}_set=no +fi +rm -f conftestmake +fi +if eval "test \"`echo '$ac_cv_prog_make_'${ac_make}_set`\" = yes"; then + echo "$ac_t""yes" 1>&6 + SET_MAKE= +else + echo "$ac_t""no" 1>&6 + SET_MAKE="MAKE=${MAKE-make}" +fi + +# Extract the first word of "ranlib", so it can be a program name with args. +set dummy ranlib; ac_word=$2 +echo $ac_n "checking for $ac_word""... $ac_c" 1>&6 +echo "configure:6001: checking for $ac_word" >&5 +if eval "test \"`echo '$''{'ac_cv_prog_RANLIB'+set}'`\" = set"; then + echo $ac_n "(cached) $ac_c" 1>&6 +else + if test -n "$RANLIB"; then + ac_cv_prog_RANLIB="$RANLIB" # Let the user override the test. +else + IFS="${IFS= }"; ac_save_ifs="$IFS"; IFS=":" + ac_dummy="$PATH" + for ac_dir in $ac_dummy; do + test -z "$ac_dir" && ac_dir=. + if test -f $ac_dir/$ac_word; then + ac_cv_prog_RANLIB="ranlib" + break + fi + done + IFS="$ac_save_ifs" + test -z "$ac_cv_prog_RANLIB" && ac_cv_prog_RANLIB=":" +fi +fi +RANLIB="$ac_cv_prog_RANLIB" +if test -n "$RANLIB"; then + echo "$ac_t""$RANLIB" 1>&6 +else + echo "$ac_t""no" 1>&6 +fi + +for ac_prog in 'bison -y' byacc +do +# Extract the first word of "$ac_prog", so it can be a program name with args. +set dummy $ac_prog; ac_word=$2 +echo $ac_n "checking for $ac_word""... 
$ac_c" 1>&6 +echo "configure:6033: checking for $ac_word" >&5 +if eval "test \"`echo '$''{'ac_cv_prog_YACC'+set}'`\" = set"; then + echo $ac_n "(cached) $ac_c" 1>&6 +else + if test -n "$YACC"; then + ac_cv_prog_YACC="$YACC" # Let the user override the test. +else + IFS="${IFS= }"; ac_save_ifs="$IFS"; IFS=":" + ac_dummy="$PATH" + for ac_dir in $ac_dummy; do + test -z "$ac_dir" && ac_dir=. + if test -f $ac_dir/$ac_word; then + ac_cv_prog_YACC="$ac_prog" + break + fi + done + IFS="$ac_save_ifs" +fi +fi +YACC="$ac_cv_prog_YACC" +if test -n "$YACC"; then + echo "$ac_t""$YACC" 1>&6 +else + echo "$ac_t""no" 1>&6 +fi + +test -n "$YACC" && break +done +test -n "$YACC" || YACC="yacc" + + +case "$CPP" in +*-traditional-cpp*) + CFLAGS="-traditional-cpp $CFLAGS" + ;; +esac + +case "$host" in + +*hpux*) + if test x"$GCC" != xyes ; then + CFLAGS="$CFLAGS -Ae" + + case "$CFLAGS" in + *-g*) + LDFLAGS="$LDFLAGS -g" + ;; + esac + fi + ;; + +*darwin*) + LDFLAGS="$LDFLAGS -multiply_defined suppress" + ;; + +*UnixWare*|*unixware*|*-sco3*) + if test x"$GCC" != xyes ; then + CFLAGS="$CFLAGS -Kalloca" + fi +esac + + +# Extract the first word of "ctags", so it can be a program name with args. +set dummy ctags; ac_word=$2 +echo $ac_n "checking for $ac_word""... $ac_c" 1>&6 +echo "configure:6098: checking for $ac_word" >&5 +if eval "test \"`echo '$''{'ac_cv_path_CTAGS'+set}'`\" = set"; then + echo $ac_n "(cached) $ac_c" 1>&6 +else + case "$CTAGS" in + /*) + ac_cv_path_CTAGS="$CTAGS" # Let the user override the test with a path. + ;; + ?:/*) + ac_cv_path_CTAGS="$CTAGS" # Let the user override the test with a dos path. + ;; + *) + IFS="${IFS= }"; ac_save_ifs="$IFS"; IFS=":" + ac_dummy="$PATH" + for ac_dir in $ac_dummy; do + test -z "$ac_dir" && ac_dir=. + if test -f $ac_dir/$ac_word; then + ac_cv_path_CTAGS="$ac_dir/$ac_word" + break + fi + done + IFS="$ac_save_ifs" + test -z "$ac_cv_path_CTAGS" && ac_cv_path_CTAGS="echo" + ;; +esac +fi +CTAGS="$ac_cv_path_CTAGS" +if test -n "$CTAGS"; then + echo "$ac_t""$CTAGS" 1>&6 +else + echo "$ac_t""no" 1>&6 +fi + +if test x"$CTAGS" != xecho ; then + CTAGS="$CTAGS -t -w" +fi + + + +# Extract the first word of "awk", so it can be a program name with args. +set dummy awk; ac_word=$2 +echo $ac_n "checking for $ac_word""... $ac_c" 1>&6 +echo "configure:6140: checking for $ac_word" >&5 +if eval "test \"`echo '$''{'ac_cv_path__PATH_AWK'+set}'`\" = set"; then + echo $ac_n "(cached) $ac_c" 1>&6 +else + case "$_PATH_AWK" in + /*) + ac_cv_path__PATH_AWK="$_PATH_AWK" # Let the user override the test with a path. + ;; + ?:/*) + ac_cv_path__PATH_AWK="$_PATH_AWK" # Let the user override the test with a dos path. + ;; + *) + IFS="${IFS= }"; ac_save_ifs="$IFS"; IFS=":" + ac_dummy="$PATH" + for ac_dir in $ac_dummy; do + test -z "$ac_dir" && ac_dir=. + if test -f $ac_dir/$ac_word; then + ac_cv_path__PATH_AWK="$ac_dir/$ac_word" + break + fi + done + IFS="$ac_save_ifs" + ;; +esac +fi +_PATH_AWK="$ac_cv_path__PATH_AWK" +if test -n "$_PATH_AWK"; then + echo "$ac_t""$_PATH_AWK" 1>&6 +else + echo "$ac_t""no" 1>&6 +fi + +if test -z "${_PATH_AWK}" ; then + { echo "configure: error: awk was not found in path and is required" 1>&2; exit 1; } +fi +# Extract the first word of "egrep", so it can be a program name with args. +set dummy egrep; ac_word=$2 +echo $ac_n "checking for $ac_word""... 
$ac_c" 1>&6 +echo "configure:6178: checking for $ac_word" >&5 +if eval "test \"`echo '$''{'ac_cv_path__PATH_EGREP'+set}'`\" = set"; then + echo $ac_n "(cached) $ac_c" 1>&6 +else + case "$_PATH_EGREP" in + /*) + ac_cv_path__PATH_EGREP="$_PATH_EGREP" # Let the user override the test with a path. + ;; + ?:/*) + ac_cv_path__PATH_EGREP="$_PATH_EGREP" # Let the user override the test with a dos path. + ;; + *) + IFS="${IFS= }"; ac_save_ifs="$IFS"; IFS=":" + ac_dummy="$PATH" + for ac_dir in $ac_dummy; do + test -z "$ac_dir" && ac_dir=. + if test -f $ac_dir/$ac_word; then + ac_cv_path__PATH_EGREP="$ac_dir/$ac_word" + break + fi + done + IFS="$ac_save_ifs" + ;; +esac +fi +_PATH_EGREP="$ac_cv_path__PATH_EGREP" +if test -n "$_PATH_EGREP"; then + echo "$ac_t""$_PATH_EGREP" 1>&6 +else + echo "$ac_t""no" 1>&6 +fi + +if test -z "${_PATH_EGREP}" ; then + { echo "configure: error: egrep was not found in path and is required" 1>&2; exit 1; } +fi +# Extract the first word of "perl", so it can be a program name with args. +set dummy perl; ac_word=$2 +echo $ac_n "checking for $ac_word""... $ac_c" 1>&6 +echo "configure:6216: checking for $ac_word" >&5 +if eval "test \"`echo '$''{'ac_cv_path__PATH_PERL'+set}'`\" = set"; then + echo $ac_n "(cached) $ac_c" 1>&6 +else + case "$_PATH_PERL" in + /*) + ac_cv_path__PATH_PERL="$_PATH_PERL" # Let the user override the test with a path. + ;; + ?:/*) + ac_cv_path__PATH_PERL="$_PATH_PERL" # Let the user override the test with a dos path. + ;; + *) + IFS="${IFS= }"; ac_save_ifs="$IFS"; IFS=":" + ac_dummy="$PATH" + for ac_dir in $ac_dummy; do + test -z "$ac_dir" && ac_dir=. + if test -f $ac_dir/$ac_word; then + ac_cv_path__PATH_PERL="$ac_dir/$ac_word" + break + fi + done + IFS="$ac_save_ifs" + ;; +esac +fi +_PATH_PERL="$ac_cv_path__PATH_PERL" +if test -n "$_PATH_PERL"; then + echo "$ac_t""$_PATH_PERL" 1>&6 +else + echo "$ac_t""no" 1>&6 +fi + +if test -z "${_PATH_PERL}" ; then + { echo "configure: error: perl was not found in path and is required" 1>&2; exit 1; } +fi +# Extract the first word of "sh", so it can be a program name with args. +set dummy sh; ac_word=$2 +echo $ac_n "checking for $ac_word""... $ac_c" 1>&6 +echo "configure:6254: checking for $ac_word" >&5 +if eval "test \"`echo '$''{'ac_cv_path__PATH_SH'+set}'`\" = set"; then + echo $ac_n "(cached) $ac_c" 1>&6 +else + case "$_PATH_SH" in + /*) + ac_cv_path__PATH_SH="$_PATH_SH" # Let the user override the test with a path. + ;; + ?:/*) + ac_cv_path__PATH_SH="$_PATH_SH" # Let the user override the test with a dos path. + ;; + *) + IFS="${IFS= }"; ac_save_ifs="$IFS"; IFS=":" + ac_dummy="$PATH" + for ac_dir in $ac_dummy; do + test -z "$ac_dir" && ac_dir=. + if test -f $ac_dir/$ac_word; then + ac_cv_path__PATH_SH="$ac_dir/$ac_word" + break + fi + done + IFS="$ac_save_ifs" + ;; +esac +fi +_PATH_SH="$ac_cv_path__PATH_SH" +if test -n "$_PATH_SH"; then + echo "$ac_t""$_PATH_SH" 1>&6 +else + echo "$ac_t""no" 1>&6 +fi + +if test -z "${_PATH_SH}" ; then + { echo "configure: error: sh was not found in path and is required" 1>&2; exit 1; } +fi +# Extract the first word of "sed", so it can be a program name with args. +set dummy sed; ac_word=$2 +echo $ac_n "checking for $ac_word""... $ac_c" 1>&6 +echo "configure:6292: checking for $ac_word" >&5 +if eval "test \"`echo '$''{'ac_cv_path__PATH_SED'+set}'`\" = set"; then + echo $ac_n "(cached) $ac_c" 1>&6 +else + case "$_PATH_SED" in + /*) + ac_cv_path__PATH_SED="$_PATH_SED" # Let the user override the test with a path. 
+ ;; + ?:/*) + ac_cv_path__PATH_SED="$_PATH_SED" # Let the user override the test with a dos path. + ;; + *) + IFS="${IFS= }"; ac_save_ifs="$IFS"; IFS=":" + ac_dummy="$PATH" + for ac_dir in $ac_dummy; do + test -z "$ac_dir" && ac_dir=. + if test -f $ac_dir/$ac_word; then + ac_cv_path__PATH_SED="$ac_dir/$ac_word" + break + fi + done + IFS="$ac_save_ifs" + ;; +esac +fi +_PATH_SED="$ac_cv_path__PATH_SED" +if test -n "$_PATH_SED"; then + echo "$ac_t""$_PATH_SED" 1>&6 +else + echo "$ac_t""no" 1>&6 +fi + +if test -z "${_PATH_SED}" ; then + { echo "configure: error: sed was not found in path and is required" 1>&2; exit 1; } +fi +# Extract the first word of "sort", so it can be a program name with args. +set dummy sort; ac_word=$2 +echo $ac_n "checking for $ac_word""... $ac_c" 1>&6 +echo "configure:6330: checking for $ac_word" >&5 +if eval "test \"`echo '$''{'ac_cv_path__PATH_SORT'+set}'`\" = set"; then + echo $ac_n "(cached) $ac_c" 1>&6 +else + case "$_PATH_SORT" in + /*) + ac_cv_path__PATH_SORT="$_PATH_SORT" # Let the user override the test with a path. + ;; + ?:/*) + ac_cv_path__PATH_SORT="$_PATH_SORT" # Let the user override the test with a dos path. + ;; + *) + IFS="${IFS= }"; ac_save_ifs="$IFS"; IFS=":" + ac_dummy="$PATH" + for ac_dir in $ac_dummy; do + test -z "$ac_dir" && ac_dir=. + if test -f $ac_dir/$ac_word; then + ac_cv_path__PATH_SORT="$ac_dir/$ac_word" + break + fi + done + IFS="$ac_save_ifs" + ;; +esac +fi +_PATH_SORT="$ac_cv_path__PATH_SORT" +if test -n "$_PATH_SORT"; then + echo "$ac_t""$_PATH_SORT" 1>&6 +else + echo "$ac_t""no" 1>&6 +fi + +if test -z "${_PATH_SORT}" ; then + { echo "configure: error: sort was not found in path and is required" 1>&2; exit 1; } +fi +for ac_prog in uux +do +# Extract the first word of "$ac_prog", so it can be a program name with args. +set dummy $ac_prog; ac_word=$2 +echo $ac_n "checking for $ac_word""... $ac_c" 1>&6 +echo "configure:6370: checking for $ac_word" >&5 +if eval "test \"`echo '$''{'ac_cv_path__PATH_UUX'+set}'`\" = set"; then + echo $ac_n "(cached) $ac_c" 1>&6 +else + case "$_PATH_UUX" in + /*) + ac_cv_path__PATH_UUX="$_PATH_UUX" # Let the user override the test with a path. + ;; + ?:/*) + ac_cv_path__PATH_UUX="$_PATH_UUX" # Let the user override the test with a dos path. + ;; + *) + IFS="${IFS= }"; ac_save_ifs="$IFS"; IFS=":" + ac_dummy="$PATH" + for ac_dir in $ac_dummy; do + test -z "$ac_dir" && ac_dir=. + if test -f $ac_dir/$ac_word; then + ac_cv_path__PATH_UUX="$ac_dir/$ac_word" + break + fi + done + IFS="$ac_save_ifs" + ;; +esac +fi +_PATH_UUX="$ac_cv_path__PATH_UUX" +if test -n "$_PATH_UUX"; then + echo "$ac_t""$_PATH_UUX" 1>&6 +else + echo "$ac_t""no" 1>&6 +fi + +test -n "$_PATH_UUX" && break +done +test -n "$_PATH_UUX" || _PATH_UUX="uux" + + +inn_perl_command='print $]' + + +echo $ac_n "checking for Perl version""... $ac_c" 1>&6 +echo "configure:6411: checking for Perl version" >&5 +if eval "test \"`echo '$''{'inn_cv_perl_version'+set}'`\" = set"; then + echo $ac_n "(cached) $ac_c" 1>&6 +else + if $_PATH_PERL -e 'require 5.004_03;' > /dev/null 2>&1 ; then + inn_cv_perl_version=`$_PATH_PERL -e "$inn_perl_command"` +else + { echo "configure: error: Perl 5.004_03 or greater is required" 1>&2; exit 1; } +fi +fi + +echo "$ac_t""$inn_cv_perl_version" 1>&6 + +pgpverify=true +for ac_prog in gpgv +do +# Extract the first word of "$ac_prog", so it can be a program name with args. +set dummy $ac_prog; ac_word=$2 +echo $ac_n "checking for $ac_word""... 
$ac_c" 1>&6 +echo "configure:6430: checking for $ac_word" >&5 +if eval "test \"`echo '$''{'ac_cv_path_PATH_GPGV'+set}'`\" = set"; then + echo $ac_n "(cached) $ac_c" 1>&6 +else + case "$PATH_GPGV" in + /*) + ac_cv_path_PATH_GPGV="$PATH_GPGV" # Let the user override the test with a path. + ;; + ?:/*) + ac_cv_path_PATH_GPGV="$PATH_GPGV" # Let the user override the test with a dos path. + ;; + *) + IFS="${IFS= }"; ac_save_ifs="$IFS"; IFS=":" + ac_dummy="$PATH" + for ac_dir in $ac_dummy; do + test -z "$ac_dir" && ac_dir=. + if test -f $ac_dir/$ac_word; then + ac_cv_path_PATH_GPGV="$ac_dir/$ac_word" + break + fi + done + IFS="$ac_save_ifs" + ;; +esac +fi +PATH_GPGV="$ac_cv_path_PATH_GPGV" +if test -n "$PATH_GPGV"; then + echo "$ac_t""$PATH_GPGV" 1>&6 +else + echo "$ac_t""no" 1>&6 +fi + +test -n "$PATH_GPGV" && break +done + +for ac_prog in pgpv pgp pgpgpg +do +# Extract the first word of "$ac_prog", so it can be a program name with args. +set dummy $ac_prog; ac_word=$2 +echo $ac_n "checking for $ac_word""... $ac_c" 1>&6 +echo "configure:6470: checking for $ac_word" >&5 +if eval "test \"`echo '$''{'ac_cv_path__PATH_PGP'+set}'`\" = set"; then + echo $ac_n "(cached) $ac_c" 1>&6 +else + case "$_PATH_PGP" in + /*) + ac_cv_path__PATH_PGP="$_PATH_PGP" # Let the user override the test with a path. + ;; + ?:/*) + ac_cv_path__PATH_PGP="$_PATH_PGP" # Let the user override the test with a dos path. + ;; + *) + IFS="${IFS= }"; ac_save_ifs="$IFS"; IFS=":" + ac_dummy="$PATH" + for ac_dir in $ac_dummy; do + test -z "$ac_dir" && ac_dir=. + if test -f $ac_dir/$ac_word; then + ac_cv_path__PATH_PGP="$ac_dir/$ac_word" + break + fi + done + IFS="$ac_save_ifs" + ;; +esac +fi +_PATH_PGP="$ac_cv_path__PATH_PGP" +if test -n "$_PATH_PGP"; then + echo "$ac_t""$_PATH_PGP" 1>&6 +else + echo "$ac_t""no" 1>&6 +fi + +test -n "$_PATH_PGP" && break +done + +if test -z "$_PATH_PGP" && test -z "$PATH_GPGV" ; then + pgpverify=false +fi + + +for ac_prog in wget ncftpget ncftp +do +# Extract the first word of "$ac_prog", so it can be a program name with args. +set dummy $ac_prog; ac_word=$2 +echo $ac_n "checking for $ac_word""... $ac_c" 1>&6 +echo "configure:6515: checking for $ac_word" >&5 +if eval "test \"`echo '$''{'ac_cv_path_GETFTP'+set}'`\" = set"; then + echo $ac_n "(cached) $ac_c" 1>&6 +else + case "$GETFTP" in + /*) + ac_cv_path_GETFTP="$GETFTP" # Let the user override the test with a path. + ;; + ?:/*) + ac_cv_path_GETFTP="$GETFTP" # Let the user override the test with a dos path. + ;; + *) + IFS="${IFS= }"; ac_save_ifs="$IFS"; IFS=":" + ac_dummy="$PATH" + for ac_dir in $ac_dummy; do + test -z "$ac_dir" && ac_dir=. + if test -f $ac_dir/$ac_word; then + ac_cv_path_GETFTP="$ac_dir/$ac_word" + break + fi + done + IFS="$ac_save_ifs" + ;; +esac +fi +GETFTP="$ac_cv_path_GETFTP" +if test -n "$GETFTP"; then + echo "$ac_t""$GETFTP" 1>&6 +else + echo "$ac_t""no" 1>&6 +fi + +test -n "$GETFTP" && break +done +test -n "$GETFTP" || GETFTP="$prefix/bin/simpleftp" + + +case "$LOG_COMPRESS" in +compress|gzip) ;; +*) # Extract the first word of ""$LOG_COMPRESS"", so it can be a program name with args. +set dummy "$LOG_COMPRESS"; ac_word=$2 +echo $ac_n "checking for $ac_word""... $ac_c" 1>&6 +echo "configure:6557: checking for $ac_word" >&5 +if eval "test \"`echo '$''{'ac_cv_path_LOG_COMPRESS'+set}'`\" = set"; then + echo $ac_n "(cached) $ac_c" 1>&6 +else + case "$LOG_COMPRESS" in + /*) + ac_cv_path_LOG_COMPRESS="$LOG_COMPRESS" # Let the user override the test with a path. 
+ ;; + ?:/*) + ac_cv_path_LOG_COMPRESS="$LOG_COMPRESS" # Let the user override the test with a dos path. + ;; + *) + IFS="${IFS= }"; ac_save_ifs="$IFS"; IFS=":" + ac_dummy="$PATH" + for ac_dir in $ac_dummy; do + test -z "$ac_dir" && ac_dir=. + if test -f $ac_dir/$ac_word; then + ac_cv_path_LOG_COMPRESS="$ac_dir/$ac_word" + break + fi + done + IFS="$ac_save_ifs" + ;; +esac +fi +LOG_COMPRESS="$ac_cv_path_LOG_COMPRESS" +if test -n "$LOG_COMPRESS"; then + echo "$ac_t""$LOG_COMPRESS" 1>&6 +else + echo "$ac_t""no" 1>&6 +fi + +if test -z "${LOG_COMPRESS}" ; then + { echo "configure: error: "$LOG_COMPRESS" was not found in path and is required" 1>&2; exit 1; } +fi +esac +# Extract the first word of "compress", so it can be a program name with args. +set dummy compress; ac_word=$2 +echo $ac_n "checking for $ac_word""... $ac_c" 1>&6 +echo "configure:6596: checking for $ac_word" >&5 +if eval "test \"`echo '$''{'ac_cv_path_COMPRESS'+set}'`\" = set"; then + echo $ac_n "(cached) $ac_c" 1>&6 +else + case "$COMPRESS" in + /*) + ac_cv_path_COMPRESS="$COMPRESS" # Let the user override the test with a path. + ;; + ?:/*) + ac_cv_path_COMPRESS="$COMPRESS" # Let the user override the test with a dos path. + ;; + *) + IFS="${IFS= }"; ac_save_ifs="$IFS"; IFS=":" + ac_dummy="$PATH" + for ac_dir in $ac_dummy; do + test -z "$ac_dir" && ac_dir=. + if test -f $ac_dir/$ac_word; then + ac_cv_path_COMPRESS="$ac_dir/$ac_word" + break + fi + done + IFS="$ac_save_ifs" + test -z "$ac_cv_path_COMPRESS" && ac_cv_path_COMPRESS="compress" + ;; +esac +fi +COMPRESS="$ac_cv_path_COMPRESS" +if test -n "$COMPRESS"; then + echo "$ac_t""$COMPRESS" 1>&6 +else + echo "$ac_t""no" 1>&6 +fi + +if test x"$LOG_COMPRESS" = xcompress ; then + if test x"$COMPRESS" = xcompress ; then + { echo "configure: error: compress not found but specified for log compression" 1>&2; exit 1; } + fi + LOG_COMPRESS="$COMPRESS" +fi +# Extract the first word of "gzip", so it can be a program name with args. +set dummy gzip; ac_word=$2 +echo $ac_n "checking for $ac_word""... $ac_c" 1>&6 +echo "configure:6638: checking for $ac_word" >&5 +if eval "test \"`echo '$''{'ac_cv_path_GZIP'+set}'`\" = set"; then + echo $ac_n "(cached) $ac_c" 1>&6 +else + case "$GZIP" in + /*) + ac_cv_path_GZIP="$GZIP" # Let the user override the test with a path. + ;; + ?:/*) + ac_cv_path_GZIP="$GZIP" # Let the user override the test with a dos path. + ;; + *) + IFS="${IFS= }"; ac_save_ifs="$IFS"; IFS=":" + ac_dummy="$PATH" + for ac_dir in $ac_dummy; do + test -z "$ac_dir" && ac_dir=. + if test -f $ac_dir/$ac_word; then + ac_cv_path_GZIP="$ac_dir/$ac_word" + break + fi + done + IFS="$ac_save_ifs" + test -z "$ac_cv_path_GZIP" && ac_cv_path_GZIP="gzip" + ;; +esac +fi +GZIP="$ac_cv_path_GZIP" +if test -n "$GZIP"; then + echo "$ac_t""$GZIP" 1>&6 +else + echo "$ac_t""no" 1>&6 +fi + +if test x"$LOG_COMPRESS" = xgzip ; then + if test x"$GZIP" = xgzip ; then + { echo "configure: error: gzip not found but specified for log compression" 1>&2; exit 1; } + fi + LOG_COMPRESS="$GZIP" +fi + +if test x"$COMPRESS" = xcompress && test x"$GZIP" != xgzip ; then + UNCOMPRESS="$GZIP -d" +else + UNCOMPRESS="$COMPRESS -d" +fi + + +if test "${with_sendmail+set}" = set ; then + echo $ac_n "checking for sendmail""... $ac_c" 1>&6 +echo "configure:6687: checking for sendmail" >&5 + echo "$ac_t""$SENDMAIL" 1>&6 +else + # Extract the first word of "sendmail", so it can be a program name with args. +set dummy sendmail; ac_word=$2 +echo $ac_n "checking for $ac_word""... 
$ac_c" 1>&6 +echo "configure:6693: checking for $ac_word" >&5 +if eval "test \"`echo '$''{'ac_cv_path_SENDMAIL'+set}'`\" = set"; then + echo $ac_n "(cached) $ac_c" 1>&6 +else + case "$SENDMAIL" in + /*) + ac_cv_path_SENDMAIL="$SENDMAIL" # Let the user override the test with a path. + ;; + ?:/*) + ac_cv_path_SENDMAIL="$SENDMAIL" # Let the user override the test with a dos path. + ;; + *) + IFS="${IFS= }"; ac_save_ifs="$IFS"; IFS=":" + ac_dummy=""/usr/sbin:/usr/lib"" + for ac_dir in $ac_dummy; do + test -z "$ac_dir" && ac_dir=. + if test -f $ac_dir/$ac_word; then + ac_cv_path_SENDMAIL="$ac_dir/$ac_word" + break + fi + done + IFS="$ac_save_ifs" + ;; +esac +fi +SENDMAIL="$ac_cv_path_SENDMAIL" +if test -n "$SENDMAIL"; then + echo "$ac_t""$SENDMAIL" 1>&6 +else + echo "$ac_t""no" 1>&6 +fi + + if test -z "$SENDMAIL" ; then + { echo "configure: error: sendmail not found" 1>&2; exit 1; } + fi +fi + +# Extract the first word of "uustat", so it can be a program name with args. +set dummy uustat; ac_word=$2 +echo $ac_n "checking for $ac_word""... $ac_c" 1>&6 +echo "configure:6733: checking for $ac_word" >&5 +if eval "test \"`echo '$''{'ac_cv_prog_HAVE_UUSTAT'+set}'`\" = set"; then + echo $ac_n "(cached) $ac_c" 1>&6 +else + if test -n "$HAVE_UUSTAT"; then + ac_cv_prog_HAVE_UUSTAT="$HAVE_UUSTAT" # Let the user override the test. +else + IFS="${IFS= }"; ac_save_ifs="$IFS"; IFS=":" + ac_dummy="$PATH" + for ac_dir in $ac_dummy; do + test -z "$ac_dir" && ac_dir=. + if test -f $ac_dir/$ac_word; then + ac_cv_prog_HAVE_UUSTAT="DO" + break + fi + done + IFS="$ac_save_ifs" + test -z "$ac_cv_prog_HAVE_UUSTAT" && ac_cv_prog_HAVE_UUSTAT="DONT" +fi +fi +HAVE_UUSTAT="$ac_cv_prog_HAVE_UUSTAT" +if test -n "$HAVE_UUSTAT"; then + echo "$ac_t""$HAVE_UUSTAT" 1>&6 +else + echo "$ac_t""no" 1>&6 +fi + + + +if test x"$DO_PYTHON" = xdefine ; then + # Extract the first word of "python", so it can be a program name with args. +set dummy python; ac_word=$2 +echo $ac_n "checking for $ac_word""... $ac_c" 1>&6 +echo "configure:6766: checking for $ac_word" >&5 +if eval "test \"`echo '$''{'ac_cv_path__PATH_PYTHON'+set}'`\" = set"; then + echo $ac_n "(cached) $ac_c" 1>&6 +else + case "$_PATH_PYTHON" in + /*) + ac_cv_path__PATH_PYTHON="$_PATH_PYTHON" # Let the user override the test with a path. + ;; + ?:/*) + ac_cv_path__PATH_PYTHON="$_PATH_PYTHON" # Let the user override the test with a dos path. + ;; + *) + IFS="${IFS= }"; ac_save_ifs="$IFS"; IFS=":" + ac_dummy="$PATH" + for ac_dir in $ac_dummy; do + test -z "$ac_dir" && ac_dir=. + if test -f $ac_dir/$ac_word; then + ac_cv_path__PATH_PYTHON="$ac_dir/$ac_word" + break + fi + done + IFS="$ac_save_ifs" + ;; +esac +fi +_PATH_PYTHON="$ac_cv_path__PATH_PYTHON" +if test -n "$_PATH_PYTHON"; then + echo "$ac_t""$_PATH_PYTHON" 1>&6 +else + echo "$ac_t""no" 1>&6 +fi + +if test -z "${_PATH_PYTHON}" ; then + { echo "configure: error: python was not found in path and is required" 1>&2; exit 1; } +fi +fi + + + + + +echo $ac_n "checking for library containing setproctitle""... 
$ac_c" 1>&6 +echo "configure:6808: checking for library containing setproctitle" >&5 +if eval "test \"`echo '$''{'ac_cv_search_setproctitle'+set}'`\" = set"; then + echo $ac_n "(cached) $ac_c" 1>&6 +else + ac_func_search_save_LIBS="$LIBS" +ac_cv_search_setproctitle="no" +cat > conftest.$ac_ext <&5; (eval $ac_link) 2>&5; } && test -s conftest${ac_exeext}; then + rm -rf conftest* + ac_cv_search_setproctitle="none required" +else + echo "configure: failed program was:" >&5 + cat conftest.$ac_ext >&5 +fi +rm -f conftest* +test "$ac_cv_search_setproctitle" = "no" && for i in util; do +LIBS="-l$i $ac_func_search_save_LIBS" +cat > conftest.$ac_ext <&5; (eval $ac_link) 2>&5; } && test -s conftest${ac_exeext}; then + rm -rf conftest* + ac_cv_search_setproctitle="-l$i" +break +else + echo "configure: failed program was:" >&5 + cat conftest.$ac_ext >&5 +fi +rm -f conftest* +done +LIBS="$ac_func_search_save_LIBS" +fi + +echo "$ac_t""$ac_cv_search_setproctitle" 1>&6 +if test "$ac_cv_search_setproctitle" != "no"; then + test "$ac_cv_search_setproctitle" = "none required" || LIBS="$ac_cv_search_setproctitle $LIBS" + cat >> confdefs.h <<\EOF +#define HAVE_SETPROCTITLE 1 +EOF + +else : + LIBOBJS="$LIBOBJS setproctitle.o" + for ac_func in pstat +do +echo $ac_n "checking for $ac_func""... $ac_c" 1>&6 +echo "configure:6873: checking for $ac_func" >&5 +if eval "test \"`echo '$''{'ac_cv_func_$ac_func'+set}'`\" = set"; then + echo $ac_n "(cached) $ac_c" 1>&6 +else + cat > conftest.$ac_ext < +/* Override any gcc2 internal prototype to avoid an error. */ +/* We use char because int might match the return type of a gcc2 + builtin and then its argument prototype would still apply. */ +char $ac_func(); + +int main() { + +/* The GNU C library defines this for functions which it implements + to always fail with ENOSYS. Some functions are actually named + something starting with __ and the normal name is an alias. */ +#if defined (__stub_$ac_func) || defined (__stub___$ac_func) +choke me +#else +$ac_func(); +#endif + +; return 0; } +EOF +if { (eval echo configure:6901: \"$ac_link\") 1>&5; (eval $ac_link) 2>&5; } && test -s conftest${ac_exeext}; then + rm -rf conftest* + eval "ac_cv_func_$ac_func=yes" +else + echo "configure: failed program was:" >&5 + cat conftest.$ac_ext >&5 + rm -rf conftest* + eval "ac_cv_func_$ac_func=no" +fi +rm -f conftest* +fi + +if eval "test \"`echo '$ac_cv_func_'$ac_func`\" = yes"; then + echo "$ac_t""yes" 1>&6 + ac_tr_func=HAVE_`echo $ac_func | tr 'abcdefghijklmnopqrstuvwxyz' 'ABCDEFGHIJKLMNOPQRSTUVWXYZ'` + cat >> confdefs.h <&6 +fi +done + +fi + + +echo $ac_n "checking for library containing gethostbyname""... 
$ac_c" 1>&6 +echo "configure:6929: checking for library containing gethostbyname" >&5 +if eval "test \"`echo '$''{'ac_cv_search_gethostbyname'+set}'`\" = set"; then + echo $ac_n "(cached) $ac_c" 1>&6 +else + ac_func_search_save_LIBS="$LIBS" +ac_cv_search_gethostbyname="no" +cat > conftest.$ac_ext <&5; (eval $ac_link) 2>&5; } && test -s conftest${ac_exeext}; then + rm -rf conftest* + ac_cv_search_gethostbyname="none required" +else + echo "configure: failed program was:" >&5 + cat conftest.$ac_ext >&5 +fi +rm -f conftest* +test "$ac_cv_search_gethostbyname" = "no" && for i in nsl; do +LIBS="-l$i $ac_func_search_save_LIBS" +cat > conftest.$ac_ext <&5; (eval $ac_link) 2>&5; } && test -s conftest${ac_exeext}; then + rm -rf conftest* + ac_cv_search_gethostbyname="-l$i" +break +else + echo "configure: failed program was:" >&5 + cat conftest.$ac_ext >&5 +fi +rm -f conftest* +done +LIBS="$ac_func_search_save_LIBS" +fi + +echo "$ac_t""$ac_cv_search_gethostbyname" 1>&6 +if test "$ac_cv_search_gethostbyname" != "no"; then + test "$ac_cv_search_gethostbyname" = "none required" || LIBS="$ac_cv_search_gethostbyname $LIBS" + +else : + +fi + +echo $ac_n "checking for library containing socket""... $ac_c" 1>&6 +echo "configure:6991: checking for library containing socket" >&5 +if eval "test \"`echo '$''{'ac_cv_search_socket'+set}'`\" = set"; then + echo $ac_n "(cached) $ac_c" 1>&6 +else + ac_func_search_save_LIBS="$LIBS" +ac_cv_search_socket="no" +cat > conftest.$ac_ext <&5; (eval $ac_link) 2>&5; } && test -s conftest${ac_exeext}; then + rm -rf conftest* + ac_cv_search_socket="none required" +else + echo "configure: failed program was:" >&5 + cat conftest.$ac_ext >&5 +fi +rm -f conftest* +test "$ac_cv_search_socket" = "no" && for i in socket; do +LIBS="-l$i $ac_func_search_save_LIBS" +cat > conftest.$ac_ext <&5; (eval $ac_link) 2>&5; } && test -s conftest${ac_exeext}; then + rm -rf conftest* + ac_cv_search_socket="-l$i" +break +else + echo "configure: failed program was:" >&5 + cat conftest.$ac_ext >&5 +fi +rm -f conftest* +done +LIBS="$ac_func_search_save_LIBS" +fi + +echo "$ac_t""$ac_cv_search_socket" 1>&6 +if test "$ac_cv_search_socket" != "no"; then + test "$ac_cv_search_socket" = "none required" || LIBS="$ac_cv_search_socket $LIBS" + +else : + echo $ac_n "checking for socket in -lnsl""... $ac_c" 1>&6 +echo "configure:7050: checking for socket in -lnsl" >&5 +ac_lib_var=`echo nsl'_'socket | sed 'y%./+-%__p_%'` +if eval "test \"`echo '$''{'ac_cv_lib_$ac_lib_var'+set}'`\" = set"; then + echo $ac_n "(cached) $ac_c" 1>&6 +else + ac_save_LIBS="$LIBS" +LIBS="-lnsl -lsocket $LIBS" +cat > conftest.$ac_ext <&5; (eval $ac_link) 2>&5; } && test -s conftest${ac_exeext}; then + rm -rf conftest* + eval "ac_cv_lib_$ac_lib_var=yes" +else + echo "configure: failed program was:" >&5 + cat conftest.$ac_ext >&5 + rm -rf conftest* + eval "ac_cv_lib_$ac_lib_var=no" +fi +rm -f conftest* +LIBS="$ac_save_LIBS" + +fi +if eval "test \"`echo '$ac_cv_lib_'$ac_lib_var`\" = yes"; then + echo "$ac_t""yes" 1>&6 + LIBS="$LIBS -lsocket -lnsl" +else + echo "$ac_t""no" 1>&6 +fi + +fi + + +echo $ac_n "checking for library containing inet_aton""... 
$ac_c" 1>&6 +echo "configure:7093: checking for library containing inet_aton" >&5 +if eval "test \"`echo '$''{'ac_cv_search_inet_aton'+set}'`\" = set"; then + echo $ac_n "(cached) $ac_c" 1>&6 +else + ac_func_search_save_LIBS="$LIBS" +ac_cv_search_inet_aton="no" +cat > conftest.$ac_ext <&5; (eval $ac_link) 2>&5; } && test -s conftest${ac_exeext}; then + rm -rf conftest* + ac_cv_search_inet_aton="none required" +else + echo "configure: failed program was:" >&5 + cat conftest.$ac_ext >&5 +fi +rm -f conftest* +test "$ac_cv_search_inet_aton" = "no" && for i in resolv; do +LIBS="-l$i $ac_func_search_save_LIBS" +cat > conftest.$ac_ext <&5; (eval $ac_link) 2>&5; } && test -s conftest${ac_exeext}; then + rm -rf conftest* + ac_cv_search_inet_aton="-l$i" +break +else + echo "configure: failed program was:" >&5 + cat conftest.$ac_ext >&5 +fi +rm -f conftest* +done +LIBS="$ac_func_search_save_LIBS" +fi + +echo "$ac_t""$ac_cv_search_inet_aton" 1>&6 +if test "$ac_cv_search_inet_aton" != "no"; then + test "$ac_cv_search_inet_aton" = "none required" || LIBS="$ac_cv_search_inet_aton $LIBS" + +else : + +fi + +inn_save_LIBS=$LIBS +LIBS=${CRYPT_LIB} + +echo $ac_n "checking for library containing crypt""... $ac_c" 1>&6 +echo "configure:7158: checking for library containing crypt" >&5 +if eval "test \"`echo '$''{'ac_cv_search_crypt'+set}'`\" = set"; then + echo $ac_n "(cached) $ac_c" 1>&6 +else + ac_func_search_save_LIBS="$LIBS" +ac_cv_search_crypt="no" +cat > conftest.$ac_ext <&5; (eval $ac_link) 2>&5; } && test -s conftest${ac_exeext}; then + rm -rf conftest* + ac_cv_search_crypt="none required" +else + echo "configure: failed program was:" >&5 + cat conftest.$ac_ext >&5 +fi +rm -f conftest* +test "$ac_cv_search_crypt" = "no" && for i in crypt; do +LIBS="-l$i $ac_func_search_save_LIBS" +cat > conftest.$ac_ext <&5; (eval $ac_link) 2>&5; } && test -s conftest${ac_exeext}; then + rm -rf conftest* + ac_cv_search_crypt="-l$i" +break +else + echo "configure: failed program was:" >&5 + cat conftest.$ac_ext >&5 +fi +rm -f conftest* +done +LIBS="$ac_func_search_save_LIBS" +fi + +echo "$ac_t""$ac_cv_search_crypt" 1>&6 +if test "$ac_cv_search_crypt" != "no"; then + test "$ac_cv_search_crypt" = "none required" || LIBS="$ac_cv_search_crypt $LIBS" + CRYPT_LIB=$LIBS + +else : + +fi +LIBS=$inn_save_LIBS + +inn_save_LIBS=$LIBS +LIBS=${SHADOW_LIB} + +echo $ac_n "checking for library containing getspnam""... 
$ac_c" 1>&6 +echo "configure:7225: checking for library containing getspnam" >&5 +if eval "test \"`echo '$''{'ac_cv_search_getspnam'+set}'`\" = set"; then + echo $ac_n "(cached) $ac_c" 1>&6 +else + ac_func_search_save_LIBS="$LIBS" +ac_cv_search_getspnam="no" +cat > conftest.$ac_ext <&5; (eval $ac_link) 2>&5; } && test -s conftest${ac_exeext}; then + rm -rf conftest* + ac_cv_search_getspnam="none required" +else + echo "configure: failed program was:" >&5 + cat conftest.$ac_ext >&5 +fi +rm -f conftest* +test "$ac_cv_search_getspnam" = "no" && for i in shadow; do +LIBS="-l$i $ac_func_search_save_LIBS" +cat > conftest.$ac_ext <&5; (eval $ac_link) 2>&5; } && test -s conftest${ac_exeext}; then + rm -rf conftest* + ac_cv_search_getspnam="-l$i" +break +else + echo "configure: failed program was:" >&5 + cat conftest.$ac_ext >&5 +fi +rm -f conftest* +done +LIBS="$ac_func_search_save_LIBS" +fi + +echo "$ac_t""$ac_cv_search_getspnam" 1>&6 +if test "$ac_cv_search_getspnam" != "no"; then + test "$ac_cv_search_getspnam" = "none required" || LIBS="$ac_cv_search_getspnam $LIBS" + SHADOW_LIB=$LIBS + +else : + +fi +LIBS=$inn_save_LIBS + + +inn_check_pam=1 +for ac_hdr in pam/pam_appl.h +do +ac_safe=`echo "$ac_hdr" | sed 'y%./+-%__p_%'` +echo $ac_n "checking for $ac_hdr""... $ac_c" 1>&6 +echo "configure:7294: checking for $ac_hdr" >&5 +if eval "test \"`echo '$''{'ac_cv_header_$ac_safe'+set}'`\" = set"; then + echo $ac_n "(cached) $ac_c" 1>&6 +else + cat > conftest.$ac_ext < +EOF +ac_try="$ac_cpp conftest.$ac_ext >/dev/null 2>conftest.out" +{ (eval echo configure:7304: \"$ac_try\") 1>&5; (eval $ac_try) 2>&5; } +ac_err=`grep -v '^ *+' conftest.out | grep -v "^conftest.${ac_ext}\$"` +if test -z "$ac_err"; then + rm -rf conftest* + eval "ac_cv_header_$ac_safe=yes" +else + echo "$ac_err" >&5 + echo "configure: failed program was:" >&5 + cat conftest.$ac_ext >&5 + rm -rf conftest* + eval "ac_cv_header_$ac_safe=no" +fi +rm -f conftest* +fi +if eval "test \"`echo '$ac_cv_header_'$ac_safe`\" = yes"; then + echo "$ac_t""yes" 1>&6 + ac_tr_hdr=HAVE_`echo $ac_hdr | sed 'y%abcdefghijklmnopqrstuvwxyz./-%ABCDEFGHIJKLMNOPQRSTUVWXYZ___%'` + cat >> confdefs.h <&6 +ac_safe=`echo "security/pam_appl.h" | sed 'y%./+-%__p_%'` +echo $ac_n "checking for security/pam_appl.h""... $ac_c" 1>&6 +echo "configure:7329: checking for security/pam_appl.h" >&5 +if eval "test \"`echo '$''{'ac_cv_header_$ac_safe'+set}'`\" = set"; then + echo $ac_n "(cached) $ac_c" 1>&6 +else + cat > conftest.$ac_ext < +EOF +ac_try="$ac_cpp conftest.$ac_ext >/dev/null 2>conftest.out" +{ (eval echo configure:7339: \"$ac_try\") 1>&5; (eval $ac_try) 2>&5; } +ac_err=`grep -v '^ *+' conftest.out | grep -v "^conftest.${ac_ext}\$"` +if test -z "$ac_err"; then + rm -rf conftest* + eval "ac_cv_header_$ac_safe=yes" +else + echo "$ac_err" >&5 + echo "configure: failed program was:" >&5 + cat conftest.$ac_ext >&5 + rm -rf conftest* + eval "ac_cv_header_$ac_safe=no" +fi +rm -f conftest* +fi +if eval "test \"`echo '$ac_cv_header_'$ac_safe`\" = yes"; then + echo "$ac_t""yes" 1>&6 + : +else + echo "$ac_t""no" 1>&6 +inn_check_pam=0 +fi + +fi +done + +if test x"$inn_check_pam" = x1; then + inn_save_LIBS=$LIBS +LIBS=${PAM_LIB} + +echo $ac_n "checking for library containing pam_start""... 
$ac_c" 1>&6 +echo "configure:7369: checking for library containing pam_start" >&5 +if eval "test \"`echo '$''{'ac_cv_search_pam_start'+set}'`\" = set"; then + echo $ac_n "(cached) $ac_c" 1>&6 +else + ac_func_search_save_LIBS="$LIBS" +ac_cv_search_pam_start="no" +cat > conftest.$ac_ext <&5; (eval $ac_link) 2>&5; } && test -s conftest${ac_exeext}; then + rm -rf conftest* + ac_cv_search_pam_start="none required" +else + echo "configure: failed program was:" >&5 + cat conftest.$ac_ext >&5 +fi +rm -f conftest* +test "$ac_cv_search_pam_start" = "no" && for i in pam; do +LIBS="-l$i $ac_func_search_save_LIBS" +cat > conftest.$ac_ext <&5; (eval $ac_link) 2>&5; } && test -s conftest${ac_exeext}; then + rm -rf conftest* + ac_cv_search_pam_start="-l$i" +break +else + echo "configure: failed program was:" >&5 + cat conftest.$ac_ext >&5 +fi +rm -f conftest* +done +LIBS="$ac_func_search_save_LIBS" +fi + +echo "$ac_t""$ac_cv_search_pam_start" 1>&6 +if test "$ac_cv_search_pam_start" != "no"; then + test "$ac_cv_search_pam_start" = "none required" || LIBS="$ac_cv_search_pam_start $LIBS" + PAM_LIB=$LIBS + cat >> confdefs.h <<\EOF +#define HAVE_PAM 1 +EOF + +else : + +fi +LIBS=$inn_save_LIBS + +fi + +if test x"$inn_enable_keywords" = x1 ; then + inn_save_LIBS=$LIBS +LIBS=${REGEX_LIB} + +echo $ac_n "checking for library containing regexec""... $ac_c" 1>&6 +echo "configure:7442: checking for library containing regexec" >&5 +if eval "test \"`echo '$''{'ac_cv_search_regexec'+set}'`\" = set"; then + echo $ac_n "(cached) $ac_c" 1>&6 +else + ac_func_search_save_LIBS="$LIBS" +ac_cv_search_regexec="no" +cat > conftest.$ac_ext <&5; (eval $ac_link) 2>&5; } && test -s conftest${ac_exeext}; then + rm -rf conftest* + ac_cv_search_regexec="none required" +else + echo "configure: failed program was:" >&5 + cat conftest.$ac_ext >&5 +fi +rm -f conftest* +test "$ac_cv_search_regexec" = "no" && for i in regex; do +LIBS="-l$i $ac_func_search_save_LIBS" +cat > conftest.$ac_ext <&5; (eval $ac_link) 2>&5; } && test -s conftest${ac_exeext}; then + rm -rf conftest* + ac_cv_search_regexec="-l$i" +break +else + echo "configure: failed program was:" >&5 + cat conftest.$ac_ext >&5 +fi +rm -f conftest* +done +LIBS="$ac_func_search_save_LIBS" +fi + +echo "$ac_t""$ac_cv_search_regexec" 1>&6 +if test "$ac_cv_search_regexec" != "no"; then + test "$ac_cv_search_regexec" = "none required" || LIBS="$ac_cv_search_regexec $LIBS" + REGEX_LIB=$LIBS + +else : + { echo "configure: error: no usable regular expression library found" 1>&2; exit 1; } +fi +LIBS=$inn_save_LIBS + +fi + + +# Check whether --with-berkeleydb or --without-berkeleydb was given. +if test "${with_berkeleydb+set}" = set; then + withval="$with_berkeleydb" + BERKELEY_DB_DIR=$with_berkeleydb +else + BERKELEY_DB_DIR=no +fi + +echo $ac_n "checking if BerkeleyDB is desired""... $ac_c" 1>&6 +echo "configure:7517: checking if BerkeleyDB is desired" >&5 +if test x"$BERKELEY_DB_DIR" = xno ; then + echo "$ac_t""no" 1>&6 + BERKELEY_DB_LDFLAGS= + BERKELEY_DB_CFLAGS= + BERKELEY_DB_LIB= +else + echo "$ac_t""yes" 1>&6 + echo $ac_n "checking for BerkeleyDB location""... 
$ac_c" 1>&6 +echo "configure:7526: checking for BerkeleyDB location" >&5 + if test x"$BERKELEY_DB_DIR" = xyes ; then + for v in BerkeleyDB BerkeleyDB.3.0 BerkeleyDB.3.1 BerkeleyDB.3.2 \ + BerkeleyDB.3.3 BerkeleyDB.4.0 BerkeleyDB.4.1 BerkeleyDB.4.2 \ + BerkeleyDB.4.3 BerkeleyDB.4.4 BerkeleyDB.4.5 BerkeleyDB.4.6; do + for d in $prefix /usr/local /opt /usr ; do + if test -d "$d/$v" ; then + BERKELEY_DB_DIR="$d/$v" + break + fi + done + done + fi + if test x"$BERKELEY_DB_DIR" = xyes ; then + for v in db46 db45 db44 db43 db42 db41 db4 db3 db2 ; do + if test -d "/usr/local/include/$v" ; then + BERKELEY_DB_LDFLAGS="-L/usr/local/lib" + BERKELEY_DB_CFLAGS="-I/usr/local/include/$v" + BERKELEY_DB_LIB="-l$v" + echo "$ac_t""FreeBSD locations" 1>&6 + break + fi + done + if test x"$BERKELEY_DB_LIB" = x ; then + for v in db44 db43 db42 db41 db4 db3 db2 ; do + if test -d "/usr/include/$v" ; then + BERKELEY_DB_CFLAGS="-I/usr/include/$v" + BERKELEY_DB_LIB="-l$v" + echo "$ac_t""Linux locations" 1>&6 + break + fi + done + if test x"$BERKELEY_DB_LIB" = x ; then + BERKELEY_DB_LIB=-ldb + echo "$ac_t""trying -ldb" 1>&6 + fi + fi + else + BERKELEY_DB_LDFLAGS="-L$BERKELEY_DB_DIR/lib" + BERKELEY_DB_CFLAGS="-I$BERKELEY_DB_DIR/include" + BERKELEY_DB_LIB="-ldb" + echo "$ac_t""$BERKELEY_DB_DIR" 1>&6 + fi + cat >> confdefs.h <<\EOF +#define USE_BERKELEY_DB 1 +EOF + +fi + + + + +if test x"$BERKELEY_DB_LIB" != x ; then + DBM_INC="$BERKELEY_DB_CFLAGS" + DBM_LIB="$BERKELEY_DB_LDFLAGS $BERKELEY_DB_LIB" + + cat >> confdefs.h <<\EOF +#define HAVE_BDB_DBM 1 +EOF + +else + inn_save_LIBS=$LIBS +LIBS=${DBM_LIB} + +echo $ac_n "checking for library containing dbm_open""... $ac_c" 1>&6 +echo "configure:7591: checking for library containing dbm_open" >&5 +if eval "test \"`echo '$''{'ac_cv_search_dbm_open'+set}'`\" = set"; then + echo $ac_n "(cached) $ac_c" 1>&6 +else + ac_func_search_save_LIBS="$LIBS" +ac_cv_search_dbm_open="no" +cat > conftest.$ac_ext <&5; (eval $ac_link) 2>&5; } && test -s conftest${ac_exeext}; then + rm -rf conftest* + ac_cv_search_dbm_open="none required" +else + echo "configure: failed program was:" >&5 + cat conftest.$ac_ext >&5 +fi +rm -f conftest* +test "$ac_cv_search_dbm_open" = "no" && for i in ndbm dbm; do +LIBS="-l$i $ac_func_search_save_LIBS" +cat > conftest.$ac_ext <&5; (eval $ac_link) 2>&5; } && test -s conftest${ac_exeext}; then + rm -rf conftest* + ac_cv_search_dbm_open="-l$i" +break +else + echo "configure: failed program was:" >&5 + cat conftest.$ac_ext >&5 +fi +rm -f conftest* +done +LIBS="$ac_func_search_save_LIBS" +fi + +echo "$ac_t""$ac_cv_search_dbm_open" 1>&6 +if test "$ac_cv_search_dbm_open" != "no"; then + test "$ac_cv_search_dbm_open" = "none required" || LIBS="$ac_cv_search_dbm_open $LIBS" + DBM_LIB=$LIBS + cat >> confdefs.h <<\EOF +#define HAVE_DBM 1 +EOF + +else : + +fi +LIBS=$inn_save_LIBS + + DBM_INC= +fi + + + +# Check whether --with-openssl or --without-openssl was given. +if test "${with_openssl+set}" = set; then + withval="$with_openssl" + OPENSSL_DIR=$with_openssl +else + OPENSSL_DIR=no +fi + +echo $ac_n "checking if OpenSSL is desired""... $ac_c" 1>&6 +echo "configure:7671: checking if OpenSSL is desired" >&5 +if test x"$OPENSSL_DIR" = xno ; then + echo "$ac_t""no" 1>&6 + SSL_BIN= + SSL_INC= + SSL_LIB= +else + echo "$ac_t""yes" 1>&6 + echo $ac_n "checking for OpenSSL location""... 
$ac_c" 1>&6 +echo "configure:7680: checking for OpenSSL location" >&5 + if test x"$OPENSSL_DIR" = xyes ; then + for dir in $prefix /usr/local/ssl /usr/lib/ssl /usr/ssl /usr/pkg \ + /usr/local /usr ; do + if test -f "$dir/include/openssl/ssl.h" ; then + OPENSSL_DIR=$dir + break + fi + done + fi + if test x"$OPENSSL_DIR" = xyes ; then + { echo "configure: error: Can not find OpenSSL" 1>&2; exit 1; } + else + echo "$ac_t""$OPENSSL_DIR" 1>&6 + SSL_BIN="${OPENSSL_DIR}/bin" + SSL_INC="-I${OPENSSL_DIR}/include" + + # This is mildly tricky. In order to satisfy most linkers, libraries + # have to be listed in the right order, which means that libraries + # with dependencies on other libraries need to be listed first. But + # the -L flag for the OpenSSL library directory needs to go first of + # all. So put the -L flag into LIBS and accumulate actual libraries + # into SSL_LIB, and then at the end, restore LIBS and move -L to the + # beginning of SSL_LIB. + inn_save_LIBS=$LIBS + LIBS="$LIBS -L${OPENSSL_DIR}/lib" + SSL_LIB='' + echo $ac_n "checking for RSAPublicEncrypt in -lrsaref""... $ac_c" 1>&6 +echo "configure:7708: checking for RSAPublicEncrypt in -lrsaref" >&5 +ac_lib_var=`echo rsaref'_'RSAPublicEncrypt | sed 'y%./+-%__p_%'` +if eval "test \"`echo '$''{'ac_cv_lib_$ac_lib_var'+set}'`\" = set"; then + echo $ac_n "(cached) $ac_c" 1>&6 +else + ac_save_LIBS="$LIBS" +LIBS="-lrsaref $LIBS" +cat > conftest.$ac_ext <&5; (eval $ac_link) 2>&5; } && test -s conftest${ac_exeext}; then + rm -rf conftest* + eval "ac_cv_lib_$ac_lib_var=yes" +else + echo "configure: failed program was:" >&5 + cat conftest.$ac_ext >&5 + rm -rf conftest* + eval "ac_cv_lib_$ac_lib_var=no" +fi +rm -f conftest* +LIBS="$ac_save_LIBS" + +fi +if eval "test \"`echo '$ac_cv_lib_'$ac_lib_var`\" = yes"; then + echo "$ac_t""yes" 1>&6 + echo $ac_n "checking for RSAPublicEncrypt in -lRSAglue""... $ac_c" 1>&6 +echo "configure:7743: checking for RSAPublicEncrypt in -lRSAglue" >&5 +ac_lib_var=`echo RSAglue'_'RSAPublicEncrypt | sed 'y%./+-%__p_%'` +if eval "test \"`echo '$''{'ac_cv_lib_$ac_lib_var'+set}'`\" = set"; then + echo $ac_n "(cached) $ac_c" 1>&6 +else + ac_save_LIBS="$LIBS" +LIBS="-lRSAglue -lrsaref $LIBS" +cat > conftest.$ac_ext <&5; (eval $ac_link) 2>&5; } && test -s conftest${ac_exeext}; then + rm -rf conftest* + eval "ac_cv_lib_$ac_lib_var=yes" +else + echo "configure: failed program was:" >&5 + cat conftest.$ac_ext >&5 + rm -rf conftest* + eval "ac_cv_lib_$ac_lib_var=no" +fi +rm -f conftest* +LIBS="$ac_save_LIBS" + +fi +if eval "test \"`echo '$ac_cv_lib_'$ac_lib_var`\" = yes"; then + echo "$ac_t""yes" 1>&6 + SSL_LIB="-lRSAglue -lrsaref" +else + echo "$ac_t""no" 1>&6 +fi + +else + echo "$ac_t""no" 1>&6 +fi + + echo $ac_n "checking for BIO_new in -lcrypto""... $ac_c" 1>&6 +echo "configure:7787: checking for BIO_new in -lcrypto" >&5 +ac_lib_var=`echo crypto'_'BIO_new | sed 'y%./+-%__p_%'` +if eval "test \"`echo '$''{'ac_cv_lib_$ac_lib_var'+set}'`\" = set"; then + echo $ac_n "(cached) $ac_c" 1>&6 +else + ac_save_LIBS="$LIBS" +LIBS="-lcrypto $SSL_LIB $LIBS" +cat > conftest.$ac_ext <&5; (eval $ac_link) 2>&5; } && test -s conftest${ac_exeext}; then + rm -rf conftest* + eval "ac_cv_lib_$ac_lib_var=yes" +else + echo "configure: failed program was:" >&5 + cat conftest.$ac_ext >&5 + rm -rf conftest* + eval "ac_cv_lib_$ac_lib_var=no" +fi +rm -f conftest* +LIBS="$ac_save_LIBS" + +fi +if eval "test \"`echo '$ac_cv_lib_'$ac_lib_var`\" = yes"; then + echo "$ac_t""yes" 1>&6 + echo $ac_n "checking for DSO_load in -ldl""... 
$ac_c" 1>&6 +echo "configure:7822: checking for DSO_load in -ldl" >&5 +ac_lib_var=`echo dl'_'DSO_load | sed 'y%./+-%__p_%'` +if eval "test \"`echo '$''{'ac_cv_lib_$ac_lib_var'+set}'`\" = set"; then + echo $ac_n "(cached) $ac_c" 1>&6 +else + ac_save_LIBS="$LIBS" +LIBS="-ldl -lcrypto -ldl $SSL_LIB $LIBS" +cat > conftest.$ac_ext <&5; (eval $ac_link) 2>&5; } && test -s conftest${ac_exeext}; then + rm -rf conftest* + eval "ac_cv_lib_$ac_lib_var=yes" +else + echo "configure: failed program was:" >&5 + cat conftest.$ac_ext >&5 + rm -rf conftest* + eval "ac_cv_lib_$ac_lib_var=no" +fi +rm -f conftest* +LIBS="$ac_save_LIBS" + +fi +if eval "test \"`echo '$ac_cv_lib_'$ac_lib_var`\" = yes"; then + echo "$ac_t""yes" 1>&6 + SSL_LIB="-lcrypto -ldl $SSL_LIB" +else + echo "$ac_t""no" 1>&6 +SSL_LIB="-lcrypto $SSL_LIB" +fi + +else + echo "$ac_t""no" 1>&6 +{ echo "configure: error: Can not find OpenSSL" 1>&2; exit 1; } +fi + + echo $ac_n "checking for SSL_library_init in -lssl""... $ac_c" 1>&6 +echo "configure:7868: checking for SSL_library_init in -lssl" >&5 +ac_lib_var=`echo ssl'_'SSL_library_init | sed 'y%./+-%__p_%'` +if eval "test \"`echo '$''{'ac_cv_lib_$ac_lib_var'+set}'`\" = set"; then + echo $ac_n "(cached) $ac_c" 1>&6 +else + ac_save_LIBS="$LIBS" +LIBS="-lssl $SSL_LIB $LIBS" +cat > conftest.$ac_ext <&5; (eval $ac_link) 2>&5; } && test -s conftest${ac_exeext}; then + rm -rf conftest* + eval "ac_cv_lib_$ac_lib_var=yes" +else + echo "configure: failed program was:" >&5 + cat conftest.$ac_ext >&5 + rm -rf conftest* + eval "ac_cv_lib_$ac_lib_var=no" +fi +rm -f conftest* +LIBS="$ac_save_LIBS" + +fi +if eval "test \"`echo '$ac_cv_lib_'$ac_lib_var`\" = yes"; then + echo "$ac_t""yes" 1>&6 + SSL_LIB="-lssl $SSL_LIB" +else + echo "$ac_t""no" 1>&6 +{ echo "configure: error: Can not find OpenSSL" 1>&2; exit 1; } +fi + + SSL_LIB="-L${OPENSSL_DIR}/lib $SSL_LIB" + LIBS=$inn_save_LIBS + cat >> confdefs.h <<\EOF +#define HAVE_SSL 1 +EOF + + fi +fi + + + + + +# Check whether --with-sasl or --without-sasl was given. +if test "${with_sasl+set}" = set; then + withval="$with_sasl" + SASL_DIR=$with_sasl +else + SASL_DIR=no +fi + +echo $ac_n "checking if SASL is desired""... $ac_c" 1>&6 +echo "configure:7930: checking if SASL is desired" >&5 +if test x"$SASL_DIR" = xno ; then + echo "$ac_t""no" 1>&6 + SASL_INC= + SASL_LIB= +else + echo "$ac_t""yes" 1>&6 + echo $ac_n "checking for SASL location""... $ac_c" 1>&6 +echo "configure:7938: checking for SASL location" >&5 + if test x"$SASL_DIR" = xyes ; then + for dir in $prefix /usr/local/sasl /usr/sasl /usr/pkg /usr/local ; do + if test -f "$dir/include/sasl/sasl.h" ; then + SASL_DIR=$dir + break + fi + done + fi + if test x"$SASL_DIR" = xyes ; then + if test -f "/usr/include/sasl/sasl.h" ; then + SASL_INC=-I/usr/include/sasl + SASL_DIR=/usr + echo "$ac_t""$SASL_DIR" 1>&6 + inn_save_LIBS=$LIBS + echo $ac_n "checking for sasl_getprop in -lsasl2""... 
$ac_c" 1>&6 +echo "configure:7954: checking for sasl_getprop in -lsasl2" >&5 +ac_lib_var=`echo sasl2'_'sasl_getprop | sed 'y%./+-%__p_%'` +if eval "test \"`echo '$''{'ac_cv_lib_$ac_lib_var'+set}'`\" = set"; then + echo $ac_n "(cached) $ac_c" 1>&6 +else + ac_save_LIBS="$LIBS" +LIBS="-lsasl2 $LIBS" +cat > conftest.$ac_ext <&5; (eval $ac_link) 2>&5; } && test -s conftest${ac_exeext}; then + rm -rf conftest* + eval "ac_cv_lib_$ac_lib_var=yes" +else + echo "configure: failed program was:" >&5 + cat conftest.$ac_ext >&5 + rm -rf conftest* + eval "ac_cv_lib_$ac_lib_var=no" +fi +rm -f conftest* +LIBS="$ac_save_LIBS" + +fi +if eval "test \"`echo '$ac_cv_lib_'$ac_lib_var`\" = yes"; then + echo "$ac_t""yes" 1>&6 + SASL_LIB=-lsasl2 +else + echo "$ac_t""no" 1>&6 +{ echo "configure: error: Can not find SASL" 1>&2; exit 1; } +fi + + LIBS=$inn_save_LIBS + cat >> confdefs.h <<\EOF +#define HAVE_SASL 1 +EOF + + else + { echo "configure: error: Can not find SASL" 1>&2; exit 1; } + fi + else + echo "$ac_t""$SASL_DIR" 1>&6 + SASL_INC="-I${SASL_DIR}/include" + + inn_save_LIBS=$LIBS + LIBS="$LIBS -L${SASL_DIR}/lib" + echo $ac_n "checking for sasl_getprop in -lsasl2""... $ac_c" 1>&6 +echo "configure:8009: checking for sasl_getprop in -lsasl2" >&5 +ac_lib_var=`echo sasl2'_'sasl_getprop | sed 'y%./+-%__p_%'` +if eval "test \"`echo '$''{'ac_cv_lib_$ac_lib_var'+set}'`\" = set"; then + echo $ac_n "(cached) $ac_c" 1>&6 +else + ac_save_LIBS="$LIBS" +LIBS="-lsasl2 $LIBS" +cat > conftest.$ac_ext <&5; (eval $ac_link) 2>&5; } && test -s conftest${ac_exeext}; then + rm -rf conftest* + eval "ac_cv_lib_$ac_lib_var=yes" +else + echo "configure: failed program was:" >&5 + cat conftest.$ac_ext >&5 + rm -rf conftest* + eval "ac_cv_lib_$ac_lib_var=no" +fi +rm -f conftest* +LIBS="$ac_save_LIBS" + +fi +if eval "test \"`echo '$ac_cv_lib_'$ac_lib_var`\" = yes"; then + echo "$ac_t""yes" 1>&6 + SASL_LIB="-L${SASL_DIR}/lib -lsasl2" +else + echo "$ac_t""no" 1>&6 +{ echo "configure: error: Can not find SASL" 1>&2; exit 1; } +fi + + LIBS=$inn_save_LIBS + cat >> confdefs.h <<\EOF +#define HAVE_SASL 1 +EOF + + fi +fi + + +if test x"${KRB5_INC}" != x; then +inn_save_LIBS=$LIBS +LIBS=${KRB5_LIB} + +echo $ac_n "checking for library containing krb5_parse_name""... 
$ac_c" 1>&6 +echo "configure:8063: checking for library containing krb5_parse_name" >&5 +if eval "test \"`echo '$''{'ac_cv_search_krb5_parse_name'+set}'`\" = set"; then + echo $ac_n "(cached) $ac_c" 1>&6 +else + ac_func_search_save_LIBS="$LIBS" +ac_cv_search_krb5_parse_name="no" +cat > conftest.$ac_ext <&5; (eval $ac_link) 2>&5; } && test -s conftest${ac_exeext}; then + rm -rf conftest* + ac_cv_search_krb5_parse_name="none required" +else + echo "configure: failed program was:" >&5 + cat conftest.$ac_ext >&5 +fi +rm -f conftest* +test "$ac_cv_search_krb5_parse_name" = "no" && for i in krb5; do +LIBS="-l$i $LIBS -lk5crypto -lcom_err $ac_func_search_save_LIBS" +cat > conftest.$ac_ext <&5; (eval $ac_link) 2>&5; } && test -s conftest${ac_exeext}; then + rm -rf conftest* + ac_cv_search_krb5_parse_name="-l$i" +break +else + echo "configure: failed program was:" >&5 + cat conftest.$ac_ext >&5 +fi +rm -f conftest* +done +LIBS="$ac_func_search_save_LIBS" +fi + +echo "$ac_t""$ac_cv_search_krb5_parse_name" 1>&6 +if test "$ac_cv_search_krb5_parse_name" != "no"; then + test "$ac_cv_search_krb5_parse_name" = "none required" || LIBS="$ac_cv_search_krb5_parse_name $LIBS" + KRB5_LIB=$LIBS + KRB5_AUTH="auth_krb5" + KRB5_LIB="$KRB5_LDFLAGS $KRB5_LIB -lk5crypto -lcom_err" + + + for ac_hdr in et/com_err.h +do +ac_safe=`echo "$ac_hdr" | sed 'y%./+-%__p_%'` +echo $ac_n "checking for $ac_hdr""... $ac_c" 1>&6 +echo "configure:8128: checking for $ac_hdr" >&5 +if eval "test \"`echo '$''{'ac_cv_header_$ac_safe'+set}'`\" = set"; then + echo $ac_n "(cached) $ac_c" 1>&6 +else + cat > conftest.$ac_ext < +EOF +ac_try="$ac_cpp conftest.$ac_ext >/dev/null 2>conftest.out" +{ (eval echo configure:8138: \"$ac_try\") 1>&5; (eval $ac_try) 2>&5; } +ac_err=`grep -v '^ *+' conftest.out | grep -v "^conftest.${ac_ext}\$"` +if test -z "$ac_err"; then + rm -rf conftest* + eval "ac_cv_header_$ac_safe=yes" +else + echo "$ac_err" >&5 + echo "configure: failed program was:" >&5 + cat conftest.$ac_ext >&5 + rm -rf conftest* + eval "ac_cv_header_$ac_safe=no" +fi +rm -f conftest* +fi +if eval "test \"`echo '$ac_cv_header_'$ac_safe`\" = yes"; then + echo "$ac_t""yes" 1>&6 + ac_tr_hdr=HAVE_`echo $ac_hdr | sed 'y%abcdefghijklmnopqrstuvwxyz./-%ABCDEFGHIJKLMNOPQRSTUVWXYZ___%'` + cat >> confdefs.h <&6 +fi +done + +else : + +fi +LIBS=$inn_save_LIBS + + +inn_save_LIBS=$LIBS +LIBS=$KRB5_LIB +for ac_func in krb5_init_ets +do +echo $ac_n "checking for $ac_func""... $ac_c" 1>&6 +echo "configure:8175: checking for $ac_func" >&5 +if eval "test \"`echo '$''{'ac_cv_func_$ac_func'+set}'`\" = set"; then + echo $ac_n "(cached) $ac_c" 1>&6 +else + cat > conftest.$ac_ext < +/* Override any gcc2 internal prototype to avoid an error. */ +/* We use char because int might match the return type of a gcc2 + builtin and then its argument prototype would still apply. */ +char $ac_func(); + +int main() { + +/* The GNU C library defines this for functions which it implements + to always fail with ENOSYS. Some functions are actually named + something starting with __ and the normal name is an alias. 
*/ +#if defined (__stub_$ac_func) || defined (__stub___$ac_func) +choke me +#else +$ac_func(); +#endif + +; return 0; } +EOF +if { (eval echo configure:8203: \"$ac_link\") 1>&5; (eval $ac_link) 2>&5; } && test -s conftest${ac_exeext}; then + rm -rf conftest* + eval "ac_cv_func_$ac_func=yes" +else + echo "configure: failed program was:" >&5 + cat conftest.$ac_ext >&5 + rm -rf conftest* + eval "ac_cv_func_$ac_func=no" +fi +rm -f conftest* +fi + +if eval "test \"`echo '$ac_cv_func_'$ac_func`\" = yes"; then + echo "$ac_t""yes" 1>&6 + ac_tr_func=HAVE_`echo $ac_func | tr 'abcdefghijklmnopqrstuvwxyz' 'ABCDEFGHIJKLMNOPQRSTUVWXYZ'` + cat >> confdefs.h <&6 +fi +done +fi # test x"${KRB5_INC}" != x; + +LIBS=$inn_save_LIBS + +if test x"$DO_PERL" = xDO ; then + echo $ac_n "checking for Perl linkage""... $ac_c" 1>&6 +echo "configure:8231: checking for Perl linkage" >&5 + inn_perl_core_path=`$_PATH_PERL -MConfig -e 'print $Config{archlibexp}'` + inn_perl_core_flags=`$_PATH_PERL -MExtUtils::Embed -e ccopts` + inn_perl_core_libs=`$_PATH_PERL -MExtUtils::Embed -e ldopts 2>&1 | tail -1` + inn_perl_core_libs=" $inn_perl_core_libs " + inn_perl_core_libs=`echo "$inn_perl_core_libs" | sed 's/ -lc / /'` + for i in $LIBS ; do + inn_perl_core_libs=`echo "$inn_perl_core_libs" | sed "s/ $i / /"` + done + case $host in + *-linux*) + inn_perl_core_libs=`echo "$inn_perl_core_libs" | sed 's/ -lgdbm / /'` + ;; + *-cygwin*) + inn_perl_libname=`$_PATH_PERL -MConfig -e 'print $Config{libperl}'` + inn_perl_libname=`echo "$inn_perl_libname" | sed 's/^lib//; s/\.a$//'` + inn_perl_core_libs="${inn_perl_core_libs}-l$inn_perl_libname" + ;; + esac + inn_perl_core_libs=`echo "$inn_perl_core_libs" | sed 's/^ *//'` + inn_perl_core_libs=`echo "$inn_perl_core_libs" | sed 's/ *$//'` + inn_perl_core_flags=" $inn_perl_core_flags " + if test x"$inn_enable_largefiles" != xyes ; then + for f in -D_LARGEFILE_SOURCE -D_FILE_OFFSET_BITS=64 -D_LARGE_FILES ; do + inn_perl_core_flags=`echo "$inn_perl_core_flags" | sed "s/ $f / /"` + done + fi + inn_perl_core_flags=`echo "$inn_perl_core_flags" | sed 's/^ *//'` + inn_perl_core_flags=`echo "$inn_perl_core_flags" | sed 's/ *$//'` + PERL_INC="$inn_perl_core_flags" + PERL_LIB="$inn_perl_core_libs" + echo "$ac_t""$inn_perl_core_path" 1>&6 +else + PERL_INC='' + PERL_LIB='' +fi + + + +if test x"$DO_PYTHON" = xdefine ; then + echo $ac_n "checking for Python linkage""... $ac_c" 1>&6 +echo "configure:8272: checking for Python linkage" >&5 + py_prefix=`$_PATH_PYTHON -c 'import sys; print sys.prefix'` + py_ver=`$_PATH_PYTHON -c 'import sys; print sys.version[:3]'` + py_libdir="${py_prefix}/lib/python${py_ver}" + PYTHON_INC="-I${py_prefix}/include/python${py_ver}" + py_linkage="" + for py_linkpart in LIBS LIBC LIBM LOCALMODLIBS BASEMODLIBS \ + LINKFORSHARED LDFLAGS ; do + py_linkage="$py_linkage "`grep "^${py_linkpart}=" \ + $py_libdir/config/Makefile \ + | sed -e 's/^.*=//'` + done + PYTHON_LIB="-L$py_libdir/config -lpython$py_ver $py_linkage" + PYTHON_LIB=`echo $PYTHON_LIB | sed -e 's/ \\t*/ /g'` + echo "$ac_t""$py_libdir" 1>&6 +else + PYTHON_LIB="" + PYTHON_INC="" +fi + + + +if test x"$inn_enable_largefiles" = xyes ; then + echo $ac_n "checking for largefile linkage""... 
$ac_c" 1>&6 +echo "configure:8296: checking for largefile linkage" >&5 + case "$host" in + *-aix4.01*) + echo "$ac_t""no" 1>&6 + { echo "configure: error: AIX before 4.2 does not support large files" 1>&2; exit 1; } + ;; + *-aix4*) + echo "$ac_t""ok" 1>&6 + LFS_CFLAGS="-D_LARGE_FILES" + LFS_LDFLAGS="" + LFS_LIBS="" + ;; + *-hpux*) + echo "$ac_t""ok" 1>&6 + LFS_CFLAGS="-D_LARGEFILE_SOURCE -D_FILE_OFFSET_BITS=64" + LFS_LDFLAGS="" + LFS_LIBS="" + ;; + *-irix*) + echo "$ac_t""no" 1>&6 + { echo "configure: error: Large files not supported on this platform" 1>&2; exit 1; } + ;; + *-linux*) + echo "$ac_t""maybe" 1>&6 + LFS_CFLAGS="-D_LARGEFILE_SOURCE -D_FILE_OFFSET_BITS=64" + LFS_LDFLAGS="" + LFS_LIBS="" + cat >> confdefs.h <<\EOF +#define _GNU_SOURCE 1 +EOF + + ;; + *-solaris*) + echo "$ac_t""ok" 1>&6 + # Extract the first word of "getconf", so it can be a program name with args. +set dummy getconf; ac_word=$2 +echo $ac_n "checking for $ac_word""... $ac_c" 1>&6 +echo "configure:8333: checking for $ac_word" >&5 +if eval "test \"`echo '$''{'ac_cv_path_GETCONF'+set}'`\" = set"; then + echo $ac_n "(cached) $ac_c" 1>&6 +else + case "$GETCONF" in + /*) + ac_cv_path_GETCONF="$GETCONF" # Let the user override the test with a path. + ;; + ?:/*) + ac_cv_path_GETCONF="$GETCONF" # Let the user override the test with a dos path. + ;; + *) + IFS="${IFS= }"; ac_save_ifs="$IFS"; IFS=":" + ac_dummy="$PATH" + for ac_dir in $ac_dummy; do + test -z "$ac_dir" && ac_dir=. + if test -f $ac_dir/$ac_word; then + ac_cv_path_GETCONF="$ac_dir/$ac_word" + break + fi + done + IFS="$ac_save_ifs" + ;; +esac +fi +GETCONF="$ac_cv_path_GETCONF" +if test -n "$GETCONF"; then + echo "$ac_t""$GETCONF" 1>&6 +else + echo "$ac_t""no" 1>&6 +fi + + if test -z "$GETCONF" ; then + { echo "configure: error: getconf required to configure large file support" 1>&2; exit 1; } + fi + LFS_CFLAGS=`$GETCONF LFS_CFLAGS` + LFS_LDFLAGS=`$GETCONF LFS_LDFLAGS` + LFS_LIBS=`$GETCONF LFS_LIBS` + ;; + *) + echo "$ac_t""maybe" 1>&6 + LFS_CFLAGS="-D_LARGEFILE_SOURCE -D_FILE_OFFSET_BITS=64" + LFS_LDFLAGS="" + LFS_LIBS="" + ;; + esac + + + +fi + +echo $ac_n "checking for ANSI C header files""... $ac_c" 1>&6 +echo "configure:8385: checking for ANSI C header files" >&5 +if eval "test \"`echo '$''{'ac_cv_header_stdc'+set}'`\" = set"; then + echo $ac_n "(cached) $ac_c" 1>&6 +else + cat > conftest.$ac_ext < +#include +#include +#include +EOF +ac_try="$ac_cpp conftest.$ac_ext >/dev/null 2>conftest.out" +{ (eval echo configure:8398: \"$ac_try\") 1>&5; (eval $ac_try) 2>&5; } +ac_err=`grep -v '^ *+' conftest.out | grep -v "^conftest.${ac_ext}\$"` +if test -z "$ac_err"; then + rm -rf conftest* + ac_cv_header_stdc=yes +else + echo "$ac_err" >&5 + echo "configure: failed program was:" >&5 + cat conftest.$ac_ext >&5 + rm -rf conftest* + ac_cv_header_stdc=no +fi +rm -f conftest* + +if test $ac_cv_header_stdc = yes; then + # SunOS 4.x string.h does not declare mem*, contrary to ANSI. +cat > conftest.$ac_ext < +EOF +if (eval "$ac_cpp conftest.$ac_ext") 2>&5 | + egrep "memchr" >/dev/null 2>&1; then + : +else + rm -rf conftest* + ac_cv_header_stdc=no +fi +rm -f conftest* + +fi + +if test $ac_cv_header_stdc = yes; then + # ISC 2.0.2 stdlib.h does not declare free, contrary to ANSI. 
+cat > conftest.$ac_ext < +EOF +if (eval "$ac_cpp conftest.$ac_ext") 2>&5 | + egrep "free" >/dev/null 2>&1; then + : +else + rm -rf conftest* + ac_cv_header_stdc=no +fi +rm -f conftest* + +fi + +if test $ac_cv_header_stdc = yes; then + # /bin/cc in Irix-4.0.5 gets non-ANSI ctype macros unless using -ansi. +if test "$cross_compiling" = yes; then + : +else + cat > conftest.$ac_ext < +#define ISLOWER(c) ('a' <= (c) && (c) <= 'z') +#define TOUPPER(c) (ISLOWER(c) ? 'A' + ((c) - 'a') : (c)) +#define XOR(e, f) (((e) && !(f)) || (!(e) && (f))) +int main () { int i; for (i = 0; i < 256; i++) +if (XOR (islower (i), ISLOWER (i)) || toupper (i) != TOUPPER (i)) exit(2); +exit (0); } + +EOF +if { (eval echo configure:8465: \"$ac_link\") 1>&5; (eval $ac_link) 2>&5; } && test -s conftest${ac_exeext} && (./conftest; exit) 2>/dev/null +then + : +else + echo "configure: failed program was:" >&5 + cat conftest.$ac_ext >&5 + rm -fr conftest* + ac_cv_header_stdc=no +fi +rm -fr conftest* +fi + +fi +fi + +echo "$ac_t""$ac_cv_header_stdc" 1>&6 +if test $ac_cv_header_stdc = yes; then + cat >> confdefs.h <<\EOF +#define STDC_HEADERS 1 +EOF + +fi + + +if test x"$ac_cv_header_stdc" = xno ; then + for ac_hdr in memory.h stdlib.h strings.h +do +ac_safe=`echo "$ac_hdr" | sed 'y%./+-%__p_%'` +echo $ac_n "checking for $ac_hdr""... $ac_c" 1>&6 +echo "configure:8494: checking for $ac_hdr" >&5 +if eval "test \"`echo '$''{'ac_cv_header_$ac_safe'+set}'`\" = set"; then + echo $ac_n "(cached) $ac_c" 1>&6 +else + cat > conftest.$ac_ext < +EOF +ac_try="$ac_cpp conftest.$ac_ext >/dev/null 2>conftest.out" +{ (eval echo configure:8504: \"$ac_try\") 1>&5; (eval $ac_try) 2>&5; } +ac_err=`grep -v '^ *+' conftest.out | grep -v "^conftest.${ac_ext}\$"` +if test -z "$ac_err"; then + rm -rf conftest* + eval "ac_cv_header_$ac_safe=yes" +else + echo "$ac_err" >&5 + echo "configure: failed program was:" >&5 + cat conftest.$ac_ext >&5 + rm -rf conftest* + eval "ac_cv_header_$ac_safe=no" +fi +rm -f conftest* +fi +if eval "test \"`echo '$ac_cv_header_'$ac_safe`\" = yes"; then + echo "$ac_t""yes" 1>&6 + ac_tr_hdr=HAVE_`echo $ac_hdr | sed 'y%abcdefghijklmnopqrstuvwxyz./-%ABCDEFGHIJKLMNOPQRSTUVWXYZ___%'` + cat >> confdefs.h <&6 +fi +done + + for ac_func in memcpy strchr +do +echo $ac_n "checking for $ac_func""... $ac_c" 1>&6 +echo "configure:8533: checking for $ac_func" >&5 +if eval "test \"`echo '$''{'ac_cv_func_$ac_func'+set}'`\" = set"; then + echo $ac_n "(cached) $ac_c" 1>&6 +else + cat > conftest.$ac_ext < +/* Override any gcc2 internal prototype to avoid an error. */ +/* We use char because int might match the return type of a gcc2 + builtin and then its argument prototype would still apply. */ +char $ac_func(); + +int main() { + +/* The GNU C library defines this for functions which it implements + to always fail with ENOSYS. Some functions are actually named + something starting with __ and the normal name is an alias. 
*/ +#if defined (__stub_$ac_func) || defined (__stub___$ac_func) +choke me +#else +$ac_func(); +#endif + +; return 0; } +EOF +if { (eval echo configure:8561: \"$ac_link\") 1>&5; (eval $ac_link) 2>&5; } && test -s conftest${ac_exeext}; then + rm -rf conftest* + eval "ac_cv_func_$ac_func=yes" +else + echo "configure: failed program was:" >&5 + cat conftest.$ac_ext >&5 + rm -rf conftest* + eval "ac_cv_func_$ac_func=no" +fi +rm -f conftest* +fi + +if eval "test \"`echo '$ac_cv_func_'$ac_func`\" = yes"; then + echo "$ac_t""yes" 1>&6 + ac_tr_func=HAVE_`echo $ac_func | tr 'abcdefghijklmnopqrstuvwxyz' 'ABCDEFGHIJKLMNOPQRSTUVWXYZ'` + cat >> confdefs.h <&6 +fi +done + +fi + +ac_header_dirent=no +for ac_hdr in dirent.h sys/ndir.h sys/dir.h ndir.h +do +ac_safe=`echo "$ac_hdr" | sed 'y%./+-%__p_%'` +echo $ac_n "checking for $ac_hdr that defines DIR""... $ac_c" 1>&6 +echo "configure:8592: checking for $ac_hdr that defines DIR" >&5 +if eval "test \"`echo '$''{'ac_cv_header_dirent_$ac_safe'+set}'`\" = set"; then + echo $ac_n "(cached) $ac_c" 1>&6 +else + cat > conftest.$ac_ext < +#include <$ac_hdr> +int main() { +DIR *dirp = 0; +; return 0; } +EOF +if { (eval echo configure:8605: \"$ac_compile\") 1>&5; (eval $ac_compile) 2>&5; }; then + rm -rf conftest* + eval "ac_cv_header_dirent_$ac_safe=yes" +else + echo "configure: failed program was:" >&5 + cat conftest.$ac_ext >&5 + rm -rf conftest* + eval "ac_cv_header_dirent_$ac_safe=no" +fi +rm -f conftest* +fi +if eval "test \"`echo '$ac_cv_header_dirent_'$ac_safe`\" = yes"; then + echo "$ac_t""yes" 1>&6 + ac_tr_hdr=HAVE_`echo $ac_hdr | sed 'y%abcdefghijklmnopqrstuvwxyz./-%ABCDEFGHIJKLMNOPQRSTUVWXYZ___%'` + cat >> confdefs.h <&6 +fi +done +# Two versions of opendir et al. are in -ldir and -lx on SCO Xenix. +if test $ac_header_dirent = dirent.h; then +echo $ac_n "checking for opendir in -ldir""... $ac_c" 1>&6 +echo "configure:8630: checking for opendir in -ldir" >&5 +ac_lib_var=`echo dir'_'opendir | sed 'y%./+-%__p_%'` +if eval "test \"`echo '$''{'ac_cv_lib_$ac_lib_var'+set}'`\" = set"; then + echo $ac_n "(cached) $ac_c" 1>&6 +else + ac_save_LIBS="$LIBS" +LIBS="-ldir $LIBS" +cat > conftest.$ac_ext <&5; (eval $ac_link) 2>&5; } && test -s conftest${ac_exeext}; then + rm -rf conftest* + eval "ac_cv_lib_$ac_lib_var=yes" +else + echo "configure: failed program was:" >&5 + cat conftest.$ac_ext >&5 + rm -rf conftest* + eval "ac_cv_lib_$ac_lib_var=no" +fi +rm -f conftest* +LIBS="$ac_save_LIBS" + +fi +if eval "test \"`echo '$ac_cv_lib_'$ac_lib_var`\" = yes"; then + echo "$ac_t""yes" 1>&6 + LIBS="$LIBS -ldir" +else + echo "$ac_t""no" 1>&6 +fi + +else +echo $ac_n "checking for opendir in -lx""... $ac_c" 1>&6 +echo "configure:8671: checking for opendir in -lx" >&5 +ac_lib_var=`echo x'_'opendir | sed 'y%./+-%__p_%'` +if eval "test \"`echo '$''{'ac_cv_lib_$ac_lib_var'+set}'`\" = set"; then + echo $ac_n "(cached) $ac_c" 1>&6 +else + ac_save_LIBS="$LIBS" +LIBS="-lx $LIBS" +cat > conftest.$ac_ext <&5; (eval $ac_link) 2>&5; } && test -s conftest${ac_exeext}; then + rm -rf conftest* + eval "ac_cv_lib_$ac_lib_var=yes" +else + echo "configure: failed program was:" >&5 + cat conftest.$ac_ext >&5 + rm -rf conftest* + eval "ac_cv_lib_$ac_lib_var=no" +fi +rm -f conftest* +LIBS="$ac_save_LIBS" + +fi +if eval "test \"`echo '$ac_cv_lib_'$ac_lib_var`\" = yes"; then + echo "$ac_t""yes" 1>&6 + LIBS="$LIBS -lx" +else + echo "$ac_t""no" 1>&6 +fi + +fi + +echo $ac_n "checking whether time.h and sys/time.h may both be included""... 
$ac_c" 1>&6 +echo "configure:8713: checking whether time.h and sys/time.h may both be included" >&5 +if eval "test \"`echo '$''{'ac_cv_header_time'+set}'`\" = set"; then + echo $ac_n "(cached) $ac_c" 1>&6 +else + cat > conftest.$ac_ext < +#include +#include +int main() { +struct tm *tp; +; return 0; } +EOF +if { (eval echo configure:8727: \"$ac_compile\") 1>&5; (eval $ac_compile) 2>&5; }; then + rm -rf conftest* + ac_cv_header_time=yes +else + echo "configure: failed program was:" >&5 + cat conftest.$ac_ext >&5 + rm -rf conftest* + ac_cv_header_time=no +fi +rm -f conftest* +fi + +echo "$ac_t""$ac_cv_header_time" 1>&6 +if test $ac_cv_header_time = yes; then + cat >> confdefs.h <<\EOF +#define TIME_WITH_SYS_TIME 1 +EOF + +fi + +echo $ac_n "checking for sys/wait.h that is POSIX.1 compatible""... $ac_c" 1>&6 +echo "configure:8748: checking for sys/wait.h that is POSIX.1 compatible" >&5 +if eval "test \"`echo '$''{'ac_cv_header_sys_wait_h'+set}'`\" = set"; then + echo $ac_n "(cached) $ac_c" 1>&6 +else + cat > conftest.$ac_ext < +#include +#ifndef WEXITSTATUS +#define WEXITSTATUS(stat_val) ((unsigned)(stat_val) >> 8) +#endif +#ifndef WIFEXITED +#define WIFEXITED(stat_val) (((stat_val) & 255) == 0) +#endif +int main() { +int s; +wait (&s); +s = WIFEXITED (s) ? WEXITSTATUS (s) : 1; +; return 0; } +EOF +if { (eval echo configure:8769: \"$ac_compile\") 1>&5; (eval $ac_compile) 2>&5; }; then + rm -rf conftest* + ac_cv_header_sys_wait_h=yes +else + echo "configure: failed program was:" >&5 + cat conftest.$ac_ext >&5 + rm -rf conftest* + ac_cv_header_sys_wait_h=no +fi +rm -f conftest* +fi + +echo "$ac_t""$ac_cv_header_sys_wait_h" 1>&6 +if test $ac_cv_header_sys_wait_h = yes; then + cat >> confdefs.h <<\EOF +#define HAVE_SYS_WAIT_H 1 +EOF + +fi + + +for ac_hdr in crypt.h inttypes.h limits.h ndbm.h pam/pam_appl.h stdbool.h \ + stddef.h stdint.h string.h sys/bitypes.h sys/filio.h \ + sys/loadavg.h sys/param.h sys/select.h sys/sysinfo.h \ + sys/time.h unistd.h +do +ac_safe=`echo "$ac_hdr" | sed 'y%./+-%__p_%'` +echo $ac_n "checking for $ac_hdr""... $ac_c" 1>&6 +echo "configure:8797: checking for $ac_hdr" >&5 +if eval "test \"`echo '$''{'ac_cv_header_$ac_safe'+set}'`\" = set"; then + echo $ac_n "(cached) $ac_c" 1>&6 +else + cat > conftest.$ac_ext < +EOF +ac_try="$ac_cpp conftest.$ac_ext >/dev/null 2>conftest.out" +{ (eval echo configure:8807: \"$ac_try\") 1>&5; (eval $ac_try) 2>&5; } +ac_err=`grep -v '^ *+' conftest.out | grep -v "^conftest.${ac_ext}\$"` +if test -z "$ac_err"; then + rm -rf conftest* + eval "ac_cv_header_$ac_safe=yes" +else + echo "$ac_err" >&5 + echo "configure: failed program was:" >&5 + cat conftest.$ac_ext >&5 + rm -rf conftest* + eval "ac_cv_header_$ac_safe=no" +fi +rm -f conftest* +fi +if eval "test \"`echo '$ac_cv_header_'$ac_safe`\" = yes"; then + echo "$ac_t""yes" 1>&6 + ac_tr_hdr=HAVE_`echo $ac_hdr | sed 'y%abcdefghijklmnopqrstuvwxyz./-%ABCDEFGHIJKLMNOPQRSTUVWXYZ___%'` + cat >> confdefs.h <&6 +fi +done + + +if test x"$ac_cv_header_ndbm_h" = xno ; then + for ac_hdr in db1/ndbm.h gdbm-ndbm.h +do +ac_safe=`echo "$ac_hdr" | sed 'y%./+-%__p_%'` +echo $ac_n "checking for $ac_hdr""... 
$ac_c" 1>&6 +echo "configure:8839: checking for $ac_hdr" >&5 +if eval "test \"`echo '$''{'ac_cv_header_$ac_safe'+set}'`\" = set"; then + echo $ac_n "(cached) $ac_c" 1>&6 +else + cat > conftest.$ac_ext < +EOF +ac_try="$ac_cpp conftest.$ac_ext >/dev/null 2>conftest.out" +{ (eval echo configure:8849: \"$ac_try\") 1>&5; (eval $ac_try) 2>&5; } +ac_err=`grep -v '^ *+' conftest.out | grep -v "^conftest.${ac_ext}\$"` +if test -z "$ac_err"; then + rm -rf conftest* + eval "ac_cv_header_$ac_safe=yes" +else + echo "$ac_err" >&5 + echo "configure: failed program was:" >&5 + cat conftest.$ac_ext >&5 + rm -rf conftest* + eval "ac_cv_header_$ac_safe=no" +fi +rm -f conftest* +fi +if eval "test \"`echo '$ac_cv_header_'$ac_safe`\" = yes"; then + echo "$ac_t""yes" 1>&6 + ac_tr_hdr=HAVE_`echo $ac_hdr | sed 'y%abcdefghijklmnopqrstuvwxyz./-%ABCDEFGHIJKLMNOPQRSTUVWXYZ___%'` + cat >> confdefs.h <&6 +fi +done + +fi + + +echo $ac_n "checking whether h_errno must be declared""... $ac_c" 1>&6 +echo "configure:8879: checking whether h_errno must be declared" >&5 +if eval "test \"`echo '$''{'inn_cv_herrno_need_decl'+set}'`\" = set"; then + echo $ac_n "(cached) $ac_c" 1>&6 +else + cat > conftest.$ac_ext < +int main() { +h_errno = 0; +; return 0; } +EOF +if { (eval echo configure:8891: \"$ac_compile\") 1>&5; (eval $ac_compile) 2>&5; }; then + rm -rf conftest* + inn_cv_herrno_need_decl=no +else + echo "configure: failed program was:" >&5 + cat conftest.$ac_ext >&5 + rm -rf conftest* + inn_cv_herrno_need_decl=yes +fi +rm -f conftest* +fi + +echo "$ac_t""$inn_cv_herrno_need_decl" 1>&6 +if test "$inn_cv_herrno_need_decl" = yes ; then + cat >> confdefs.h <<\EOF +#define NEED_HERRNO_DECLARATION 1 +EOF + +fi + + + + +echo $ac_n "checking whether inet_aton must be declared""... $ac_c" 1>&6 +echo "configure:8915: checking whether inet_aton must be declared" >&5 +if eval "test \"`echo '$''{'inn_cv_decl_needed_inet_aton'+set}'`\" = set"; then + echo $ac_n "(cached) $ac_c" 1>&6 +else + cat > conftest.$ac_ext < +#include +#if STDC_HEADERS +# include +# include +#else +# if HAVE_STDLIB_H +# include +# endif +# if !HAVE_STRCHR +# define strchr index +# define strrchr rindex +# endif +#endif +#if HAVE_STRING_H +# if !STDC_HEADERS && HAVE_MEMORY_H +# include +# endif +# include +#else +# if HAVE_STRINGS_H +# include +# endif +#endif +#if HAVE_UNISTD_H +# include +#endif +#include +#include +int main() { +char *(*pfn) = (char *(*)) inet_aton +; return 0; } +EOF +if { (eval echo configure:8955: \"$ac_compile\") 1>&5; (eval $ac_compile) 2>&5; }; then + rm -rf conftest* + inn_cv_decl_needed_inet_aton=no +else + echo "configure: failed program was:" >&5 + cat conftest.$ac_ext >&5 + rm -rf conftest* + inn_cv_decl_needed_inet_aton=yes +fi +rm -f conftest* +fi + +if test $inn_cv_decl_needed_inet_aton = yes ; then + echo "$ac_t""yes" 1>&6 + cat >> confdefs.h <<\EOF +#define NEED_DECLARATION_INET_ATON 1 +EOF + +else + echo "$ac_t""no" 1>&6 +fi +echo $ac_n "checking whether inet_ntoa must be declared""... 
$ac_c" 1>&6 +echo "configure:8977: checking whether inet_ntoa must be declared" >&5 +if eval "test \"`echo '$''{'inn_cv_decl_needed_inet_ntoa'+set}'`\" = set"; then + echo $ac_n "(cached) $ac_c" 1>&6 +else + cat > conftest.$ac_ext < +#include +#if STDC_HEADERS +# include +# include +#else +# if HAVE_STDLIB_H +# include +# endif +# if !HAVE_STRCHR +# define strchr index +# define strrchr rindex +# endif +#endif +#if HAVE_STRING_H +# if !STDC_HEADERS && HAVE_MEMORY_H +# include +# endif +# include +#else +# if HAVE_STRINGS_H +# include +# endif +#endif +#if HAVE_UNISTD_H +# include +#endif +#include +#include +int main() { +char *(*pfn) = (char *(*)) inet_ntoa +; return 0; } +EOF +if { (eval echo configure:9017: \"$ac_compile\") 1>&5; (eval $ac_compile) 2>&5; }; then + rm -rf conftest* + inn_cv_decl_needed_inet_ntoa=no +else + echo "configure: failed program was:" >&5 + cat conftest.$ac_ext >&5 + rm -rf conftest* + inn_cv_decl_needed_inet_ntoa=yes +fi +rm -f conftest* +fi + +if test $inn_cv_decl_needed_inet_ntoa = yes ; then + echo "$ac_t""yes" 1>&6 + cat >> confdefs.h <<\EOF +#define NEED_DECLARATION_INET_NTOA 1 +EOF + +else + echo "$ac_t""no" 1>&6 +fi +echo $ac_n "checking whether snprintf must be declared""... $ac_c" 1>&6 +echo "configure:9039: checking whether snprintf must be declared" >&5 +if eval "test \"`echo '$''{'inn_cv_decl_needed_snprintf'+set}'`\" = set"; then + echo $ac_n "(cached) $ac_c" 1>&6 +else + cat > conftest.$ac_ext < +#include +#if STDC_HEADERS +# include +# include +#else +# if HAVE_STDLIB_H +# include +# endif +# if !HAVE_STRCHR +# define strchr index +# define strrchr rindex +# endif +#endif +#if HAVE_STRING_H +# if !STDC_HEADERS && HAVE_MEMORY_H +# include +# endif +# include +#else +# if HAVE_STRINGS_H +# include +# endif +#endif +#if HAVE_UNISTD_H +# include +#endif + +int main() { +char *(*pfn) = (char *(*)) snprintf +; return 0; } +EOF +if { (eval echo configure:9078: \"$ac_compile\") 1>&5; (eval $ac_compile) 2>&5; }; then + rm -rf conftest* + inn_cv_decl_needed_snprintf=no +else + echo "configure: failed program was:" >&5 + cat conftest.$ac_ext >&5 + rm -rf conftest* + inn_cv_decl_needed_snprintf=yes +fi +rm -f conftest* +fi + +if test $inn_cv_decl_needed_snprintf = yes ; then + echo "$ac_t""yes" 1>&6 + cat >> confdefs.h <<\EOF +#define NEED_DECLARATION_SNPRINTF 1 +EOF + +else + echo "$ac_t""no" 1>&6 +fi +echo $ac_n "checking whether vsnprintf must be declared""... 
$ac_c" 1>&6 +echo "configure:9100: checking whether vsnprintf must be declared" >&5 +if eval "test \"`echo '$''{'inn_cv_decl_needed_vsnprintf'+set}'`\" = set"; then + echo $ac_n "(cached) $ac_c" 1>&6 +else + cat > conftest.$ac_ext < +#include +#if STDC_HEADERS +# include +# include +#else +# if HAVE_STDLIB_H +# include +# endif +# if !HAVE_STRCHR +# define strchr index +# define strrchr rindex +# endif +#endif +#if HAVE_STRING_H +# if !STDC_HEADERS && HAVE_MEMORY_H +# include +# endif +# include +#else +# if HAVE_STRINGS_H +# include +# endif +#endif +#if HAVE_UNISTD_H +# include +#endif + +int main() { +char *(*pfn) = (char *(*)) vsnprintf +; return 0; } +EOF +if { (eval echo configure:9139: \"$ac_compile\") 1>&5; (eval $ac_compile) 2>&5; }; then + rm -rf conftest* + inn_cv_decl_needed_vsnprintf=no +else + echo "configure: failed program was:" >&5 + cat conftest.$ac_ext >&5 + rm -rf conftest* + inn_cv_decl_needed_vsnprintf=yes +fi +rm -f conftest* +fi + +if test $inn_cv_decl_needed_vsnprintf = yes ; then + echo "$ac_t""yes" 1>&6 + cat >> confdefs.h <<\EOF +#define NEED_DECLARATION_VSNPRINTF 1 +EOF + +else + echo "$ac_t""no" 1>&6 +fi + +echo $ac_n "checking whether byte ordering is bigendian""... $ac_c" 1>&6 +echo "configure:9162: checking whether byte ordering is bigendian" >&5 +if eval "test \"`echo '$''{'ac_cv_c_bigendian'+set}'`\" = set"; then + echo $ac_n "(cached) $ac_c" 1>&6 +else + ac_cv_c_bigendian=unknown +# See if sys/param.h defines the BYTE_ORDER macro. +cat > conftest.$ac_ext < +#include +int main() { + +#if !BYTE_ORDER || !BIG_ENDIAN || !LITTLE_ENDIAN + bogus endian macros +#endif +; return 0; } +EOF +if { (eval echo configure:9180: \"$ac_compile\") 1>&5; (eval $ac_compile) 2>&5; }; then + rm -rf conftest* + # It does; now see whether it defined to BIG_ENDIAN or not. +cat > conftest.$ac_ext < +#include +int main() { + +#if BYTE_ORDER != BIG_ENDIAN + not big endian +#endif +; return 0; } +EOF +if { (eval echo configure:9195: \"$ac_compile\") 1>&5; (eval $ac_compile) 2>&5; }; then + rm -rf conftest* + ac_cv_c_bigendian=yes +else + echo "configure: failed program was:" >&5 + cat conftest.$ac_ext >&5 + rm -rf conftest* + ac_cv_c_bigendian=no +fi +rm -f conftest* +else + echo "configure: failed program was:" >&5 + cat conftest.$ac_ext >&5 +fi +rm -f conftest* +if test $ac_cv_c_bigendian = unknown; then +if test "$cross_compiling" = yes; then + { echo "configure: error: can not run test program while cross compiling" 1>&2; exit 1; } +else + cat > conftest.$ac_ext <&5; (eval $ac_link) 2>&5; } && test -s conftest${ac_exeext} && (./conftest; exit) 2>/dev/null +then + ac_cv_c_bigendian=no +else + echo "configure: failed program was:" >&5 + cat conftest.$ac_ext >&5 + rm -fr conftest* + ac_cv_c_bigendian=yes +fi +rm -fr conftest* +fi + +fi +fi + +echo "$ac_t""$ac_cv_c_bigendian" 1>&6 +if test $ac_cv_c_bigendian = yes; then + cat >> confdefs.h <<\EOF +#define WORDS_BIGENDIAN 1 +EOF + +fi + +echo $ac_n "checking for working const""... 
$ac_c" 1>&6 +echo "configure:9252: checking for working const" >&5 +if eval "test \"`echo '$''{'ac_cv_c_const'+set}'`\" = set"; then + echo $ac_n "(cached) $ac_c" 1>&6 +else + cat > conftest.$ac_ext <j = 5; +} +{ /* ULTRIX-32 V3.1 (Rev 9) vcc rejects this */ + const int foo = 10; +} + +; return 0; } +EOF +if { (eval echo configure:9306: \"$ac_compile\") 1>&5; (eval $ac_compile) 2>&5; }; then + rm -rf conftest* + ac_cv_c_const=yes +else + echo "configure: failed program was:" >&5 + cat conftest.$ac_ext >&5 + rm -rf conftest* + ac_cv_c_const=no +fi +rm -f conftest* +fi + +echo "$ac_t""$ac_cv_c_const" 1>&6 +if test $ac_cv_c_const = no; then + cat >> confdefs.h <<\EOF +#define const +EOF + +fi + +echo $ac_n "checking for st_blksize in struct stat""... $ac_c" 1>&6 +echo "configure:9327: checking for st_blksize in struct stat" >&5 +if eval "test \"`echo '$''{'ac_cv_struct_st_blksize'+set}'`\" = set"; then + echo $ac_n "(cached) $ac_c" 1>&6 +else + cat > conftest.$ac_ext < +#include +int main() { +struct stat s; s.st_blksize; +; return 0; } +EOF +if { (eval echo configure:9340: \"$ac_compile\") 1>&5; (eval $ac_compile) 2>&5; }; then + rm -rf conftest* + ac_cv_struct_st_blksize=yes +else + echo "configure: failed program was:" >&5 + cat conftest.$ac_ext >&5 + rm -rf conftest* + ac_cv_struct_st_blksize=no +fi +rm -f conftest* +fi + +echo "$ac_t""$ac_cv_struct_st_blksize" 1>&6 +if test $ac_cv_struct_st_blksize = yes; then + cat >> confdefs.h <<\EOF +#define HAVE_ST_BLKSIZE 1 +EOF + +fi + +echo $ac_n "checking whether struct tm is in sys/time.h or time.h""... $ac_c" 1>&6 +echo "configure:9361: checking whether struct tm is in sys/time.h or time.h" >&5 +if eval "test \"`echo '$''{'ac_cv_struct_tm'+set}'`\" = set"; then + echo $ac_n "(cached) $ac_c" 1>&6 +else + cat > conftest.$ac_ext < +#include +int main() { +struct tm *tp; tp->tm_sec; +; return 0; } +EOF +if { (eval echo configure:9374: \"$ac_compile\") 1>&5; (eval $ac_compile) 2>&5; }; then + rm -rf conftest* + ac_cv_struct_tm=time.h +else + echo "configure: failed program was:" >&5 + cat conftest.$ac_ext >&5 + rm -rf conftest* + ac_cv_struct_tm=sys/time.h +fi +rm -f conftest* +fi + +echo "$ac_t""$ac_cv_struct_tm" 1>&6 +if test $ac_cv_struct_tm = sys/time.h; then + cat >> confdefs.h <<\EOF +#define TM_IN_SYS_TIME 1 +EOF + +fi + +echo $ac_n "checking for size_t""... $ac_c" 1>&6 +echo "configure:9395: checking for size_t" >&5 +if eval "test \"`echo '$''{'ac_cv_type_size_t'+set}'`\" = set"; then + echo $ac_n "(cached) $ac_c" 1>&6 +else + cat > conftest.$ac_ext < +#if STDC_HEADERS +#include +#include +#endif +EOF +if (eval "$ac_cpp conftest.$ac_ext") 2>&5 | + egrep "(^|[^a-zA-Z_0-9])size_t[^a-zA-Z_0-9]" >/dev/null 2>&1; then + rm -rf conftest* + ac_cv_type_size_t=yes +else + rm -rf conftest* + ac_cv_type_size_t=no +fi +rm -f conftest* + +fi +echo "$ac_t""$ac_cv_type_size_t" 1>&6 +if test $ac_cv_type_size_t = no; then + cat >> confdefs.h <<\EOF +#define size_t unsigned +EOF + +fi + +echo $ac_n "checking for uid_t in sys/types.h""... 
$ac_c" 1>&6 +echo "configure:9428: checking for uid_t in sys/types.h" >&5 +if eval "test \"`echo '$''{'ac_cv_type_uid_t'+set}'`\" = set"; then + echo $ac_n "(cached) $ac_c" 1>&6 +else + cat > conftest.$ac_ext < +EOF +if (eval "$ac_cpp conftest.$ac_ext") 2>&5 | + egrep "uid_t" >/dev/null 2>&1; then + rm -rf conftest* + ac_cv_type_uid_t=yes +else + rm -rf conftest* + ac_cv_type_uid_t=no +fi +rm -f conftest* + +fi + +echo "$ac_t""$ac_cv_type_uid_t" 1>&6 +if test $ac_cv_type_uid_t = no; then + cat >> confdefs.h <<\EOF +#define uid_t int +EOF + + cat >> confdefs.h <<\EOF +#define gid_t int +EOF + +fi + +echo $ac_n "checking for off_t""... $ac_c" 1>&6 +echo "configure:9462: checking for off_t" >&5 +if eval "test \"`echo '$''{'ac_cv_type_off_t'+set}'`\" = set"; then + echo $ac_n "(cached) $ac_c" 1>&6 +else + cat > conftest.$ac_ext < +#if STDC_HEADERS +#include +#include +#endif +EOF +if (eval "$ac_cpp conftest.$ac_ext") 2>&5 | + egrep "(^|[^a-zA-Z_0-9])off_t[^a-zA-Z_0-9]" >/dev/null 2>&1; then + rm -rf conftest* + ac_cv_type_off_t=yes +else + rm -rf conftest* + ac_cv_type_off_t=no +fi +rm -f conftest* + +fi +echo "$ac_t""$ac_cv_type_off_t" 1>&6 +if test $ac_cv_type_off_t = no; then + cat >> confdefs.h <<\EOF +#define off_t long +EOF + +fi + +echo $ac_n "checking for pid_t""... $ac_c" 1>&6 +echo "configure:9495: checking for pid_t" >&5 +if eval "test \"`echo '$''{'ac_cv_type_pid_t'+set}'`\" = set"; then + echo $ac_n "(cached) $ac_c" 1>&6 +else + cat > conftest.$ac_ext < +#if STDC_HEADERS +#include +#include +#endif +EOF +if (eval "$ac_cpp conftest.$ac_ext") 2>&5 | + egrep "(^|[^a-zA-Z_0-9])pid_t[^a-zA-Z_0-9]" >/dev/null 2>&1; then + rm -rf conftest* + ac_cv_type_pid_t=yes +else + rm -rf conftest* + ac_cv_type_pid_t=no +fi +rm -f conftest* + +fi +echo "$ac_t""$ac_cv_type_pid_t" 1>&6 +if test $ac_cv_type_pid_t = no; then + cat >> confdefs.h <<\EOF +#define pid_t int +EOF + +fi + +echo $ac_n "checking for ptrdiff_t""... $ac_c" 1>&6 +echo "configure:9528: checking for ptrdiff_t" >&5 +if eval "test \"`echo '$''{'ac_cv_type_ptrdiff_t'+set}'`\" = set"; then + echo $ac_n "(cached) $ac_c" 1>&6 +else + cat > conftest.$ac_ext < +#if STDC_HEADERS +#include +#include +#endif +EOF +if (eval "$ac_cpp conftest.$ac_ext") 2>&5 | + egrep "(^|[^a-zA-Z_0-9])ptrdiff_t[^a-zA-Z_0-9]" >/dev/null 2>&1; then + rm -rf conftest* + ac_cv_type_ptrdiff_t=yes +else + rm -rf conftest* + ac_cv_type_ptrdiff_t=no +fi +rm -f conftest* + +fi +echo "$ac_t""$ac_cv_type_ptrdiff_t" 1>&6 +if test $ac_cv_type_ptrdiff_t = no; then + cat >> confdefs.h <<\EOF +#define ptrdiff_t long +EOF + +fi + +echo $ac_n "checking for ssize_t""... $ac_c" 1>&6 +echo "configure:9561: checking for ssize_t" >&5 +if eval "test \"`echo '$''{'ac_cv_type_ssize_t'+set}'`\" = set"; then + echo $ac_n "(cached) $ac_c" 1>&6 +else + cat > conftest.$ac_ext < +#if STDC_HEADERS +#include +#include +#endif +EOF +if (eval "$ac_cpp conftest.$ac_ext") 2>&5 | + egrep "(^|[^a-zA-Z_0-9])ssize_t[^a-zA-Z_0-9]" >/dev/null 2>&1; then + rm -rf conftest* + ac_cv_type_ssize_t=yes +else + rm -rf conftest* + ac_cv_type_ssize_t=no +fi +rm -f conftest* + +fi +echo "$ac_t""$ac_cv_type_ssize_t" 1>&6 +if test $ac_cv_type_ssize_t = no; then + cat >> confdefs.h <<\EOF +#define ssize_t int +EOF + +fi + + + +echo $ac_n "checking for C99 variadic macros""... $ac_c" 1>&6 +echo "configure:9596: checking for C99 variadic macros" >&5 +if eval "test \"`echo '$''{'inn_cv_c_c99_vamacros'+set}'`\" = set"; then + echo $ac_n "(cached) $ac_c" 1>&6 +else + cat > conftest.$ac_ext < +#define error(...) 
fprintf(stderr, __VA_ARGS__) +int main() { +error("foo"); error("foo %d", 0); return 0; +; return 0; } +EOF +if { (eval echo configure:9609: \"$ac_compile\") 1>&5; (eval $ac_compile) 2>&5; }; then + rm -rf conftest* + inn_cv_c_c99_vamacros=yes +else + echo "configure: failed program was:" >&5 + cat conftest.$ac_ext >&5 + rm -rf conftest* + inn_cv_c_c99_vamacros=no +fi +rm -f conftest* +fi + +echo "$ac_t""$inn_cv_c_c99_vamacros" 1>&6 +if test $inn_cv_c_c99_vamacros = yes ; then + cat >> confdefs.h <<\EOF +#define HAVE_C99_VAMACROS 1 +EOF + +fi + + +echo $ac_n "checking for GNU-style variadic macros""... $ac_c" 1>&6 +echo "configure:9631: checking for GNU-style variadic macros" >&5 +if eval "test \"`echo '$''{'inn_cv_c_gnu_vamacros'+set}'`\" = set"; then + echo $ac_n "(cached) $ac_c" 1>&6 +else + cat > conftest.$ac_ext < +#define error(args...) fprintf(stderr, args) +int main() { +error("foo"); error("foo %d", 0); return 0; +; return 0; } +EOF +if { (eval echo configure:9644: \"$ac_compile\") 1>&5; (eval $ac_compile) 2>&5; }; then + rm -rf conftest* + inn_cv_c_gnu_vamacros=yes +else + echo "configure: failed program was:" >&5 + cat conftest.$ac_ext >&5 + rm -rf conftest* + inn_cv_c_gnu_vamacros=no +fi +rm -f conftest* +fi + +echo "$ac_t""$inn_cv_c_gnu_vamacros" 1>&6 +if test $inn_cv_c_gnu_vamacros = yes ; then + cat >> confdefs.h <<\EOF +#define HAVE_GNU_VAMACROS 1 +EOF + +fi + + +echo $ac_n "checking for long long int""... $ac_c" 1>&6 +echo "configure:9666: checking for long long int" >&5 +if eval "test \"`echo '$''{'inn_cv_c_long_long'+set}'`\" = set"; then + echo $ac_n "(cached) $ac_c" 1>&6 +else + cat > conftest.$ac_ext <&5; (eval $ac_compile) 2>&5; }; then + rm -rf conftest* + inn_cv_c_long_long=yes +else + echo "configure: failed program was:" >&5 + cat conftest.$ac_ext >&5 + rm -rf conftest* + inn_cv_c_long_long=no +fi +rm -f conftest* +fi + +echo "$ac_t""$inn_cv_c_long_long" 1>&6 +if test $inn_cv_c_long_long = yes ; then + cat >> confdefs.h <<\EOF +#define HAVE_LONG_LONG 1 +EOF + +fi + + + + +echo $ac_n "checking for sig_atomic_t""... 
$ac_c" 1>&6 +echo "configure:9702: checking for sig_atomic_t" >&5 +if eval "test \"`echo '$''{'ac_cv_type_sig_atomic_t'+set}'`\" = set"; then + echo $ac_n "(cached) $ac_c" 1>&6 +else + cat > conftest.$ac_ext < +#ifdef STDC_HEADERS +# include +# include +#endif +#include +EOF +if (eval "$ac_cpp conftest.$ac_ext") 2>&5 | + egrep "(^|[^a-zA-Z_0-9])sig_atomic_t[^a-zA-Z_0-9]" >/dev/null 2>&1; then + rm -rf conftest* + ac_cv_type_sig_atomic_t=yes +else + rm -rf conftest* + ac_cv_type_sig_atomic_t=no + +fi +rm -f conftest* + +fi + +echo "$ac_t""$ac_cv_type_sig_atomic_t" 1>&6 +if test x"$ac_cv_type_sig_atomic_t" = xno ; then + cat >> confdefs.h <&6 +echo "configure:9738: checking for socklen_t" >&5 +if eval "test \"`echo '$''{'ac_cv_type_socklen_t'+set}'`\" = set"; then + echo $ac_n "(cached) $ac_c" 1>&6 +else + cat > conftest.$ac_ext < +#ifdef STDC_HEADERS +# include +# include +#endif +#include +EOF +if (eval "$ac_cpp conftest.$ac_ext") 2>&5 | + egrep "(^|[^a-zA-Z_0-9])socklen_t[^a-zA-Z_0-9]" >/dev/null 2>&1; then + rm -rf conftest* + ac_cv_type_socklen_t=yes +else + rm -rf conftest* + ac_cv_type_socklen_t=no + +fi +rm -f conftest* + +fi + +echo "$ac_t""$ac_cv_type_socklen_t" 1>&6 +if test x"$ac_cv_type_socklen_t" = xno ; then + cat >> confdefs.h <&6 +echo "configure:9777: checking value of IOV_MAX" >&5 +if eval "test \"`echo '$''{'inn_cv_macro_iov_max'+set}'`\" = set"; then + echo $ac_n "(cached) $ac_c" 1>&6 +else + if test "$cross_compiling" = yes; then + 16 +else + cat > conftest.$ac_ext < +#include +#include +#include +#include +#ifdef HAVE_UNISTD_H +# include +#endif +#ifdef HAVE_LIMITS_H +# include +#endif + +int +main () +{ + int fd, size; + struct iovec array[1024]; + char data; + + FILE *f = fopen ("conftestval", "w"); + if (!f) return 1; +#ifdef IOV_MAX + fprintf (f, "set in limits.h\n"); +#else +# ifdef UIO_MAXIOV + fprintf (f, "%d\n", UIO_MAXIOV); +# else + fd = open ("/dev/null", O_WRONLY, 0666); + if (fd < 0) return 1; + for (size = 1; size <= 1024; size++) + { + array[size - 1].iov_base = &data; + array[size - 1].iov_len = sizeof data; + if (writev (fd, array, size) < 0) + { + if (errno != EINVAL) return 1; + fprintf(f, "%d\n", size - 1); + exit (0); + } + } + fprintf (f, "1024\n"); +# endif /* UIO_MAXIOV */ +#endif /* IOV_MAX */ + return 0; +} +EOF +if { (eval echo configure:9833: \"$ac_link\") 1>&5; (eval $ac_link) 2>&5; } && test -s conftest${ac_exeext} && (./conftest; exit) 2>/dev/null +then + inn_cv_macro_iov_max=`cat conftestval` +else + echo "configure: failed program was:" >&5 + cat conftest.$ac_ext >&5 + rm -fr conftest* + inn_cv_macro_iov_max=error +fi +rm -fr conftest* +fi + +if test x"$inn_cv_macro_iov_max" = xerror ; then + echo "configure: warning: probe failure, assuming 16" 1>&2 + inn_cv_macro_iov_max=16 +fi +fi + +echo "$ac_t""$inn_cv_macro_iov_max" 1>&6 +if test x"$inn_cv_macro_iov_max" != x"set in limits.h" ; then + cat >> confdefs.h <&6 +echo "configure:9861: checking for SUN_LEN" >&5 +if eval "test \"`echo '$''{'inn_cv_macro_sun_len'+set}'`\" = set"; then + echo $ac_n "(cached) $ac_c" 1>&6 +else + cat > conftest.$ac_ext < +#include +int main() { +struct sockaddr_un sun; +int i; + +i = SUN_LEN(&sun); +; return 0; } +EOF +if { (eval echo configure:9877: \"$ac_link\") 1>&5; (eval $ac_link) 2>&5; } && test -s conftest${ac_exeext}; then + rm -rf conftest* + inn_cv_macro_sun_len=yes +else + echo "configure: failed program was:" >&5 + cat conftest.$ac_ext >&5 + rm -rf conftest* + inn_cv_macro_sun_len=no +fi +rm -f conftest* +fi + +echo "$ac_t""$inn_cv_macro_sun_len" 
1>&6 +if test x"$inn_cv_macro_sun_len" = xyes ; then + cat >> confdefs.h <<\EOF +#define HAVE_SUN_LEN 1 +EOF + +fi + + +echo $ac_n "checking for tm_gmtoff in struct tm""... $ac_c" 1>&6 +echo "configure:9899: checking for tm_gmtoff in struct tm" >&5 +if eval "test \"`echo '$''{'inn_cv_struct_tm_gmtoff'+set}'`\" = set"; then + echo $ac_n "(cached) $ac_c" 1>&6 +else + cat > conftest.$ac_ext < +int main() { +struct tm t; t.tm_gmtoff = 3600 +; return 0; } +EOF +if { (eval echo configure:9911: \"$ac_link\") 1>&5; (eval $ac_link) 2>&5; } && test -s conftest${ac_exeext}; then + rm -rf conftest* + inn_cv_struct_tm_gmtoff=yes +else + echo "configure: failed program was:" >&5 + cat conftest.$ac_ext >&5 + rm -rf conftest* + inn_cv_struct_tm_gmtoff=no +fi +rm -f conftest* +fi + +echo "$ac_t""$inn_cv_struct_tm_gmtoff" 1>&6 +if test x"$inn_cv_struct_tm_gmtoff" = xyes ; then + cat >> confdefs.h <<\EOF +#define HAVE_TM_GMTOFF 1 +EOF + +fi + + +echo $ac_n "checking for tm_zone in struct tm""... $ac_c" 1>&6 +echo "configure:9933: checking for tm_zone in struct tm" >&5 +if eval "test \"`echo '$''{'inn_cv_struct_tm_zone'+set}'`\" = set"; then + echo $ac_n "(cached) $ac_c" 1>&6 +else + cat > conftest.$ac_ext < +int main() { +struct tm t; t.tm_zone = "UTC" +; return 0; } +EOF +if { (eval echo configure:9945: \"$ac_link\") 1>&5; (eval $ac_link) 2>&5; } && test -s conftest${ac_exeext}; then + rm -rf conftest* + inn_cv_struct_tm_zone=yes +else + echo "configure: failed program was:" >&5 + cat conftest.$ac_ext >&5 + rm -rf conftest* + inn_cv_struct_tm_zone=no +fi +rm -f conftest* +fi + +echo "$ac_t""$inn_cv_struct_tm_zone" 1>&6 +if test x"$inn_cv_struct_tm_zone" = xyes ; then + cat >> confdefs.h <<\EOF +#define HAVE_TM_ZONE 1 +EOF + +fi + + +echo $ac_n "checking for timezone variable""... $ac_c" 1>&6 +echo "configure:9967: checking for timezone variable" >&5 +if eval "test \"`echo '$''{'inn_cv_var_timezone'+set}'`\" = set"; then + echo $ac_n "(cached) $ac_c" 1>&6 +else + cat > conftest.$ac_ext < +int main() { +timezone = 3600; altzone = 7200 +; return 0; } +EOF +if { (eval echo configure:9979: \"$ac_link\") 1>&5; (eval $ac_link) 2>&5; } && test -s conftest${ac_exeext}; then + rm -rf conftest* + inn_cv_var_timezone=yes +else + echo "configure: failed program was:" >&5 + cat conftest.$ac_ext >&5 + rm -rf conftest* + inn_cv_var_timezone=no +fi +rm -f conftest* +fi + +echo "$ac_t""$inn_cv_var_timezone" 1>&6 +if test x"$inn_cv_var_timezone" = xyes ; then + cat >> confdefs.h <<\EOF +#define HAVE_VAR_TIMEZONE 1 +EOF + +fi + + +echo $ac_n "checking for tzname variable""... $ac_c" 1>&6 +echo "configure:10001: checking for tzname variable" >&5 +if eval "test \"`echo '$''{'inn_cv_var_tzname'+set}'`\" = set"; then + echo $ac_n "(cached) $ac_c" 1>&6 +else + cat > conftest.$ac_ext < +int main() { +*tzname = "UTC" +; return 0; } +EOF +if { (eval echo configure:10013: \"$ac_link\") 1>&5; (eval $ac_link) 2>&5; } && test -s conftest${ac_exeext}; then + rm -rf conftest* + inn_cv_var_tzname=yes +else + echo "configure: failed program was:" >&5 + cat conftest.$ac_ext >&5 + rm -rf conftest* + inn_cv_var_tzname=no +fi +rm -f conftest* +fi + +echo "$ac_t""$inn_cv_var_tzname" 1>&6 +if test x"$inn_cv_var_tzname" = xyes ; then + cat >> confdefs.h <<\EOF +#define HAVE_VAR_TZNAME 1 +EOF + +fi + + + +echo $ac_n "checking size of int""... 
$ac_c" 1>&6 +echo "configure:10036: checking size of int" >&5 +if eval "test \"`echo '$''{'ac_cv_sizeof_int'+set}'`\" = set"; then + echo $ac_n "(cached) $ac_c" 1>&6 +else + if test "$cross_compiling" = yes; then + ac_cv_sizeof_int=4 +else + cat > conftest.$ac_ext < +main() +{ + FILE *f = fopen("conftestval", "w"); + if (!f) exit(1); + fprintf(f, "%d\n", sizeof(int)); + exit(0); +} +EOF +if { (eval echo configure:10055: \"$ac_link\") 1>&5; (eval $ac_link) 2>&5; } && test -s conftest${ac_exeext} && (./conftest; exit) 2>/dev/null +then + ac_cv_sizeof_int=`cat conftestval` +else + echo "configure: failed program was:" >&5 + cat conftest.$ac_ext >&5 + rm -fr conftest* + ac_cv_sizeof_int=0 +fi +rm -fr conftest* +fi + + +fi +echo "$ac_t""$ac_cv_sizeof_int" 1>&6 +if test x"$ac_cv_sizeof_int" = x"4" ; then + INN_INT32=int +else + echo $ac_n "checking size of long""... $ac_c" 1>&6 +echo "configure:10074: checking size of long" >&5 +if eval "test \"`echo '$''{'ac_cv_sizeof_long'+set}'`\" = set"; then + echo $ac_n "(cached) $ac_c" 1>&6 +else + if test "$cross_compiling" = yes; then + ac_cv_sizeof_long=4 +else + cat > conftest.$ac_ext < +main() +{ + FILE *f = fopen("conftestval", "w"); + if (!f) exit(1); + fprintf(f, "%d\n", sizeof(long)); + exit(0); +} +EOF +if { (eval echo configure:10093: \"$ac_link\") 1>&5; (eval $ac_link) 2>&5; } && test -s conftest${ac_exeext} && (./conftest; exit) 2>/dev/null +then + ac_cv_sizeof_long=`cat conftestval` +else + echo "configure: failed program was:" >&5 + cat conftest.$ac_ext >&5 + rm -fr conftest* + ac_cv_sizeof_long=0 +fi +rm -fr conftest* +fi + + +fi +echo "$ac_t""$ac_cv_sizeof_long" 1>&6 +if test x"$ac_cv_sizeof_long" = x"4" ; then + INN_INT32=long +else + echo $ac_n "checking size of short""... $ac_c" 1>&6 +echo "configure:10112: checking size of short" >&5 +if eval "test \"`echo '$''{'ac_cv_sizeof_short'+set}'`\" = set"; then + echo $ac_n "(cached) $ac_c" 1>&6 +else + if test "$cross_compiling" = yes; then + ac_cv_sizeof_short=2 +else + cat > conftest.$ac_ext < +main() +{ + FILE *f = fopen("conftestval", "w"); + if (!f) exit(1); + fprintf(f, "%d\n", sizeof(short)); + exit(0); +} +EOF +if { (eval echo configure:10131: \"$ac_link\") 1>&5; (eval $ac_link) 2>&5; } && test -s conftest${ac_exeext} && (./conftest; exit) 2>/dev/null +then + ac_cv_sizeof_short=`cat conftestval` +else + echo "configure: failed program was:" >&5 + cat conftest.$ac_ext >&5 + rm -fr conftest* + ac_cv_sizeof_short=0 +fi +rm -fr conftest* +fi + + +fi +echo "$ac_t""$ac_cv_sizeof_short" 1>&6 +if test x"$ac_cv_sizeof_short" = x"4" ; then + INN_INT32=short +else + : +fi + +fi + +fi + + +echo $ac_n "checking for int32_t""... 
$ac_c" 1>&6 +echo "configure:10158: checking for int32_t" >&5 +if eval "test \"`echo '$''{'ac_cv_type_int32_t'+set}'`\" = set"; then + echo $ac_n "(cached) $ac_c" 1>&6 +else + cat > conftest.$ac_ext < +#ifdef STDC_HEADERS +# include +# include +#endif +#ifdef HAVE_STDINT_H +# include +#endif +#ifdef HAVE_SYS_BITYPES_H +# include +#endif + +EOF +if (eval "$ac_cpp conftest.$ac_ext") 2>&5 | + egrep "(^|[^a-zA-Z_0-9])int32_t[^a-zA-Z_0-9]" >/dev/null 2>&1; then + rm -rf conftest* + ac_cv_type_int32_t=yes +else + rm -rf conftest* + ac_cv_type_int32_t=no + +fi +rm -f conftest* + +fi + +echo "$ac_t""$ac_cv_type_int32_t" 1>&6 +if test x"$ac_cv_type_int32_t" = xno ; then + cat >> confdefs.h <&6 +echo "configure:10201: checking for uint32_t" >&5 +if eval "test \"`echo '$''{'ac_cv_type_uint32_t'+set}'`\" = set"; then + echo $ac_n "(cached) $ac_c" 1>&6 +else + cat > conftest.$ac_ext < +#ifdef STDC_HEADERS +# include +# include +#endif +#ifdef HAVE_STDINT_H +# include +#endif +#ifdef HAVE_SYS_BITYPES_H +# include +#endif + +EOF +if (eval "$ac_cpp conftest.$ac_ext") 2>&5 | + egrep "(^|[^a-zA-Z_0-9])uint32_t[^a-zA-Z_0-9]" >/dev/null 2>&1; then + rm -rf conftest* + ac_cv_type_uint32_t=yes +else + rm -rf conftest* + ac_cv_type_uint32_t=no + +fi +rm -f conftest* + +fi + +echo "$ac_t""$ac_cv_type_uint32_t" 1>&6 +if test x"$ac_cv_type_uint32_t" = xno ; then + cat >> confdefs.h <&6 +echo "configure:10243: checking for 8-bit clean memcmp" >&5 +if eval "test \"`echo '$''{'ac_cv_func_memcmp_clean'+set}'`\" = set"; then + echo $ac_n "(cached) $ac_c" 1>&6 +else + if test "$cross_compiling" = yes; then + ac_cv_func_memcmp_clean=no +else + cat > conftest.$ac_ext <&5; (eval $ac_link) 2>&5; } && test -s conftest${ac_exeext} && (./conftest; exit) 2>/dev/null +then + ac_cv_func_memcmp_clean=yes +else + echo "configure: failed program was:" >&5 + cat conftest.$ac_ext >&5 + rm -fr conftest* + ac_cv_func_memcmp_clean=no +fi +rm -fr conftest* +fi + +fi + +echo "$ac_t""$ac_cv_func_memcmp_clean" 1>&6 +test $ac_cv_func_memcmp_clean = no && LIBOBJS="$LIBOBJS memcmp.${ac_objext}" + +echo $ac_n "checking return type of signal handlers""... $ac_c" 1>&6 +echo "configure:10279: checking return type of signal handlers" >&5 +if eval "test \"`echo '$''{'ac_cv_type_signal'+set}'`\" = set"; then + echo $ac_n "(cached) $ac_c" 1>&6 +else + cat > conftest.$ac_ext < +#include +#ifdef signal +#undef signal +#endif +#ifdef __cplusplus +extern "C" void (*signal (int, void (*)(int)))(int); +#else +void (*signal ()) (); +#endif + +int main() { +int i; +; return 0; } +EOF +if { (eval echo configure:10301: \"$ac_compile\") 1>&5; (eval $ac_compile) 2>&5; }; then + rm -rf conftest* + ac_cv_type_signal=void +else + echo "configure: failed program was:" >&5 + cat conftest.$ac_ext >&5 + rm -rf conftest* + ac_cv_type_signal=int +fi +rm -f conftest* +fi + +echo "$ac_t""$ac_cv_type_signal" 1>&6 +cat >> confdefs.h <&6 +echo "configure:10324: checking for working inet_ntoa" >&5 +if eval "test \"`echo '$''{'inn_cv_func_inet_ntoa_works'+set}'`\" = set"; then + echo $ac_n "(cached) $ac_c" 1>&6 +else + if test "$cross_compiling" = yes; then + inn_cv_func_inet_ntoa_works=no +else + cat > conftest.$ac_ext < +#include +#include +#include +#if STDC_HEADERS || HAVE_STRING_H +# include +#endif + +int +main () +{ + struct in_addr in; + in.s_addr = htonl (0x7f000000L); + return (!strcmp (inet_ntoa (in), "127.0.0.0") ? 
0 : 1); +} +EOF +if { (eval echo configure:10350: \"$ac_link\") 1>&5; (eval $ac_link) 2>&5; } && test -s conftest${ac_exeext} && (./conftest; exit) 2>/dev/null +then + inn_cv_func_inet_ntoa_works=yes +else + echo "configure: failed program was:" >&5 + cat conftest.$ac_ext >&5 + rm -fr conftest* + inn_cv_func_inet_ntoa_works=no +fi +rm -fr conftest* +fi + +fi + +echo "$ac_t""$inn_cv_func_inet_ntoa_works" 1>&6 +if test "$inn_cv_func_inet_ntoa_works" = yes ; then + cat >> confdefs.h <<\EOF +#define HAVE_INET_NTOA 1 +EOF + +else + LIBOBJS="$LIBOBJS inet_ntoa.${ac_objext}" +fi + + +echo $ac_n "checking whether struct sockaddr has sa_len""... $ac_c" 1>&6 +echo "configure:10376: checking whether struct sockaddr has sa_len" >&5 +if eval "test \"`echo '$''{'inn_cv_struct_sockaddr_sa_len'+set}'`\" = set"; then + echo $ac_n "(cached) $ac_c" 1>&6 +else + cat > conftest.$ac_ext < + #include + #include +int main() { +struct sockaddr sa; int x = sa.sa_len; +; return 0; } +EOF +if { (eval echo configure:10390: \"$ac_compile\") 1>&5; (eval $ac_compile) 2>&5; }; then + rm -rf conftest* + inn_cv_struct_sockaddr_sa_len=yes +else + echo "configure: failed program was:" >&5 + cat conftest.$ac_ext >&5 + rm -rf conftest* + inn_cv_struct_sockaddr_sa_len=no +fi +rm -f conftest* +fi + +echo "$ac_t""$inn_cv_struct_sockaddr_sa_len" 1>&6 +if test "$inn_cv_struct_sockaddr_sa_len" = yes ; then + cat >> confdefs.h <<\EOF +#define HAVE_SOCKADDR_LEN 1 +EOF + +fi + + +echo $ac_n "checking for SA_LEN(s) macro""... $ac_c" 1>&6 +echo "configure:10412: checking for SA_LEN(s) macro" >&5 +if eval "test \"`echo '$''{'inn_cv_sa_len_macro'+set}'`\" = set"; then + echo $ac_n "(cached) $ac_c" 1>&6 +else + cat > conftest.$ac_ext < + #include + #include +int main() { +struct sockaddr sa; int x = SA_LEN(&sa); +; return 0; } +EOF +if { (eval echo configure:10426: \"$ac_link\") 1>&5; (eval $ac_link) 2>&5; } && test -s conftest${ac_exeext}; then + rm -rf conftest* + inn_cv_sa_len_macro=yes +else + echo "configure: failed program was:" >&5 + cat conftest.$ac_ext >&5 + rm -rf conftest* + inn_cv_sa_len_macro=no +fi +rm -f conftest* +fi + +echo "$ac_t""$inn_cv_sa_len_macro" 1>&6 +if test "$inn_cv_sa_len_macro" = yes ; then + cat >> confdefs.h <<\EOF +#define HAVE_SA_LEN_MACRO 1 +EOF + +fi + + + + +echo $ac_n "checking for struct sockaddr_storage""... $ac_c" 1>&6 +echo "configure:10450: checking for struct sockaddr_storage" >&5 +if eval "test \"`echo '$''{'inn_cv_struct_sockaddr_storage'+set}'`\" = set"; then + echo $ac_n "(cached) $ac_c" 1>&6 +else + cat > conftest.$ac_ext < + #include + #include +int main() { +struct sockaddr_storage ss; +; return 0; } +EOF +if { (eval echo configure:10464: \"$ac_compile\") 1>&5; (eval $ac_compile) 2>&5; }; then + rm -rf conftest* + inn_cv_struct_sockaddr_storage=yes +else + echo "configure: failed program was:" >&5 + cat conftest.$ac_ext >&5 + rm -rf conftest* + inn_cv_struct_sockaddr_storage=no +fi +rm -f conftest* +fi + +echo "$ac_t""$inn_cv_struct_sockaddr_storage" 1>&6 +if test "$inn_cv_struct_sockaddr_storage" = yes ; then + cat >> confdefs.h <<\EOF +#define HAVE_SOCKADDR_STORAGE 1 +EOF + + echo $ac_n "checking for RFC 2553 style sockaddr_storage member names""... 
$ac_c" 1>&6 +echo "configure:10483: checking for RFC 2553 style sockaddr_storage member names" >&5 +if eval "test \"`echo '$''{'inn_cv_2553_ss_family'+set}'`\" = set"; then + echo $ac_n "(cached) $ac_c" 1>&6 +else + cat > conftest.$ac_ext < + #include + #include +int main() { +struct sockaddr_storage ss; int x=ss.ss_family; +; return 0; } +EOF +if { (eval echo configure:10497: \"$ac_compile\") 1>&5; (eval $ac_compile) 2>&5; }; then + rm -rf conftest* + inn_cv_2553_ss_family=no +else + echo "configure: failed program was:" >&5 + cat conftest.$ac_ext >&5 + rm -rf conftest* + inn_cv_2553_ss_family=yes +fi +rm -f conftest* +fi + +echo "$ac_t""$inn_cv_2553_ss_family" 1>&6 +if test "$inn_cv_2553_ss_family" = yes ; then + cat >> confdefs.h <<\EOF +#define HAVE_2553_STYLE_SS_FAMILY 1 +EOF + +fi +fi + + + + +if test "$inn_enable_ipv6_tests" = yes ; then + echo $ac_n "checking whether IN6_ARE_ADDR_EQUAL macro is broken""... $ac_c" 1>&6 +echo "configure:10523: checking whether IN6_ARE_ADDR_EQUAL macro is broken" >&5 +if eval "test \"`echo '$''{'inn_cv_in6_are_addr_equal_broken'+set}'`\" = set"; then + echo $ac_n "(cached) $ac_c" 1>&6 +else + if test "$cross_compiling" = yes; then + inn_cv_in6_are_addr_equal_broken=no +else + cat > conftest.$ac_ext < +#include +#include +#include + +int +main () +{ + struct in6_addr a; + struct in6_addr b; + + inet_pton(AF_INET6,"fe80::1234:5678:abcd",&a); + inet_pton(AF_INET6,"fe80::1234:5678:abcd",&b); + return IN6_ARE_ADDR_EQUAL(&a,&b) ? 0 : 1; +} +EOF +if { (eval echo configure:10549: \"$ac_link\") 1>&5; (eval $ac_link) 2>&5; } && test -s conftest${ac_exeext} && (./conftest; exit) 2>/dev/null +then + inn_cv_in6_are_addr_equal_broken=no +else + echo "configure: failed program was:" >&5 + cat conftest.$ac_ext >&5 + rm -fr conftest* + inn_cv_in6_are_addr_equal_broken=yes +fi +rm -fr conftest* +fi + +fi + +echo "$ac_t""$inn_cv_in6_are_addr_equal_broken" 1>&6 +if test "$inn_cv_in6_are_addr_equal_broken" = yes ; then + cat >> confdefs.h <<\EOF +#define HAVE_BROKEN_IN6_ARE_ADDR_EQUAL 1 +EOF + +fi +fi + + + + +echo $ac_n "checking for working snprintf""... $ac_c" 1>&6 +echo "configure:10576: checking for working snprintf" >&5 +if eval "test \"`echo '$''{'inn_cv_func_snprintf_works'+set}'`\" = set"; then + echo $ac_n "(cached) $ac_c" 1>&6 +else + if test "$cross_compiling" = yes; then + inn_cv_func_snprintf_works=no +else + cat > conftest.$ac_ext < +#include + +char buf[2]; + +int +test (char *format, ...) +{ + va_list args; + int count; + + va_start (args, format); + count = vsnprintf (buf, sizeof buf, format, args); + va_end (args); + return count; +} + +int +main () +{ + return ((test ("%s", "abcd") == 4 && buf[0] == 'a' && buf[1] == '\0' + && snprintf(NULL, 0, "%s", "abcd") == 4) ? 0 : 1); +} +EOF +if { (eval echo configure:10610: \"$ac_link\") 1>&5; (eval $ac_link) 2>&5; } && test -s conftest${ac_exeext} && (./conftest; exit) 2>/dev/null +then + inn_cv_func_snprintf_works=yes +else + echo "configure: failed program was:" >&5 + cat conftest.$ac_ext >&5 + rm -fr conftest* + inn_cv_func_snprintf_works=no +fi +rm -fr conftest* +fi + +fi + +echo "$ac_t""$inn_cv_func_snprintf_works" 1>&6 +if test "$inn_cv_func_snprintf_works" = yes ; then + cat >> confdefs.h <<\EOF +#define HAVE_SNPRINTF 1 +EOF + +else + LIBOBJS="$LIBOBJS snprintf.${ac_objext}" +fi + +for ac_func in atexit getloadavg getrlimit getrusage getspnam setbuffer \ + sigaction setgroups setrlimit setsid socketpair statvfs \ + strncasecmp strtoul symlink sysconf +do +echo $ac_n "checking for $ac_func""... 
$ac_c" 1>&6 +echo "configure:10639: checking for $ac_func" >&5 +if eval "test \"`echo '$''{'ac_cv_func_$ac_func'+set}'`\" = set"; then + echo $ac_n "(cached) $ac_c" 1>&6 +else + cat > conftest.$ac_ext < +/* Override any gcc2 internal prototype to avoid an error. */ +/* We use char because int might match the return type of a gcc2 + builtin and then its argument prototype would still apply. */ +char $ac_func(); + +int main() { + +/* The GNU C library defines this for functions which it implements + to always fail with ENOSYS. Some functions are actually named + something starting with __ and the normal name is an alias. */ +#if defined (__stub_$ac_func) || defined (__stub___$ac_func) +choke me +#else +$ac_func(); +#endif + +; return 0; } +EOF +if { (eval echo configure:10667: \"$ac_link\") 1>&5; (eval $ac_link) 2>&5; } && test -s conftest${ac_exeext}; then + rm -rf conftest* + eval "ac_cv_func_$ac_func=yes" +else + echo "configure: failed program was:" >&5 + cat conftest.$ac_ext >&5 + rm -rf conftest* + eval "ac_cv_func_$ac_func=no" +fi +rm -f conftest* +fi + +if eval "test \"`echo '$ac_cv_func_'$ac_func`\" = yes"; then + echo "$ac_t""yes" 1>&6 + ac_tr_func=HAVE_`echo $ac_func | tr 'abcdefghijklmnopqrstuvwxyz' 'ABCDEFGHIJKLMNOPQRSTUVWXYZ'` + cat >> confdefs.h <&6 +fi +done + + +if test x"$ac_cv_func_getrlimit" = xno ; then + for ac_func in getdtablesize ulimit +do +echo $ac_n "checking for $ac_func""... $ac_c" 1>&6 +echo "configure:10696: checking for $ac_func" >&5 +if eval "test \"`echo '$''{'ac_cv_func_$ac_func'+set}'`\" = set"; then + echo $ac_n "(cached) $ac_c" 1>&6 +else + cat > conftest.$ac_ext < +/* Override any gcc2 internal prototype to avoid an error. */ +/* We use char because int might match the return type of a gcc2 + builtin and then its argument prototype would still apply. */ +char $ac_func(); + +int main() { + +/* The GNU C library defines this for functions which it implements + to always fail with ENOSYS. Some functions are actually named + something starting with __ and the normal name is an alias. */ +#if defined (__stub_$ac_func) || defined (__stub___$ac_func) +choke me +#else +$ac_func(); +#endif + +; return 0; } +EOF +if { (eval echo configure:10724: \"$ac_link\") 1>&5; (eval $ac_link) 2>&5; } && test -s conftest${ac_exeext}; then + rm -rf conftest* + eval "ac_cv_func_$ac_func=yes" +else + echo "configure: failed program was:" >&5 + cat conftest.$ac_ext >&5 + rm -rf conftest* + eval "ac_cv_func_$ac_func=no" +fi +rm -f conftest* +fi + +if eval "test \"`echo '$ac_cv_func_'$ac_func`\" = yes"; then + echo "$ac_t""yes" 1>&6 + ac_tr_func=HAVE_`echo $ac_func | tr 'abcdefghijklmnopqrstuvwxyz' 'ABCDEFGHIJKLMNOPQRSTUVWXYZ'` + cat >> confdefs.h <&6 +fi +done + +fi + +if test x"$ac_cv_func_statvfs" = xno ; then + for ac_func in statfs +do +echo $ac_n "checking for $ac_func""... $ac_c" 1>&6 +echo "configure:10754: checking for $ac_func" >&5 +if eval "test \"`echo '$''{'ac_cv_func_$ac_func'+set}'`\" = set"; then + echo $ac_n "(cached) $ac_c" 1>&6 +else + cat > conftest.$ac_ext < +/* Override any gcc2 internal prototype to avoid an error. */ +/* We use char because int might match the return type of a gcc2 + builtin and then its argument prototype would still apply. */ +char $ac_func(); + +int main() { + +/* The GNU C library defines this for functions which it implements + to always fail with ENOSYS. Some functions are actually named + something starting with __ and the normal name is an alias. 
*/ +#if defined (__stub_$ac_func) || defined (__stub___$ac_func) +choke me +#else +$ac_func(); +#endif + +; return 0; } +EOF +if { (eval echo configure:10782: \"$ac_link\") 1>&5; (eval $ac_link) 2>&5; } && test -s conftest${ac_exeext}; then + rm -rf conftest* + eval "ac_cv_func_$ac_func=yes" +else + echo "configure: failed program was:" >&5 + cat conftest.$ac_ext >&5 + rm -rf conftest* + eval "ac_cv_func_$ac_func=no" +fi +rm -f conftest* +fi + +if eval "test \"`echo '$ac_cv_func_'$ac_func`\" = yes"; then + echo "$ac_t""yes" 1>&6 + ac_tr_func=HAVE_`echo $ac_func | tr 'abcdefghijklmnopqrstuvwxyz' 'ABCDEFGHIJKLMNOPQRSTUVWXYZ'` + cat >> confdefs.h <&6 +fi +done + + for ac_hdr in sys/vfs.h sys/mount.h +do +ac_safe=`echo "$ac_hdr" | sed 'y%./+-%__p_%'` +echo $ac_n "checking for $ac_hdr""... $ac_c" 1>&6 +echo "configure:10810: checking for $ac_hdr" >&5 +if eval "test \"`echo '$''{'ac_cv_header_$ac_safe'+set}'`\" = set"; then + echo $ac_n "(cached) $ac_c" 1>&6 +else + cat > conftest.$ac_ext < +EOF +ac_try="$ac_cpp conftest.$ac_ext >/dev/null 2>conftest.out" +{ (eval echo configure:10820: \"$ac_try\") 1>&5; (eval $ac_try) 2>&5; } +ac_err=`grep -v '^ *+' conftest.out | grep -v "^conftest.${ac_ext}\$"` +if test -z "$ac_err"; then + rm -rf conftest* + eval "ac_cv_header_$ac_safe=yes" +else + echo "$ac_err" >&5 + echo "configure: failed program was:" >&5 + cat conftest.$ac_ext >&5 + rm -rf conftest* + eval "ac_cv_header_$ac_safe=no" +fi +rm -f conftest* +fi +if eval "test \"`echo '$ac_cv_header_'$ac_safe`\" = yes"; then + echo "$ac_t""yes" 1>&6 + ac_tr_hdr=HAVE_`echo $ac_hdr | sed 'y%abcdefghijklmnopqrstuvwxyz./-%ABCDEFGHIJKLMNOPQRSTUVWXYZ___%'` + cat >> confdefs.h <&6 +fi +done + +fi + +for ac_func in fseeko ftello getpagesize hstrerror inet_aton mkstemp \ + pread pwrite seteuid strcasecmp strerror strlcat strlcpy \ + strspn setenv +do +echo $ac_n "checking for $ac_func""... $ac_c" 1>&6 +echo "configure:10853: checking for $ac_func" >&5 +if eval "test \"`echo '$''{'ac_cv_func_$ac_func'+set}'`\" = set"; then + echo $ac_n "(cached) $ac_c" 1>&6 +else + cat > conftest.$ac_ext < +/* Override any gcc2 internal prototype to avoid an error. */ +/* We use char because int might match the return type of a gcc2 + builtin and then its argument prototype would still apply. */ +char $ac_func(); + +int main() { + +/* The GNU C library defines this for functions which it implements + to always fail with ENOSYS. Some functions are actually named + something starting with __ and the normal name is an alias. */ +#if defined (__stub_$ac_func) || defined (__stub___$ac_func) +choke me +#else +$ac_func(); +#endif + +; return 0; } +EOF +if { (eval echo configure:10881: \"$ac_link\") 1>&5; (eval $ac_link) 2>&5; } && test -s conftest${ac_exeext}; then + rm -rf conftest* + eval "ac_cv_func_$ac_func=yes" +else + echo "configure: failed program was:" >&5 + cat conftest.$ac_ext >&5 + rm -rf conftest* + eval "ac_cv_func_$ac_func=no" +fi +rm -f conftest* +fi + +if eval "test \"`echo '$ac_cv_func_'$ac_func`\" = yes"; then + echo "$ac_t""yes" 1>&6 + ac_tr_func=HAVE_`echo $ac_func | tr 'abcdefghijklmnopqrstuvwxyz' 'ABCDEFGHIJKLMNOPQRSTUVWXYZ'` + cat >> confdefs.h <&6 +LIBOBJS="$LIBOBJS ${ac_func}.${ac_objext}" +fi +done + + + + + + + +if test "$ac_cv_func_fseeko" = no || test "$ac_cv_func_ftello" = no ; then + echo $ac_n "checking for off_t-compatible fpos_t""... 
$ac_c" 1>&6 +echo "configure:10914: checking for off_t-compatible fpos_t" >&5 +if eval "test \"`echo '$''{'inn_cv_type_fpos_t_large'+set}'`\" = set"; then + echo $ac_n "(cached) $ac_c" 1>&6 +else + if test "$cross_compiling" = yes; then + inn_cv_type_fpos_t_large=no +else + cat > conftest.$ac_ext < +#include + +int +main () +{ + fpos_t fpos = 9223372036854775807ULL; + off_t off; + off = fpos; + exit(off == (off_t) 9223372036854775807ULL ? 0 : 1); +} +EOF +if { (eval echo configure:10936: \"$ac_link\") 1>&5; (eval $ac_link) 2>&5; } && test -s conftest${ac_exeext} && (./conftest; exit) 2>/dev/null +then + inn_cv_type_fpos_t_large=yes +else + echo "configure: failed program was:" >&5 + cat conftest.$ac_ext >&5 + rm -fr conftest* + inn_cv_type_fpos_t_large=no +fi +rm -fr conftest* +fi + +if test "$inn_cv_type_fpos_t_large" = yes ; then + cat >> confdefs.h <<\EOF +#define HAVE_LARGE_FPOS_T 1 +EOF + +fi +fi + +echo "$ac_t""$inn_cv_type_fpos_t_large" 1>&6 +fi + + + + + + + + + + + + + + + + +echo $ac_n "checking for working mmap""... $ac_c" 1>&6 +echo "configure:10975: checking for working mmap" >&5 +if eval "test \"`echo '$''{'inn_cv_func_mmap'+set}'`\" = set"; then + echo $ac_n "(cached) $ac_c" 1>&6 +else + if test "$cross_compiling" = yes; then + inn_cv_func_mmap=no +else + cat > conftest.$ac_ext < +#include +#if STDC_HEADERS +# include +# include +#else +# if HAVE_STDLIB_H +# include +# endif +# if !HAVE_STRCHR +# define strchr index +# define strrchr rindex +# endif +#endif +#if HAVE_STRING_H +# if !STDC_HEADERS && HAVE_MEMORY_H +# include +# endif +# include +#else +# if HAVE_STRINGS_H +# include +# endif +#endif +#if HAVE_UNISTD_H +# include +#endif +#include +#include + +int +main() +{ + int *data, *data2; + int i, fd; + + /* First, make a file with some known garbage in it. Use something + larger than one page but still an odd page size. */ + data = malloc (20000); + if (!data) return 1; + for (i = 0; i < 20000 / sizeof (int); i++) + data[i] = rand(); + umask (0); + fd = creat ("conftestmmaps", 0600); + if (fd < 0) return 1; + if (write (fd, data, 20000) != 20000) return 1; + close (fd); + + /* Next, try to mmap the file and make sure we see the same garbage. */ + fd = open ("conftestmmaps", O_RDWR); + if (fd < 0) return 1; + data2 = mmap (0, 20000, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0); + if (data2 == (int *) -1) return 1; + for (i = 0; i < 20000 / sizeof (int); i++) + if (data[i] != data2[i]) + return 1; + + close (fd); + unlink ("conftestmmaps"); + return 0; +} +EOF +if { (eval echo configure:11047: \"$ac_link\") 1>&5; (eval $ac_link) 2>&5; } && test -s conftest${ac_exeext} && (./conftest; exit) 2>/dev/null +then + inn_cv_func_mmap=yes +else + echo "configure: failed program was:" >&5 + cat conftest.$ac_ext >&5 + rm -fr conftest* + inn_cv_func_mmap=no +fi +rm -fr conftest* +fi + +fi + +echo "$ac_t""$inn_cv_func_mmap" 1>&6 +if test $inn_cv_func_mmap = yes ; then + cat >> confdefs.h <<\EOF +#define HAVE_MMAP 1 +EOF + +fi +if test x"$inn_cv_func_mmap" = xyes ; then + for ac_func in madvise +do +echo $ac_n "checking for $ac_func""... $ac_c" 1>&6 +echo "configure:11072: checking for $ac_func" >&5 +if eval "test \"`echo '$''{'ac_cv_func_$ac_func'+set}'`\" = set"; then + echo $ac_n "(cached) $ac_c" 1>&6 +else + cat > conftest.$ac_ext < +/* Override any gcc2 internal prototype to avoid an error. */ +/* We use char because int might match the return type of a gcc2 + builtin and then its argument prototype would still apply. 
*/ +char $ac_func(); + +int main() { + +/* The GNU C library defines this for functions which it implements + to always fail with ENOSYS. Some functions are actually named + something starting with __ and the normal name is an alias. */ +#if defined (__stub_$ac_func) || defined (__stub___$ac_func) +choke me +#else +$ac_func(); +#endif + +; return 0; } +EOF +if { (eval echo configure:11100: \"$ac_link\") 1>&5; (eval $ac_link) 2>&5; } && test -s conftest${ac_exeext}; then + rm -rf conftest* + eval "ac_cv_func_$ac_func=yes" +else + echo "configure: failed program was:" >&5 + cat conftest.$ac_ext >&5 + rm -rf conftest* + eval "ac_cv_func_$ac_func=no" +fi +rm -f conftest* +fi + +if eval "test \"`echo '$ac_cv_func_'$ac_func`\" = yes"; then + echo "$ac_t""yes" 1>&6 + ac_tr_func=HAVE_`echo $ac_func | tr 'abcdefghijklmnopqrstuvwxyz' 'ABCDEFGHIJKLMNOPQRSTUVWXYZ'` + cat >> confdefs.h <&6 +fi +done + + echo $ac_n "checking whether mmap sees writes""... $ac_c" 1>&6 +echo "configure:11125: checking whether mmap sees writes" >&5 +if eval "test \"`echo '$''{'inn_cv_func_mmap_sees_writes'+set}'`\" = set"; then + echo $ac_n "(cached) $ac_c" 1>&6 +else + if test "$cross_compiling" = yes; then + inn_cv_func_mmap_sees_writes=no +else + cat > conftest.$ac_ext < +#include +#include +#include +#if HAVE_UNISTD_H +# include +#endif +#include + +/* Fractional page is probably worst case. */ +static char zbuff[1024]; +static char fname[] = "conftestw"; + +int +main () +{ + char *map; + int i, fd; + + fd = open (fname, O_RDWR | O_CREAT, 0660); + if (fd < 0) return 1; + unlink (fname); + write (fd, zbuff, sizeof (zbuff)); + lseek (fd, 0, SEEK_SET); + map = mmap (0, sizeof (zbuff), PROT_READ, MAP_SHARED, fd, 0); + if (map == (char *) -1) return 2; + for (i = 0; fname[i]; i++) + { + if (write (fd, &fname[i], 1) != 1) return 3; + if (map[i] != fname[i]) return 4; + } + return 0; +} +EOF +if { (eval echo configure:11169: \"$ac_link\") 1>&5; (eval $ac_link) 2>&5; } && test -s conftest${ac_exeext} && (./conftest; exit) 2>/dev/null +then + inn_cv_func_mmap_sees_writes=yes +else + echo "configure: failed program was:" >&5 + cat conftest.$ac_ext >&5 + rm -fr conftest* + inn_cv_func_mmap_sees_writes=no +fi +rm -fr conftest* +fi + +fi + +echo "$ac_t""$inn_cv_func_mmap_sees_writes" 1>&6 +if test $inn_cv_func_mmap_sees_writes = no ; then + cat >> confdefs.h <<\EOF +#define MMAP_MISSES_WRITES 1 +EOF + +fi + echo $ac_n "checking whether msync is needed""... $ac_c" 1>&6 +echo "configure:11191: checking whether msync is needed" >&5 +if eval "test \"`echo '$''{'inn_cv_func_mmap_need_msync'+set}'`\" = set"; then + echo $ac_n "(cached) $ac_c" 1>&6 +else + if test "$cross_compiling" = yes; then + inn_cv_func_mmap_need_msync=yes +else + cat > conftest.$ac_ext < +#include +#if STDC_HEADERS +# include +# include +#else +# if HAVE_STDLIB_H +# include +# endif +# if !HAVE_STRCHR +# define strchr index +# define strrchr rindex +# endif +#endif +#if HAVE_STRING_H +# if !STDC_HEADERS && HAVE_MEMORY_H +# include +# endif +# include +#else +# if HAVE_STRINGS_H +# include +# endif +#endif +#if HAVE_UNISTD_H +# include +#endif +#include +#include +#include + +int +main() +{ + int *data, *data2; + int i, fd; + + /* First, make a file with some known garbage in it. Use something + larger than one page but still an odd page size. 
*/ + data = malloc (20000); + if (!data) return 1; + for (i = 0; i < 20000 / sizeof (int); i++) + data[i] = rand(); + umask (0); + fd = creat ("conftestmmaps", 0600); + if (fd < 0) return 1; + if (write (fd, data, 20000) != 20000) return 1; + close (fd); + + /* Next, try to mmap the file and make sure we see the same garbage. */ + fd = open ("conftestmmaps", O_RDWR); + if (fd < 0) return 1; + data2 = mmap (0, 20000, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0); + if (data2 == (int *) -1) return 1; + + /* Finally, see if changes made to the mmaped region propagate back to + the file as seen by read (meaning that msync isn't needed). */ + for (i = 0; i < 20000 / sizeof (int); i++) + data2[i]++; + if (read (fd, data, 20000) != 20000) return 1; + for (i = 0; i < 20000 / sizeof (int); i++) + if (data[i] != data2[i]) + return 1; + + close (fd); + unlink ("conftestmmapm"); + return 0; +} +EOF +if { (eval echo configure:11270: \"$ac_link\") 1>&5; (eval $ac_link) 2>&5; } && test -s conftest${ac_exeext} && (./conftest; exit) 2>/dev/null +then + inn_cv_func_mmap_need_msync=no +else + echo "configure: failed program was:" >&5 + cat conftest.$ac_ext >&5 + rm -fr conftest* + inn_cv_func_mmap_need_msync=yes +fi +rm -fr conftest* +fi + +fi + +echo "$ac_t""$inn_cv_func_mmap_need_msync" 1>&6 +if test $inn_cv_func_mmap_need_msync = yes ; then + cat >> confdefs.h <<\EOF +#define MMAP_NEEDS_MSYNC 1 +EOF + +fi + echo $ac_n "checking how many arguments msync takes""... $ac_c" 1>&6 +echo "configure:11292: checking how many arguments msync takes" >&5 +if eval "test \"`echo '$''{'inn_cv_func_msync_args'+set}'`\" = set"; then + echo $ac_n "(cached) $ac_c" 1>&6 +else + cat > conftest.$ac_ext < +#include +int main() { +char *p; int psize; msync (p, psize, MS_ASYNC); +; return 0; } +EOF +if { (eval echo configure:11305: \"$ac_compile\") 1>&5; (eval $ac_compile) 2>&5; }; then + rm -rf conftest* + inn_cv_func_msync_args=3 +else + echo "configure: failed program was:" >&5 + cat conftest.$ac_ext >&5 + rm -rf conftest* + inn_cv_func_msync_args=2 +fi +rm -f conftest* +fi + +echo "$ac_t""$inn_cv_func_msync_args" 1>&6 +if test $inn_cv_func_msync_args = 3 ; then + cat >> confdefs.h <<\EOF +#define HAVE_MSYNC_3_ARG 1 +EOF + +fi +fi + + +echo $ac_n "checking for Unix domain sockets""... $ac_c" 1>&6 +echo "configure:11328: checking for Unix domain sockets" >&5 +if eval "test \"`echo '$''{'inn_cv_sys_unix_sockets'+set}'`\" = set"; then + echo $ac_n "(cached) $ac_c" 1>&6 +else + cat > conftest.$ac_ext < +#ifdef AF_UNIX +yes +#endif +EOF +if (eval "$ac_cpp conftest.$ac_ext") 2>&5 | + egrep "yes" >/dev/null 2>&1; then + rm -rf conftest* + inn_cv_sys_unix_sockets=yes +else + rm -rf conftest* + inn_cv_sys_unix_sockets=no +fi +rm -f conftest* + +fi + +echo "$ac_t""$inn_cv_sys_unix_sockets" 1>&6 +if test $inn_cv_sys_unix_sockets = yes ; then + cat >> confdefs.h <<\EOF +#define HAVE_UNIX_DOMAIN_SOCKETS 1 +EOF + +fi + + +echo $ac_n "checking log facility for news""... 
$ac_c" 1>&6 +echo "configure:11362: checking log facility for news" >&5 +if eval "test \"`echo '$''{'inn_cv_log_facility'+set}'`\" = set"; then + echo $ac_n "(cached) $ac_c" 1>&6 +else + cat > conftest.$ac_ext < +#ifdef LOG_NEWS +yes +#endif +EOF +if (eval "$ac_cpp conftest.$ac_ext") 2>&5 | + egrep "yes" >/dev/null 2>&1; then + rm -rf conftest* + inn_cv_log_facility=LOG_NEWS +else + rm -rf conftest* + inn_cv_log_facility=LOG_LOCAL1 +fi +rm -f conftest* + +fi + +if test x"$SYSLOG_FACILITY" = xnone ; then + SYSLOG_FACILITY=$inn_cv_log_facility +fi +echo "$ac_t""$SYSLOG_FACILITY" 1>&6 +cat >> confdefs.h <> confdefs.h < confcache <<\EOF +# This file is a shell script that caches the results of configure +# tests run on this system so they can be shared between configure +# scripts and configure runs. It is not useful on other systems. +# If it contains results you don't want to keep, you may remove or edit it. +# +# By default, configure uses ./config.cache as the cache file, +# creating it if it does not exist already. You can give configure +# the --cache-file=FILE option to use a different cache file; that is +# what configure does when it calls configure scripts in +# subdirectories, so they share the cache. +# Giving --cache-file=/dev/null disables caching, for debugging configure. +# config.status only pays attention to the cache file if you give it the +# --recheck option to rerun configure. +# +EOF +# The following way of writing the cache mishandles newlines in values, +# but we know of no workaround that is simple, portable, and efficient. +# So, don't put newlines in cache variables' values. +# Ultrix sh set writes to stderr and can't be redirected directly, +# and sets the high bit in the cache file unless we assign to the vars. +(set) 2>&1 | + case `(ac_space=' '; set | grep ac_space) 2>&1` in + *ac_space=\ *) + # `set' does not quote correctly, so add quotes (double-quote substitution + # turns \\\\ into \\, and sed turns \\ into \). + sed -n \ + -e "s/'/'\\\\''/g" \ + -e "s/^\\([a-zA-Z0-9_]*_cv_[a-zA-Z0-9_]*\\)=\\(.*\\)/\\1=\${\\1='\\2'}/p" + ;; + *) + # `set' quotes correctly as required by POSIX, so do not add quotes. + sed -n -e 's/^\([a-zA-Z0-9_]*_cv_[a-zA-Z0-9_]*\)=\(.*\)/\1=${\1=\2}/p' + ;; + esac >> confcache +if cmp -s $cache_file confcache; then + : +else + if test -w $cache_file; then + echo "updating cache $cache_file" + cat confcache > $cache_file + else + echo "not updating unwritable cache $cache_file" + fi +fi +rm -f confcache + +trap 'rm -fr conftest* confdefs* core core.* *.core $ac_clean_files; exit 1' 1 2 15 + +test "x$prefix" = xNONE && prefix=$ac_default_prefix +# Let make expand exec_prefix. +test "x$exec_prefix" = xNONE && exec_prefix='${prefix}' + +# Any assignment to VPATH causes Sun make to only execute +# the first set of double-colon rules, so remove it if not needed. +# If there is a colon in the path, we need to keep it. +if test "x$srcdir" = x.; then + ac_vpsub='/^[ ]*VPATH[ ]*=[^:]*$/d' +fi + +trap 'rm -f $CONFIG_STATUS conftest*; exit 1' 1 2 15 + +DEFS=-DHAVE_CONFIG_H + +# Without the "./", some shells look in PATH for config.status. +: ${CONFIG_STATUS=./config.status} + +echo creating $CONFIG_STATUS +rm -f $CONFIG_STATUS +cat > $CONFIG_STATUS </dev/null | sed 1q`: +# +# $0 $ac_configure_args +# +# Compiler output produced by configure, useful for debugging +# configure, is in ./config.log if it exists. 
+ +ac_cs_usage="Usage: $CONFIG_STATUS [--recheck] [--version] [--help]" +for ac_option +do + case "\$ac_option" in + -recheck | --recheck | --rechec | --reche | --rech | --rec | --re | --r) + echo "running \${CONFIG_SHELL-/bin/sh} $0 $ac_configure_args --no-create --no-recursion" + exec \${CONFIG_SHELL-/bin/sh} $0 $ac_configure_args --no-create --no-recursion ;; + -version | --version | --versio | --versi | --vers | --ver | --ve | --v) + echo "$CONFIG_STATUS generated by autoconf version 2.13" + exit 0 ;; + -help | --help | --hel | --he | --h) + echo "\$ac_cs_usage"; exit 0 ;; + *) echo "\$ac_cs_usage"; exit 1 ;; + esac +done + +ac_given_srcdir=$srcdir + +trap 'rm -fr `echo "Makefile.global + include/paths.h + samples/inn.conf + samples/innreport.conf + samples/newsfeeds + samples/sasl.conf + scripts/inncheck + scripts/innshellvars + scripts/innshellvars.pl + scripts/innshellvars.tcl + scripts/news.daily + support/fixscript + include/config.h" | sed "s/:[^ ]*//g"` conftest*; exit 1' 1 2 15 +EOF +cat >> $CONFIG_STATUS < conftest.subs <<\\CEOF +$ac_vpsub +$extrasub +s%@SHELL@%$SHELL%g +s%@CFLAGS@%$CFLAGS%g +s%@CPPFLAGS@%$CPPFLAGS%g +s%@CXXFLAGS@%$CXXFLAGS%g +s%@FFLAGS@%$FFLAGS%g +s%@DEFS@%$DEFS%g +s%@LDFLAGS@%$LDFLAGS%g +s%@LIBS@%$LIBS%g +s%@exec_prefix@%$exec_prefix%g +s%@prefix@%$prefix%g +s%@program_transform_name@%$program_transform_name%g +s%@bindir@%$bindir%g +s%@sbindir@%$sbindir%g +s%@libexecdir@%$libexecdir%g +s%@datadir@%$datadir%g +s%@sysconfdir@%$sysconfdir%g +s%@sharedstatedir@%$sharedstatedir%g +s%@localstatedir@%$localstatedir%g +s%@libdir@%$libdir%g +s%@includedir@%$includedir%g +s%@oldincludedir@%$oldincludedir%g +s%@infodir@%$infodir%g +s%@mandir@%$mandir%g +s%@builddir@%$builddir%g +s%@CC@%$CC%g +s%@CPP@%$CPP%g +s%@OBJEXT@%$OBJEXT%g +s%@host@%$host%g +s%@host_alias@%$host_alias%g +s%@host_cpu@%$host_cpu%g +s%@host_vendor@%$host_vendor%g +s%@host_os@%$host_os%g +s%@build@%$build%g +s%@build_alias@%$build_alias%g +s%@build_cpu@%$build_cpu%g +s%@build_vendor@%$build_vendor%g +s%@build_os@%$build_os%g +s%@LN_S@%$LN_S%g +s%@EXEEXT@%$EXEEXT%g +s%@ECHO@%$ECHO%g +s%@RANLIB@%$RANLIB%g +s%@STRIP@%$STRIP%g +s%@LIBTOOL@%$LIBTOOL%g +s%@EXTLIB@%$EXTLIB%g +s%@EXTOBJ@%$EXTOBJ%g +s%@LIBTOOLCC@%$LIBTOOLCC%g +s%@LIBTOOLLD@%$LIBTOOLLD%g +s%@CCOUTPUT@%$CCOUTPUT%g +s%@CONTROLDIR@%$CONTROLDIR%g +s%@DBDIR@%$DBDIR%g +s%@DOCDIR@%$DOCDIR%g +s%@ETCDIR@%$ETCDIR%g +s%@FILTERDIR@%$FILTERDIR%g +s%@LIBDIR@%$LIBDIR%g +s%@LOGDIR@%$LOGDIR%g +s%@RUNDIR@%$RUNDIR%g +s%@SPOOLDIR@%$SPOOLDIR%g +s%@tmpdir@%$tmpdir%g +s%@NEWSUSER@%$NEWSUSER%g +s%@NEWSGRP@%$NEWSGRP%g +s%@NEWSMASTER@%$NEWSMASTER%g +s%@NEWSUMASK@%$NEWSUMASK%g +s%@FILEMODE@%$FILEMODE%g +s%@DIRMODE@%$DIRMODE%g +s%@RUNDIRMODE@%$RUNDIRMODE%g +s%@INEWSMODE@%$INEWSMODE%g +s%@RNEWSGRP@%$RNEWSGRP%g +s%@RNEWSMODE@%$RNEWSMODE%g +s%@LOG_COMPRESS@%$LOG_COMPRESS%g +s%@LOG_COMPRESSEXT@%$LOG_COMPRESSEXT%g +s%@DO_DBZ_TAGGED_HASH@%$DO_DBZ_TAGGED_HASH%g +s%@HOSTNAME@%$HOSTNAME%g +s%@LEX@%$LEX%g +s%@LEXLIB@%$LEXLIB%g +s%@SET_MAKE@%$SET_MAKE%g +s%@YACC@%$YACC%g +s%@CTAGS@%$CTAGS%g +s%@_PATH_AWK@%$_PATH_AWK%g +s%@_PATH_EGREP@%$_PATH_EGREP%g +s%@_PATH_PERL@%$_PATH_PERL%g +s%@_PATH_SH@%$_PATH_SH%g +s%@_PATH_SED@%$_PATH_SED%g +s%@_PATH_SORT@%$_PATH_SORT%g +s%@_PATH_UUX@%$_PATH_UUX%g +s%@PATH_GPGV@%$PATH_GPGV%g +s%@_PATH_PGP@%$_PATH_PGP%g +s%@pgpverify@%$pgpverify%g +s%@GETFTP@%$GETFTP%g +s%@COMPRESS@%$COMPRESS%g +s%@GZIP@%$GZIP%g +s%@UNCOMPRESS@%$UNCOMPRESS%g +s%@SENDMAIL@%$SENDMAIL%g +s%@HAVE_UUSTAT@%$HAVE_UUSTAT%g +s%@_PATH_PYTHON@%$_PATH_PYTHON%g 
+s%@CRYPT_LIB@%$CRYPT_LIB%g +s%@SHADOW_LIB@%$SHADOW_LIB%g +s%@PAM_LIB@%$PAM_LIB%g +s%@REGEX_LIB@%$REGEX_LIB%g +s%@BERKELEY_DB_LDFLAGS@%$BERKELEY_DB_LDFLAGS%g +s%@BERKELEY_DB_CFLAGS@%$BERKELEY_DB_CFLAGS%g +s%@BERKELEY_DB_LIB@%$BERKELEY_DB_LIB%g +s%@DBM_LIB@%$DBM_LIB%g +s%@DBM_INC@%$DBM_INC%g +s%@SSL_BIN@%$SSL_BIN%g +s%@SSL_INC@%$SSL_INC%g +s%@SSL_LIB@%$SSL_LIB%g +s%@SASL_INC@%$SASL_INC%g +s%@SASL_LIB@%$SASL_LIB%g +s%@KRB5_AUTH@%$KRB5_AUTH%g +s%@KRB5_INC@%$KRB5_INC%g +s%@KRB5_LIB@%$KRB5_LIB%g +s%@PERL_INC@%$PERL_INC%g +s%@PERL_LIB@%$PERL_LIB%g +s%@PYTHON_LIB@%$PYTHON_LIB%g +s%@PYTHON_INC@%$PYTHON_INC%g +s%@GETCONF@%$GETCONF%g +s%@LFS_CFLAGS@%$LFS_CFLAGS%g +s%@LFS_LDFLAGS@%$LFS_LDFLAGS%g +s%@LFS_LIBS@%$LFS_LIBS%g +s%@LIBOBJS@%$LIBOBJS%g +s%@SYSLOG_FACILITY@%$SYSLOG_FACILITY%g + +CEOF +EOF + +cat >> $CONFIG_STATUS <<\EOF + +# Split the substitutions into bite-sized pieces for seds with +# small command number limits, like on Digital OSF/1 and HP-UX. +ac_max_sed_cmds=90 # Maximum number of lines to put in a sed script. +ac_file=1 # Number of current file. +ac_beg=1 # First line for current file. +ac_end=$ac_max_sed_cmds # Line after last line for current file. +ac_more_lines=: +ac_sed_cmds="" +while $ac_more_lines; do + if test $ac_beg -gt 1; then + sed "1,${ac_beg}d; ${ac_end}q" conftest.subs > conftest.s$ac_file + else + sed "${ac_end}q" conftest.subs > conftest.s$ac_file + fi + if test ! -s conftest.s$ac_file; then + ac_more_lines=false + rm -f conftest.s$ac_file + else + if test -z "$ac_sed_cmds"; then + ac_sed_cmds="sed -f conftest.s$ac_file" + else + ac_sed_cmds="$ac_sed_cmds | sed -f conftest.s$ac_file" + fi + ac_file=`expr $ac_file + 1` + ac_beg=$ac_end + ac_end=`expr $ac_end + $ac_max_sed_cmds` + fi +done +if test -z "$ac_sed_cmds"; then + ac_sed_cmds=cat +fi +EOF + +cat >> $CONFIG_STATUS <> $CONFIG_STATUS <<\EOF +for ac_file in .. $CONFIG_FILES; do if test "x$ac_file" != x..; then + # Support "outfile[:infile[:infile...]]", defaulting infile="outfile.in". + case "$ac_file" in + *:*) ac_file_in=`echo "$ac_file"|sed 's%[^:]*:%%'` + ac_file=`echo "$ac_file"|sed 's%:.*%%'` ;; + *) ac_file_in="${ac_file}.in" ;; + esac + + # Adjust a relative srcdir, top_srcdir, and INSTALL for subdirectories. + + # Remove last slash and all that follows it. Not all systems have dirname. + ac_dir=`echo $ac_file|sed 's%/[^/][^/]*$%%'` + if test "$ac_dir" != "$ac_file" && test "$ac_dir" != .; then + # The file is in a subdirectory. + test ! -d "$ac_dir" && mkdir "$ac_dir" + ac_dir_suffix="/`echo $ac_dir|sed 's%^\./%%'`" + # A "../" for each directory in $ac_dir_suffix. + ac_dots=`echo $ac_dir_suffix|sed 's%/[^/]*%../%g'` + else + ac_dir_suffix= ac_dots= + fi + + case "$ac_given_srcdir" in + .) srcdir=. + if test -z "$ac_dots"; then top_srcdir=. + else top_srcdir=`echo $ac_dots|sed 's%/$%%'`; fi ;; + /*) srcdir="$ac_given_srcdir$ac_dir_suffix"; top_srcdir="$ac_given_srcdir" ;; + *) # Relative path. + srcdir="$ac_dots$ac_given_srcdir$ac_dir_suffix" + top_srcdir="$ac_dots$ac_given_srcdir" ;; + esac + + + echo creating "$ac_file" + rm -f "$ac_file" + configure_input="Generated automatically from `echo $ac_file_in|sed 's%.*/%%'` by configure." 
+ case "$ac_file" in + *Makefile*) ac_comsub="1i\\ +# $configure_input" ;; + *) ac_comsub= ;; + esac + + ac_file_inputs=`echo $ac_file_in|sed -e "s%^%$ac_given_srcdir/%" -e "s%:% $ac_given_srcdir/%g"` + sed -e "$ac_comsub +s%@configure_input@%$configure_input%g +s%@srcdir@%$srcdir%g +s%@top_srcdir@%$top_srcdir%g +" $ac_file_inputs | (eval "$ac_sed_cmds") > $ac_file +fi; done +rm -f conftest.s* + +# These sed commands are passed to sed as "A NAME B NAME C VALUE D", where +# NAME is the cpp macro being defined and VALUE is the value it is being given. +# +# ac_d sets the value in "#define NAME VALUE" lines. +ac_dA='s%^\([ ]*\)#\([ ]*define[ ][ ]*\)' +ac_dB='\([ ][ ]*\)[^ ]*%\1#\2' +ac_dC='\3' +ac_dD='%g' +# ac_u turns "#undef NAME" with trailing blanks into "#define NAME VALUE". +ac_uA='s%^\([ ]*\)#\([ ]*\)undef\([ ][ ]*\)' +ac_uB='\([ ]\)%\1#\2define\3' +ac_uC=' ' +ac_uD='\4%g' +# ac_e turns "#undef NAME" without trailing blanks into "#define NAME VALUE". +ac_eA='s%^\([ ]*\)#\([ ]*\)undef\([ ][ ]*\)' +ac_eB='$%\1#\2define\3' +ac_eC=' ' +ac_eD='%g' + +if test "${CONFIG_HEADERS+set}" != set; then +EOF +cat >> $CONFIG_STATUS <> $CONFIG_STATUS <<\EOF +fi +for ac_file in .. $CONFIG_HEADERS; do if test "x$ac_file" != x..; then + # Support "outfile[:infile[:infile...]]", defaulting infile="outfile.in". + case "$ac_file" in + *:*) ac_file_in=`echo "$ac_file"|sed 's%[^:]*:%%'` + ac_file=`echo "$ac_file"|sed 's%:.*%%'` ;; + *) ac_file_in="${ac_file}.in" ;; + esac + + echo creating $ac_file + + rm -f conftest.frag conftest.in conftest.out + ac_file_inputs=`echo $ac_file_in|sed -e "s%^%$ac_given_srcdir/%" -e "s%:% $ac_given_srcdir/%g"` + cat $ac_file_inputs > conftest.in + +EOF + +# Transform confdefs.h into a sed script conftest.vals that substitutes +# the proper values into config.h.in to produce config.h. And first: +# Protect against being on the right side of a sed subst in config.status. +# Protect against being in an unquoted here document in config.status. +rm -f conftest.vals +cat > conftest.hdr <<\EOF +s/[\\&%]/\\&/g +s%[\\$`]%\\&%g +s%#define \([A-Za-z_][A-Za-z0-9_]*\) *\(.*\)%${ac_dA}\1${ac_dB}\1${ac_dC}\2${ac_dD}%gp +s%ac_d%ac_u%gp +s%ac_u%ac_e%gp +EOF +sed -n -f conftest.hdr confdefs.h > conftest.vals +rm -f conftest.hdr + +# This sed command replaces #undef with comments. This is necessary, for +# example, in the case of _POSIX_SOURCE, which is predefined and required +# on some systems where configure will not decide to define it. +cat >> conftest.vals <<\EOF +s%^[ ]*#[ ]*undef[ ][ ]*[a-zA-Z_][a-zA-Z_0-9]*%/* & */% +EOF + +# Break up conftest.vals because some shells have a limit on +# the size of here documents, and old seds have small limits too. + +rm -f conftest.tail +while : +do + ac_lines=`grep -c . conftest.vals` + # grep -c gives empty output for an empty file on some AIX systems. + if test -z "$ac_lines" || test "$ac_lines" -eq 0; then break; fi + # Write a limited-size here document to conftest.frag. + echo ' cat > conftest.frag <> $CONFIG_STATUS + sed ${ac_max_here_lines}q conftest.vals >> $CONFIG_STATUS + echo 'CEOF + sed -f conftest.frag conftest.in > conftest.out + rm -f conftest.in + mv conftest.out conftest.in +' >> $CONFIG_STATUS + sed 1,${ac_max_here_lines}d conftest.vals > conftest.tail + rm -f conftest.vals + mv conftest.tail conftest.vals +done +rm -f conftest.vals + +cat >> $CONFIG_STATUS <<\EOF + rm -f conftest.frag conftest.h + echo "/* $ac_file. Generated automatically by configure. 
*/" > conftest.h + cat conftest.in >> conftest.h + rm -f conftest.in + if cmp -s $ac_file conftest.h 2>/dev/null; then + echo "$ac_file is unchanged" + rm -f conftest.h + else + # Remove last slash and all that follows it. Not all systems have dirname. + ac_dir=`echo $ac_file|sed 's%/[^/][^/]*$%%'` + if test "$ac_dir" != "$ac_file" && test "$ac_dir" != .; then + # The file is in a subdirectory. + test ! -d "$ac_dir" && mkdir "$ac_dir" + fi + rm -f $ac_file + mv conftest.h $ac_file + fi +fi; done + +EOF +cat >> $CONFIG_STATUS <> $CONFIG_STATUS <<\EOF +chmod +x support/fixscript + +exit 0 +EOF +chmod +x $CONFIG_STATUS +rm -fr confdefs* $ac_clean_files +test "$no_create" = yes || ${CONFIG_SHELL-/bin/sh} $CONFIG_STATUS || exit 1 + + +cat < /dev/null ; then + : +else + cat <. Due to +dnl the submission format and significant changes to autoconf's internal +dnl architecture and building-block macros, I'm waiting until INN is switched +dnl to autoconf 2.52 or later and we can convert this file into a bunch of +dnl separate files before submitting macros to that archive. +dnl +dnl If a check is any way non-trivial, please package it up in a macro with +dnl AC_DEFUN. This will allow us to easily break up this (far too long) file +dnl into a directory full of .m4 files for particular checks once we switch to +dnl autoconf 2.52 or later. Please also put any long code blocks into a +dnl separate macro rather than in-line in the test macro; this will make +dnl quoting issues much easier. See the existing tests for details on how to +dnl do this. +dnl +dnl Try to do as much with AC_DEFINE and as little with AC_SUBST as is +dnl reasonable; obviously, makefile things like library paths and so forth and +dnl paths to programs have to use AC_SUBST, but any compile-time parameters +dnl are easier to handle with AC_DEFINE. (And AC_SUBST is slower.) +dnl +dnl And remember: If you don't have any alternative available if your check +dnl for something fails, and there's no purpose served in aborting configure +dnl instead of the compile if what you're checking for is missing, don't +dnl bother checking for it. Compile-time errors often produce a lot more +dnl useful information for someone debugging a problem than configure-time +dnl errors. + +AC_REVISION($Revision: 7811 $)dnl +AC_PREREQ(2.13) +AC_INIT(Makefile.global.in) +AC_CONFIG_AUX_DIR(support) +AC_PREFIX_DEFAULT(/usr/local/news) + +dnl Make sure $prefix is set so that we can use it internally. +test x"$prefix" = xNONE && prefix="$ac_default_prefix" + +dnl Linking against in-tree libraries need to know the current directory (the +dnl top of the build directory, not the source directory). +builddir=`pwd` +AC_SUBST(builddir) + +dnl Earlier versions of INN used --with-largefiles, which was the wrong flag +dnl from the perspective of what --with and --enable are supposed to mean. +dnl Catch the old usage and error out. +if test x"$with_largefiles" != x ; then + AC_MSG_ERROR([Use --enable-largefiles instead of --with-largefiles]) +fi + +dnl Used to check whether -o can be provided with -c with the chosen +dnl compiler. We need this if we're not using libtool so that object files +dnl can be built in subdirectories. This macro is stolen shamelessly from +dnl the libtool macros; there's a better version in Autoconf that we should +dnl eventually use that tests twice. 
+AC_DEFUN([INN_PROG_CC_C_O], +[AC_REQUIRE([AC_OBJEXT]) +AC_MSG_CHECKING([if $CC supports -c -o file.$ac_objext]) +AC_CACHE_VAL([inn_cv_compiler_c_o], +[rm -f -r conftest 2>/dev/null +mkdir conftest +cd conftest +echo "int some_variable = 0;" > conftest.$ac_ext +mkdir out +# According to Tom Tromey, Ian Lance Taylor reported there are C compilers +# that will create temporary files in the current directory regardless of +# the output directory. Thus, making CWD read-only will cause this test +# to fail, enabling locking or at least warning the user not to do parallel +# builds. +chmod -w . +save_CFLAGS="$CFLAGS" +CFLAGS="$CFLAGS -o out/conftest2.$ac_objext" +compiler_c_o=no +if { (eval $ac_compile) 2> out/conftest.err; } \ + && test -s out/conftest2.$ac_objext; then + # The compiler can only warn and ignore the option if not recognized + # So say no if there are warnings + if test -s out/conftest.err; then + inn_cv_compiler_c_o=no + else + inn_cv_compiler_c_o=yes + fi +else + # Append any errors to the config.log. + cat out/conftest.err 1>&AC_FD_CC + inn_cv_compiler_c_o=no +fi +CFLAGS="$save_CFLAGS" +chmod u+w . +rm -f conftest* out/* +rmdir out +cd .. +rmdir conftest +rm -f -r conftest 2>/dev/null]) +compiler_c_o=$inn_cv_compiler_c_o +AC_MSG_RESULT([$compiler_c_o])]) + +dnl A few tests need to happen before any of the libtool tests in order to +dnl avoid error messages. We therefore lift them up to the top of the file. +AC_PROG_CC +AC_AIX +AC_ISC_POSIX +INN_PROG_CC_C_O + +dnl Check to see if the user wants to use libtool. We only invoke the libtool +dnl setup macros if they do. Keep this call together with the libtool setup +dnl so that the arguments to configure will be together in configure --help. +inn_use_libtool=no +AC_ARG_ENABLE(libtool, + [ --enable-libtool Use libtool for lib generation [default=no]], + if test "$enableval" = yes ; then + inn_use_libtool=yes + fi) +if test x"$inn_use_libtool" = xyes ; then + AC_PROG_LIBTOOL + EXTLIB='la' + EXTOBJ='lo' + LIBTOOL='$(top)/libtool' + LIBTOOLCC='$(top)/libtool --mode=compile' + LIBTOOLLD='$(top)/libtool --mode=link' + CCOUTPUT='-c -o $@ $<' +else + AC_CANONICAL_HOST + EXTLIB='a' + EXTOBJ='o' + LIBTOOL='' + LIBTOOLCC='' + LIBTOOLLD='' + if test x"$compiler_c_o" = xyes ; then + CCOUTPUT='-c -o $@ $<' + else + CCOUTPUT='-c $< && if test x"$(@F)" != x"$@" ; then mv $(@F) $@ ; fi' + fi + AC_SUBST(LIBTOOL) +fi +AC_SUBST(EXTLIB) +AC_SUBST(EXTOBJ) +AC_SUBST(LIBTOOLCC) +AC_SUBST(LIBTOOLLD) +AC_SUBST(CCOUTPUT) + +dnl INN has quite a few more configurable paths than autoconf supports by +dnl default. For right now, those additional paths are configured with +dnl --with-*-dir options. This is the generic macro for those arguments; it +dnl takes the name of the directory, the path relative to $prefix if none +dnl given to configure, the variable to set, and the help string. +AC_DEFUN([INN_ARG_DIR], +[AC_ARG_WITH([$1-dir], [$4], [$3=$with_$1_dir], [$3=$prefix/$2]) +AC_SUBST($3)]) + +dnl And here are all the paths. +dnl +dnl FIXME: We should honor bindir, libdir, includedir, and mandir at the +dnl least, and we should use libdir over --with-lib-dir. 
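
As a rough illustration of how one of the INN_ARG_DIR calls below turns into shell once autoconf expands it, the DBDIR case is sketched by hand here; the exact generated text differs in minor details.

# Approximate expansion of INN_ARG_DIR(db, db, DBDIR, ...):
# Check whether --with-db-dir or --without-db-dir was given.
if test "${with_db_dir+set}" = set; then
  withval="$with_db_dir"
  DBDIR=$with_db_dir
else
  DBDIR=$prefix/db
fi
# AC_SUBST(DBDIR) then adds the s%@DBDIR@%$DBDIR%g line seen in config.status.
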
+INN_ARG_DIR(control, bin/control, CONTROLDIR, + [ --with-control-dir=PATH Path for control programs [PREFIX/bin/control]]) +INN_ARG_DIR(db, db, DBDIR, + [ --with-db-dir=PATH Path for news database files [PREFIX/db]]) +INN_ARG_DIR(doc, doc, DOCDIR, + [ --with-doc-dir=PATH Path for news documentation [PREFIX/doc]]) +INN_ARG_DIR(etc, etc, ETCDIR, + [ --with-etc-dir=PATH Path for news config files [PREFIX/etc]]) +INN_ARG_DIR(filter, bin/filter, FILTERDIR, + [ --with-filter-dir=PATH Path for embedded filters [PREFIX/bin/filter]]) +INN_ARG_DIR(lib, lib, LIBDIR, + [ --with-lib-dir=PATH Path for news library files [PREFIX/lib]]) +INN_ARG_DIR(log, log, LOGDIR, + [ --with-log-dir=PATH Path for news logs [PREFIX/log]]) +INN_ARG_DIR(run, run, RUNDIR, + [ --with-run-dir=PATH Path for news PID/runtime files [PREFIX/run]]) +INN_ARG_DIR(spool, spool, SPOOLDIR, + [ --with-spool-dir=PATH Path for news storage [PREFIX/spool]]) +INN_ARG_DIR(tmp, tmp, tmpdir, + [ --with-tmp-dir=PATH Path for temporary files [PREFIX/tmp]]) + +dnl This is actually given to AC_SUBST later on when we check whether the +dnl system has the LOG_NEWS facility. +AC_ARG_WITH(syslog-facility, +[ --with-syslog-facility=LOG_FAC Syslog facility [LOG_NEWS or LOG_LOCAL1]], + SYSLOG_FACILITY=$with_syslog_facility, + SYSLOG_FACILITY=none) + +dnl INN allows the user and group INN will run as to be specified, as well as +dnl the user to receive nightly reports and the like. These are all fairly +dnl similar, so factor the commonality into this macro. Takes the name of +dnl what we're looking for, the default, the variable to set, the help string, +dnl and the comment for config.h. +AC_DEFUN([INN_ARG_USER], +[AC_ARG_WITH([news-$1], [$4], [$3=$with_news_$1], [$3=$2]) +AC_SUBST($3) +AC_DEFINE_UNQUOTED($3, "$[$3]", [$5])]) + +dnl And here they are. +INN_ARG_USER(user, news, NEWSUSER, + [ --with-news-user=USER News user name [news]], + [The user that INN should run as.]) +INN_ARG_USER(group, news, NEWSGRP, + [ --with-news-group=GROUP News group name [news]], + [The group that INN should run as.]) +INN_ARG_USER(master, usenet, NEWSMASTER, + [ --with-news-master=USER News master (address for reports) [usenet]], + [The user who gets all INN-related e-mail.]) + +dnl INN defaults to a umask of 002 with the corresponding directory and file +dnl permissions, mostly for historical reasons. If the user sets the umask to +dnl something else, change all of the permissions. +NEWSUMASK=02 +FILEMODE=0664 +DIRMODE=0775 +RUNDIRMODE=0770 +AC_ARG_WITH(news-umask, + [ --with-news-umask=UMASK umask for news files [002]], + with_news_umask=`echo "$with_news_umask" | sed 's/^0*//'` + if test "x$with_news_umask" = x22 ; then + NEWSUMASK=022 + FILEMODE=0644 + DIRMODE=0755 + RUNDIRMODE=0750 + else + if test "x$with_news_umask" != x2 ; then + AC_MSG_ERROR(Valid umasks are 02 or 022) + fi + fi) +AC_SUBST(NEWSUMASK) +AC_SUBST(FILEMODE) +AC_SUBST(DIRMODE) +AC_SUBST(RUNDIRMODE) +AC_DEFINE_UNQUOTED(ARTFILE_MODE, $FILEMODE, + [Mode that incoming articles are created with.]) +AC_DEFINE_UNQUOTED(BATCHFILE_MODE, $FILEMODE, + [Mode that batch files are created with.]) +AC_DEFINE_UNQUOTED(GROUPDIR_MODE, $DIRMODE, + [Mode that directories are created with.]) +AC_DEFINE_UNQUOTED(NEWSUMASK, $NEWSUMASK, + [The umask used by all INN programs.]) + +dnl inews used to be installed setgid, but may not be secure. Only do this if +dnl it's explicitly requested at configure time. 
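
The user, group, master and umask options above are all driven from the configure command line; a typical invocation (values are only examples) looks like the following, and any umask other than 02 or 022 is rejected with an error.

# Example only: a umask of 022 switches the generated modes to
# FILEMODE=0644, DIRMODE=0755 and RUNDIRMODE=0750.
./configure --with-news-user=news --with-news-group=news \
            --with-news-master=usenet --with-news-umask=022
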
+INEWSMODE=0550 +AC_ARG_ENABLE(setgid-inews, + [ --enable-setgid-inews Install inews setgid], + if test "x$enableval" = xyes ; then + INEWSMODE=02555 + fi) +AC_SUBST(INEWSMODE) + +dnl rnews used to be installed setuid root so that it could be run by the uucp +dnl user for incoming batches, but this isn't necessary unless you're using +dnl UUCP (which most people aren't) and only setuid news is required. Only do +dnl this if it's explicitly requested at configure time. +RNEWSGRP=$NEWSGRP +RNEWSMODE=0500 +AC_ARG_ENABLE(uucp-rnews, + [ --enable-uucp-rnews Install rnews setuid, group uucp], + if test "x$enableval" = xyes ; then + RNEWSGRP=uucp + RNEWSMODE=04550 + fi) +AC_SUBST(RNEWSGRP) +AC_SUBST(RNEWSMODE) + +dnl Choose the log compression method; the argument should not be a full path, +dnl just the name of the compression type. +AC_ARG_WITH(log-compress, + [ --with-log-compress=METHOD Log compression method [gzip]], + LOG_COMPRESS=$with_log_compress, + LOG_COMPRESS=gzip) +case "$LOG_COMPRESS" in +bzip2) LOG_COMPRESSEXT=".bz2" ;; +gzip) LOG_COMPRESSEXT=".gz" ;; +*) LOG_COMPRESSEXT=".Z" ;; +esac +AC_SUBST(LOG_COMPRESS) +AC_SUBST(LOG_COMPRESSEXT) + +dnl inndstart by default only allows ports 119 and 433 below 1024; if the user +dnl wants to use some other port as well, they must use this option. +AC_ARG_WITH(innd-port, + [ --with-innd-port=PORT Additional low-numbered port for inndstart], + AC_DEFINE_UNQUOTED(INND_PORT, $with_innd_port, + [Additional valid low-numbered port for inndstart.])) + +dnl By default, we omit all IPv6 support. This option enables it. +AC_ARG_ENABLE(ipv6, + [ --enable-ipv6 Enable IPv6 support], + if test "x$enableval" = xyes ; then + inn_enable_ipv6_tests=yes + AC_DEFINE(HAVE_INET6, 1, [Define to enable IPv6 support.]) + fi) + +dnl Maximum number of sockets that can be listened on. +AC_ARG_WITH(max-sockets, + [ --with-max-sockets=N Maximum number of listening sockets in innd],, + [with_max_sockets=15]) +AC_DEFINE_UNQUOTED(MAX_SOCKETS, $with_max_sockets, + [Maximum number of sockets that innd can listen on.]) + +dnl This will eventually be a runtime option rather than a compile-time +dnl option. +AC_ARG_ENABLE(tagged-hash, + [ --enable-tagged-hash Use tagged hash table for history], + if test "x$enableval" = xyes ; then + DO_DBZ_TAGGED_HASH=DO + AC_DEFINE(DO_TAGGED_HASH, 1, + [Define to use tagged hash for the history file.]) + else + DO_DBZ_TAGGED_HASH=DONT + fi) +AC_SUBST(DO_DBZ_TAGGED_HASH) + +dnl Whether to enable the keyword generation code in innd. Use of this code +dnl requires a regular expression library, which is checked for later on. +inn_enable_keywords=0 +AC_ARG_ENABLE(keywords, + [ --enable-keywords Automatic keyword generation support], + if test x"$enableval" = xyes ; then + inn_enable_keywords=1 + fi) +AC_DEFINE_UNQUOTED(DO_KEYWORDS, $inn_enable_keywords, + [Define to 1 to compile in support for keyword generation code.]) + +dnl Whether to use the OS flags to enable large file support. Ideally this +dnl should just always be turned on if possible and the various parts of INN +dnl that read off_t's from disk should adjust somehow to the size, but INN +dnl isn't there yet. Currently tagged hash doesn't work with large file +dnl support due to assumptions about the size of off_t. 
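
The --enable-ipv6 option above is what sets the inn_enable_ipv6_tests shell variable consulted by the IPv6 probes earlier in the generated script; an approximate sketch of its autoconf 2.13 expansion:

# Check whether --enable-ipv6 or --disable-ipv6 was given.
if test "${enable_ipv6+set}" = set; then
  enableval="$enable_ipv6"
  if test "x$enableval" = xyes ; then
    inn_enable_ipv6_tests=yes
    cat >> confdefs.h <<\EOF
#define HAVE_INET6 1
EOF
  fi
fi
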
+AC_ARG_ENABLE(largefiles, + [ --enable-largefiles Support for files larger than 2GB [default=no]], + [case "${enableval}" in + yes) inn_enable_largefiles=yes + if test x"$DO_DBZ_TAGGED_HASH" = xDO ; then +AC_MSG_ERROR([--enable-tagged-hash conflicts with --enable-largefiles.]) + fi ;; + no) inn_enable_largefiles=no ;; + *) AC_MSG_ERROR(invalid argument to --enable-largefiles) ;; + esac]) + +dnl Override the automatically detected path to sendmail. Used later on. +AC_ARG_WITH(sendmail, + [ --with-sendmail=PATH Path to sendmail], + SENDMAIL=$with_sendmail) + +dnl Specify the path to the Kerberos libraries if the user wants to build +dnl auth_krb5. Note that we don't search far and wide for the libraries if +dnl the user doesn't specify the path. +AC_ARG_WITH(kerberos, + [ --with-kerberos=PATH Path to Kerberos v5 (for auth_krb5)], + [if test x"$with_kerberos" != xno ; then + KRB5_LDFLAGS="-L$with_kerberos/lib" + KRB5_INC="-I$with_kerberos/include" + fi]) + +dnl Checks for embedded interpretors. +dnl +dnl FIXME: These should ideally be combined with the later logic to +dnl determine the version, determine the compiler and linker flags, etc. +AC_ARG_WITH(perl, + [ --with-perl Embedded Perl script support [default=no]], + [case "${withval}" in + yes) DO_PERL=DO + AC_DEFINE(DO_PERL, 1, [Define to compile in Perl script support.]) + ;; + no) DO_PERL=DONT ;; + *) AC_MSG_ERROR(invalid argument to --with-perl) ;; + esac], + DO_PERL=DONT) + +AC_ARG_WITH(python, + [ --with-python Embedded Python module support [default=no]], + [case "${withval}" in + yes) DO_PYTHON=define + AC_DEFINE(DO_PYTHON, 1, + [Define to compile in Python module support.]) + ;; + no) DO_PYTHON=DONT ;; + *) AC_MSG_ERROR(invalid argument to --with-python) ;; + esac], + DO_PYTHON=DONT) + +dnl Set some configuration file defaults from the machine hostname. +HOSTNAME=`hostname 2> /dev/null || uname -n` +AC_SUBST(HOSTNAME) + +dnl Checks for programs. +AC_PROG_GCC_TRADITIONAL +AC_PROG_LEX +AC_PROG_MAKE_SET +AC_PROG_RANLIB +AC_PROG_YACC + +dnl On MacOS X Server, -traditional-cpp is needed for gcc for compiling as +dnl well as preprocessing according to Miro Jurisic . +case "$CPP" in +*-traditional-cpp*) + CFLAGS="-traditional-cpp $CFLAGS" + ;; +esac + +case "$host" in + +dnl HP-UX's native compiler needs a special flag to turn on ANSI, and needs +dnl -g on link as well as compile for debugging to work. +*hpux*) + if test x"$GCC" != xyes ; then + dnl Need flag to turn on ANSI. + CFLAGS="$CFLAGS -Ae" + + dnl Need -g on link command for debug to work properly. + case "$CFLAGS" in + *-g*) + LDFLAGS="$LDFLAGS -g" + ;; + esac + fi + ;; + +dnl OSX needs '-multiply_defined suppress' +*darwin*) + LDFLAGS="$LDFLAGS -multiply_defined suppress" + ;; + +dnl From Boyd Gerber , needed in some cases to compile +dnl the bison-generated parser for innfeed.conf. +*UnixWare*|*unixware*|*-sco3*) + if test x"$GCC" != xyes ; then + CFLAGS="$CFLAGS -Kalloca" + fi +esac + +dnl Checks for pathnames. + +dnl See if we have ctags; if so, set CTAGS to its full path plus the -t -w +dnl options. Otherwise, set CTAGS to echo. +AC_PATH_PROG(CTAGS, ctags, echo) +if test x"$CTAGS" != xecho ; then + CTAGS="$CTAGS -t -w" +fi + +dnl Use INN_PATH_PROG if it's an error not to find a program. 
+AC_DEFUN([INN_ENSURE_PATH_PROG], +[AC_PATH_PROG($1, $2) +if test -z "${$1}" ; then + AC_MSG_ERROR($2 was not found in path and is required) +fi]) + +INN_ENSURE_PATH_PROG(_PATH_AWK,awk) +INN_ENSURE_PATH_PROG(_PATH_EGREP,egrep) +INN_ENSURE_PATH_PROG(_PATH_PERL,perl) +INN_ENSURE_PATH_PROG(_PATH_SH,sh) +INN_ENSURE_PATH_PROG(_PATH_SED,sed) +INN_ENSURE_PATH_PROG(_PATH_SORT,sort) +AC_PATH_PROGS(_PATH_UUX,uux,uux) + +dnl Check for a required version of Perl. The separate shell variable and +dnl the changequotes are necessary for autoconf 2.13; autoconf 2.50 will +dnl provide a different interface that will allow this to work correctly. +changequote(<<,>>)dnl +inn_perl_command='print $]' +changequote([,])dnl +AC_DEFUN([INN_PERL_VERSION], +[AC_CACHE_CHECK(for Perl version, inn_cv_perl_version, +[if $_PATH_PERL -e 'require $1;' > /dev/null 2>&1 ; then + inn_cv_perl_version=`$_PATH_PERL -e "$inn_perl_command"` +else + AC_MSG_ERROR(Perl $1 or greater is required) +fi])]) + +dnl Embedded Perl requires 5.004. controlchan requires 5.004_03. Other +dnl things may work with 5.003, but make 5.004_03 the minimum level; anyone +dnl should really have at least that these days. +INN_PERL_VERSION(5.004_03) + +dnl Look for PGP 5.0's pgpv, then pgp, then pgpgpg (not sure why anyone would +dnl have pgpgpg and not gpgv, but it doesn't hurt). Separately look for +dnl GnuPG (which we prefer). +pgpverify=true +AC_PATH_PROGS(PATH_GPGV, gpgv) +AC_PATH_PROGS(_PATH_PGP, pgpv pgp pgpgpg) +if test -z "$_PATH_PGP" && test -z "$PATH_GPGV" ; then + pgpverify=false +fi +AC_SUBST(pgpverify) + +dnl Look for a program that takes an ftp URL as a command line argument and +dnl retrieves the file to the current directory. Shame we can't also use +dnl lynx -source; it only writes to stdout. ncftp as of version 3 doesn't +dnl support this any more (it comes with ncftpget instead), but if someone +dnl has ncftp and not ncftpget they have an earlier version. +AC_PATH_PROGS(GETFTP, wget ncftpget ncftp, $prefix/bin/simpleftp) + +dnl Look for both compress and gzip, since the UUCP batching scripts require +dnl both. If we're using a log compression method other than compress or +dnl gzip, check for it too and make sure whatever log compressor we're using +dnl exists. If we don't find compress or gzip for the UUCP scripts, just +dnl use the bare program names in the hope that the path will be better at +dnl the time the script runs (or that the script will never run). +case "$LOG_COMPRESS" in +compress|gzip) ;; +*) INN_ENSURE_PATH_PROG(LOG_COMPRESS, "$LOG_COMPRESS") +esac +AC_PATH_PROG(COMPRESS, compress, compress) +if test x"$LOG_COMPRESS" = xcompress ; then + if test x"$COMPRESS" = xcompress ; then + AC_MSG_ERROR(compress not found but specified for log compression) + fi + LOG_COMPRESS="$COMPRESS" +fi +AC_PATH_PROG(GZIP, gzip, gzip) +if test x"$LOG_COMPRESS" = xgzip ; then + if test x"$GZIP" = xgzip ; then + AC_MSG_ERROR(gzip not found but specified for log compression) + fi + LOG_COMPRESS="$GZIP" +fi + +dnl Figure out what program to use to uncompress .Z files. On systems that +dnl have gzip but don't have compress, we can use gzip for this purpose and +dnl should rather than hoping compres will be found at runtime. +if test x"$COMPRESS" = xcompress && test x"$GZIP" != xgzip ; then + UNCOMPRESS="$GZIP -d" +else + UNCOMPRESS="$COMPRESS -d" +fi +AC_SUBST(UNCOMPRESS) + +dnl Search for sendmail, checking the path first and then some common +dnl locations. If --with-sendmail was given, that path overrides. 
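+dnl (For example -- the path is illustrative only -- a site whose mailer
+dnl lives somewhere unusual can run
+dnl
+dnl   ./configure --with-sendmail=/usr/lib/sendmail
+dnl
+dnl which takes the given path as-is and skips the search below.)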
+if test "${with_sendmail+set}" = set ; then + AC_MSG_CHECKING(for sendmail) + AC_MSG_RESULT($SENDMAIL) +else + AC_PATH_PROG(SENDMAIL, sendmail, , "/usr/sbin:/usr/lib") + if test -z "$SENDMAIL" ; then + AC_MSG_ERROR(sendmail not found, re-run with --with-sendmail) + fi +fi + +dnl FIXME: innshellvars* wants to know if we have uustat, send-uucp expects +dnl it to be in the old subst DO/DONT format. Should take a path. +AC_CHECK_PROG(HAVE_UUSTAT, uustat, DO, DONT) +AC_SUBST(HAVE_UUSTAT) + +dnl If we're compiling with Python support, make sure Python is available. +if test x"$DO_PYTHON" = xdefine ; then + INN_ENSURE_PATH_PROG(_PATH_PYTHON, python) +fi + +dnl Search for a particular library, and if found, add that library to the +dnl specified variable (the third argument) and run the commands given in the +dnl fourth argument, if any. This is for libraries we don't want to pollute +dnl LIBS with. +AC_DEFUN([INN_SEARCH_AUX_LIBS], +[inn_save_LIBS=$LIBS +LIBS=${$3} +AC_SEARCH_LIBS($1, $2, + [$3=$LIBS + $4], $5, $6) +LIBS=$inn_save_LIBS +AC_SUBST($3)]) + +dnl Checks for libraries. Use AC_SEARCH_LIBS where possible to avoid +dnl adding libraries when the function is found in libc. In several +dnl cases, we explicitly just add the library to LIBS on success rather +dnl than using default actions so as not to clutter config.h with defines +dnl we never use. + +dnl Check for setproctitle in libc first, then libutil if not found there. +dnl We have a replacement function if we can't find it, and then we also need +dnl to check for pstat. +AC_SEARCH_LIBS(setproctitle, util, + [AC_DEFINE(HAVE_SETPROCTITLE, 1, + [Define if you have the setproctitle function.])], + [LIBOBJS="$LIBOBJS setproctitle.o" + AC_CHECK_FUNCS(pstat)]) + +dnl The rat's nest of networking libraries. The common cases are not to +dnl need any extra libraries, or to need -lsocket -lnsl. We need to avoid +dnl linking with libnsl unless we need it, though, since on some OSes where +dnl it isn't necessary it will totally break networking. Unisys also +dnl includes gethostbyname in libsocket but needs libnsl for socket(). +AC_SEARCH_LIBS(gethostbyname, nsl) +AC_SEARCH_LIBS(socket, socket, , + [AC_CHECK_LIB(nsl, socket, LIBS="$LIBS -lsocket -lnsl", , -lsocket)]) + +dnl Check for inet_aton. We have our own, but on Solaris the version in +dnl libresolv is more lenient in ways that Solaris's internal DNS resolution +dnl code requires, so if we use our own *and* link with libresolv (which may +dnl be forced by Perl) DNS resolution fails. +AC_SEARCH_LIBS(inet_aton, resolv) + +dnl Search for various additional libraries used by portions of INN. +INN_SEARCH_AUX_LIBS(crypt, crypt, CRYPT_LIB) +INN_SEARCH_AUX_LIBS(getspnam, shadow, SHADOW_LIB) + +dnl IRIX has a PAM library with the right symbols but no header files suitable +dnl for use with it, so we have to check the header files first and then only +dnl if one is found do we check for the library. +inn_check_pam=1 +AC_CHECK_HEADERS([pam/pam_appl.h], , + [AC_CHECK_HEADER([security/pam_appl.h], , [inn_check_pam=0])]) +if test x"$inn_check_pam" = x1; then + INN_SEARCH_AUX_LIBS([pam_start], [pam], [PAM_LIB], + [AC_DEFINE([HAVE_PAM], 1, [Define if you have PAM.])]) +fi + +dnl If keyword generation support was requested, check for the appropriate +dnl libraries. 
+if test x"$inn_enable_keywords" = x1 ; then + INN_SEARCH_AUX_LIBS(regexec, regex, REGEX_LIB, , + [AC_MSG_ERROR(no usable regular expression library found)]) +fi + +dnl Check for whether the user wants to compile with BerkeleyDB, and if so +dnl what the path to the various components of it is. +AC_DEFUN([INN_LIB_BERKELEYDB], +[AC_ARG_WITH(berkeleydb, + [ --with-berkeleydb[=PATH] Enable BerkeleyDB (for ovdb overview method)], + BERKELEY_DB_DIR=$with_berkeleydb, + BERKELEY_DB_DIR=no) +AC_MSG_CHECKING(if BerkeleyDB is desired) +if test x"$BERKELEY_DB_DIR" = xno ; then + AC_MSG_RESULT(no) + BERKELEY_DB_LDFLAGS= + BERKELEY_DB_CFLAGS= + BERKELEY_DB_LIB= +else + AC_MSG_RESULT(yes) + AC_MSG_CHECKING(for BerkeleyDB location) + if test x"$BERKELEY_DB_DIR" = xyes ; then + for v in BerkeleyDB BerkeleyDB.3.0 BerkeleyDB.3.1 BerkeleyDB.3.2 \ + BerkeleyDB.3.3 BerkeleyDB.4.0 BerkeleyDB.4.1 BerkeleyDB.4.2 \ + BerkeleyDB.4.3 BerkeleyDB.4.4 BerkeleyDB.4.5 BerkeleyDB.4.6; do + for d in $prefix /usr/local /opt /usr ; do + if test -d "$d/$v" ; then + BERKELEY_DB_DIR="$d/$v" + break + fi + done + done + fi + if test x"$BERKELEY_DB_DIR" = xyes ; then + for v in db46 db45 db44 db43 db42 db41 db4 db3 db2 ; do + if test -d "/usr/local/include/$v" ; then + BERKELEY_DB_LDFLAGS="-L/usr/local/lib" + BERKELEY_DB_CFLAGS="-I/usr/local/include/$v" + BERKELEY_DB_LIB="-l$v" + AC_MSG_RESULT(FreeBSD locations) + break + fi + done + if test x"$BERKELEY_DB_LIB" = x ; then + for v in db44 db43 db42 db41 db4 db3 db2 ; do + if test -d "/usr/include/$v" ; then + BERKELEY_DB_CFLAGS="-I/usr/include/$v" + BERKELEY_DB_LIB="-l$v" + AC_MSG_RESULT(Linux locations) + break + fi + done + if test x"$BERKELEY_DB_LIB" = x ; then + BERKELEY_DB_LIB=-ldb + AC_MSG_RESULT(trying -ldb) + fi + fi + else + BERKELEY_DB_LDFLAGS="-L$BERKELEY_DB_DIR/lib" + BERKELEY_DB_CFLAGS="-I$BERKELEY_DB_DIR/include" + BERKELEY_DB_LIB="-ldb" + AC_MSG_RESULT($BERKELEY_DB_DIR) + fi + AC_DEFINE(USE_BERKELEY_DB, 1, [Define if BerkeleyDB is available.]) +fi +AC_SUBST(BERKELEY_DB_LDFLAGS) +AC_SUBST(BERKELEY_DB_CFLAGS) +AC_SUBST(BERKELEY_DB_LIB)]) +INN_LIB_BERKELEYDB + +dnl The dbm libraries are a special case. If we're building with BerkeleyDB, +dnl just use the ndbm support provided by it. +if test x"$BERKELEY_DB_LIB" != x ; then + DBM_INC="$BERKELEY_DB_CFLAGS" + DBM_LIB="$BERKELEY_DB_LDFLAGS $BERKELEY_DB_LIB" + AC_SUBST([DBM_LIB]) + AC_DEFINE([HAVE_BDB_DBM], 1, + [Define if the BerkeleyDB dbm compatibility layer is available.]) +else + INN_SEARCH_AUX_LIBS([dbm_open], [ndbm dbm], [DBM_LIB], + [AC_DEFINE([HAVE_DBM], 1, [Define if you have a dbm library.])]) + DBM_INC= +fi +AC_SUBST([DBM_INC]) + +dnl Check for whether the user wants to compile with OpenSSL, and if so what +dnl the path to the various components of it is. 
+AC_DEFUN([INN_LIB_OPENSSL], +[AC_ARG_WITH(openssl, + [ --with-openssl=PATH Enable OpenSSL (for NNTP over SSL support)], + OPENSSL_DIR=$with_openssl, + OPENSSL_DIR=no) +AC_MSG_CHECKING(if OpenSSL is desired) +if test x"$OPENSSL_DIR" = xno ; then + AC_MSG_RESULT(no) + SSL_BIN= + SSL_INC= + SSL_LIB= +else + AC_MSG_RESULT(yes) + AC_MSG_CHECKING(for OpenSSL location) + if test x"$OPENSSL_DIR" = xyes ; then + for dir in $prefix /usr/local/ssl /usr/lib/ssl /usr/ssl /usr/pkg \ + /usr/local /usr ; do + if test -f "$dir/include/openssl/ssl.h" ; then + OPENSSL_DIR=$dir + break + fi + done + fi + if test x"$OPENSSL_DIR" = xyes ; then + AC_MSG_ERROR(Can not find OpenSSL) + else + AC_MSG_RESULT($OPENSSL_DIR) + SSL_BIN="${OPENSSL_DIR}/bin" + SSL_INC="-I${OPENSSL_DIR}/include" + + # This is mildly tricky. In order to satisfy most linkers, libraries + # have to be listed in the right order, which means that libraries + # with dependencies on other libraries need to be listed first. But + # the -L flag for the OpenSSL library directory needs to go first of + # all. So put the -L flag into LIBS and accumulate actual libraries + # into SSL_LIB, and then at the end, restore LIBS and move -L to the + # beginning of SSL_LIB. + inn_save_LIBS=$LIBS + LIBS="$LIBS -L${OPENSSL_DIR}/lib" + SSL_LIB='' + AC_CHECK_LIB(rsaref, RSAPublicEncrypt, + [AC_CHECK_LIB(RSAglue, RSAPublicEncrypt, + [SSL_LIB="-lRSAglue -lrsaref"], , -lrsaref)]) + AC_CHECK_LIB(crypto, BIO_new, + [AC_CHECK_LIB(dl, DSO_load, + SSL_LIB="-lcrypto -ldl $SSL_LIB", + SSL_LIB="-lcrypto $SSL_LIB", + -lcrypto -ldl $SSL_LIB)], + [AC_MSG_ERROR(Can not find OpenSSL)], + $SSL_LIB) + AC_CHECK_LIB(ssl, SSL_library_init, + [SSL_LIB="-lssl $SSL_LIB"], + [AC_MSG_ERROR(Can not find OpenSSL)], + $SSL_LIB) + SSL_LIB="-L${OPENSSL_DIR}/lib $SSL_LIB" + LIBS=$inn_save_LIBS + AC_DEFINE(HAVE_SSL, 1, [Define if OpenSSL is available.]) + fi +fi +AC_SUBST(SSL_BIN) +AC_SUBST(SSL_INC) +AC_SUBST(SSL_LIB)]) +INN_LIB_OPENSSL + +dnl Check for whether the user wants to compile with SASL, and if so what +dnl the path to the various components of it is. +AC_DEFUN([INN_LIB_SASL], +[AC_ARG_WITH(sasl, + [ --with-sasl=PATH Enable SASL (for imapfeed authentication)], + SASL_DIR=$with_sasl, + SASL_DIR=no) +AC_MSG_CHECKING(if SASL is desired) +if test x"$SASL_DIR" = xno ; then + AC_MSG_RESULT(no) + SASL_INC= + SASL_LIB= +else + AC_MSG_RESULT(yes) + AC_MSG_CHECKING(for SASL location) + if test x"$SASL_DIR" = xyes ; then + for dir in $prefix /usr/local/sasl /usr/sasl /usr/pkg /usr/local ; do + if test -f "$dir/include/sasl/sasl.h" ; then + SASL_DIR=$dir + break + fi + done + fi + if test x"$SASL_DIR" = xyes ; then + if test -f "/usr/include/sasl/sasl.h" ; then + SASL_INC=-I/usr/include/sasl + SASL_DIR=/usr + AC_MSG_RESULT($SASL_DIR) + inn_save_LIBS=$LIBS + AC_CHECK_LIB(sasl2, sasl_getprop, + [SASL_LIB=-lsasl2], [AC_MSG_ERROR(Can not find SASL)]) + LIBS=$inn_save_LIBS + AC_DEFINE(HAVE_SASL, 1, [Define if SASL is available.]) + else + AC_MSG_ERROR(Can not find SASL) + fi + else + AC_MSG_RESULT($SASL_DIR) + SASL_INC="-I${SASL_DIR}/include" + + inn_save_LIBS=$LIBS + LIBS="$LIBS -L${SASL_DIR}/lib" + AC_CHECK_LIB(sasl2, sasl_getprop, + [SASL_LIB="-L${SASL_DIR}/lib -lsasl2"], + [AC_MSG_ERROR(Can not find SASL)],) + LIBS=$inn_save_LIBS + AC_DEFINE(HAVE_SASL, 1, [Define if SASL is available.]) + fi +fi +AC_SUBST(SASL_INC) +AC_SUBST(SASL_LIB)]) +INN_LIB_SASL + +dnl Check for Kerberos libraries for auth_krb5, and if found define KRB5_AUTH +dnl to the relevant object file, which will enable compilation of it. 
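+dnl (Sketch of the flow, with an example path: configuring with
+dnl --with-kerberos=/usr/local/krb5 set KRB5_LDFLAGS and KRB5_INC earlier;
+dnl if krb5_parse_name is then found below, KRB5_AUTH is set to auth_krb5,
+dnl which is what causes the Kerberos authenticator to be built.)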
+if test x"$KRB5_INC" != x ; then + INN_SEARCH_AUX_LIBS(krb5_parse_name, krb5, KRB5_LIB, + [KRB5_AUTH="auth_krb5" + KRB5_LIB="$KRB5_LDFLAGS $KRB5_LIB -lk5crypto -lcom_err" + AC_SUBST(KRB5_AUTH) + AC_SUBST(KRB5_INC) + AC_CHECK_HEADERS([et/com_err.h])], , [$LIBS -lk5crypto -lcom_err]) +fi + +dnl Check for necessity of krb5_init_ets +dnl OSX does not require this function +if test x"$KRB5_INC" != x ; then + inn_save_LIBS=$LIBS + LIBS=$KRB5_LIB + AC_CHECK_FUNCS(krb5_init_ets) + LIBS=$inn_save_LIBS +fi + +dnl Libraries and flags for embedded Perl. Some distributions of Linux have +dnl Perl linked with gdbm but don't normally have gdbm installed, so on that +dnl platform only strip -lgdbm out of the Perl libraries. Leave it in on +dnl other platforms where it may be necessary (it isn't on Linux; Linux +dnl shared libraries can manage their own dependencies). Strip -lc out, which +dnl is added on some platforms, is unnecessary, and breaks compiles with +dnl -pthread (which may be added by Python). +dnl +dnl If we aren't compiling with large-file support, strip out the large file +dnl flags from inn_perl_core_flags; otherwise, innd/cc.c and lib/qio.c +dnl disagree over the size of an off_t. Since none of our calls into Perl +dnl use variables of type off_t, this should be harmless; in any event, it's +dnl going to be better than the innd/cc.c breakage. +if test x"$DO_PERL" = xDO ; then + AC_MSG_CHECKING(for Perl linkage) + inn_perl_core_path=`$_PATH_PERL -MConfig -e 'print $Config{archlibexp}'` + inn_perl_core_flags=`$_PATH_PERL -MExtUtils::Embed -e ccopts` + inn_perl_core_libs=`$_PATH_PERL -MExtUtils::Embed -e ldopts 2>&1 | tail -1` + inn_perl_core_libs=" $inn_perl_core_libs " + inn_perl_core_libs=`echo "$inn_perl_core_libs" | sed 's/ -lc / /'` + for i in $LIBS ; do + inn_perl_core_libs=`echo "$inn_perl_core_libs" | sed "s/ $i / /"` + done + case $host in + *-linux*) + inn_perl_core_libs=`echo "$inn_perl_core_libs" | sed 's/ -lgdbm / /'` + ;; + *-cygwin*) + inn_perl_libname=`$_PATH_PERL -MConfig -e 'print $Config{libperl}'` + inn_perl_libname=`echo "$inn_perl_libname" | sed 's/^lib//; s/\.a$//'` + inn_perl_core_libs="${inn_perl_core_libs}-l$inn_perl_libname" + ;; + esac + inn_perl_core_libs=`echo "$inn_perl_core_libs" | sed 's/^ *//'` + inn_perl_core_libs=`echo "$inn_perl_core_libs" | sed 's/ *$//'` + inn_perl_core_flags=" $inn_perl_core_flags " + if test x"$inn_enable_largefiles" != xyes ; then + for f in -D_LARGEFILE_SOURCE -D_FILE_OFFSET_BITS=64 -D_LARGE_FILES ; do + inn_perl_core_flags=`echo "$inn_perl_core_flags" | sed "s/ $f / /"` + done + fi + inn_perl_core_flags=`echo "$inn_perl_core_flags" | sed 's/^ *//'` + inn_perl_core_flags=`echo "$inn_perl_core_flags" | sed 's/ *$//'` + PERL_INC="$inn_perl_core_flags" + PERL_LIB="$inn_perl_core_libs" + AC_MSG_RESULT($inn_perl_core_path) +else + PERL_INC='' + PERL_LIB='' +fi +AC_SUBST(PERL_INC) +AC_SUBST(PERL_LIB) + +dnl Libraries and flags for embedded Python. +dnl +dnl FIXME: I wish there was a less icky way to get this. 
+if test x"$DO_PYTHON" = xdefine ; then + AC_MSG_CHECKING(for Python linkage) + py_prefix=`$_PATH_PYTHON -c 'import sys; print sys.prefix'` + py_ver=`$_PATH_PYTHON -c 'import sys; print sys.version[[:3]]'` + py_libdir="${py_prefix}/lib/python${py_ver}" + PYTHON_INC="-I${py_prefix}/include/python${py_ver}" + py_linkage="" + for py_linkpart in LIBS LIBC LIBM LOCALMODLIBS BASEMODLIBS \ + LINKFORSHARED LDFLAGS ; do + py_linkage="$py_linkage "`grep "^${py_linkpart}=" \ + $py_libdir/config/Makefile \ + | sed -e 's/^.*=//'` + done + PYTHON_LIB="-L$py_libdir/config -lpython$py_ver $py_linkage" + PYTHON_LIB=`echo $PYTHON_LIB | sed -e 's/[ \\t]*/ /g'` + AC_MSG_RESULT($py_libdir) +else + PYTHON_LIB="" + PYTHON_INC="" +fi +AC_SUBST(PYTHON_LIB) +AC_SUBST(PYTHON_INC) + +dnl If configuring with large file support, determine the right flags to +dnl use based on the platform. This is the wrong approach; autoconf 2.50 +dnl comes with a macro that takes the right approach. But this works well +dnl enough until we switch to autoconf 2.50 or later. +if test x"$inn_enable_largefiles" = xyes ; then + AC_MSG_CHECKING(for largefile linkage) + case "$host" in + *-aix4.[01]*) + AC_MSG_RESULT(no) + AC_MSG_ERROR([AIX before 4.2 does not support large files]) + ;; + *-aix4*) + AC_MSG_RESULT(ok) + LFS_CFLAGS="-D_LARGE_FILES" + LFS_LDFLAGS="" + LFS_LIBS="" + ;; + *-hpux*) + AC_MSG_RESULT(ok) + LFS_CFLAGS="-D_LARGEFILE_SOURCE -D_FILE_OFFSET_BITS=64" + LFS_LDFLAGS="" + LFS_LIBS="" + ;; + *-irix*) + AC_MSG_RESULT(no) + AC_MSG_ERROR([Large files not supported on this platform]) + ;; + *-linux*) + AC_MSG_RESULT(maybe) + LFS_CFLAGS="-D_LARGEFILE_SOURCE -D_FILE_OFFSET_BITS=64" + LFS_LDFLAGS="" + LFS_LIBS="" + AC_DEFINE([_GNU_SOURCE], 1, + [Some versions of glibc need this defined for pread/pwrite.]) + ;; + *-solaris*) + AC_MSG_RESULT(ok) + AC_PATH_PROG(GETCONF, getconf) + if test -z "$GETCONF" ; then + AC_MSG_ERROR([getconf required to configure large file support]) + fi + LFS_CFLAGS=`$GETCONF LFS_CFLAGS` + LFS_LDFLAGS=`$GETCONF LFS_LDFLAGS` + LFS_LIBS=`$GETCONF LFS_LIBS` + ;; + *) + AC_MSG_RESULT(maybe) + LFS_CFLAGS="-D_LARGEFILE_SOURCE -D_FILE_OFFSET_BITS=64" + LFS_LDFLAGS="" + LFS_LIBS="" + ;; + esac + AC_SUBST(LFS_CFLAGS) + AC_SUBST(LFS_LDFLAGS) + AC_SUBST(LFS_LIBS) +fi + +dnl Start by checking for standard C headers. AC_HEADER_STDC will set +dnl STDC_HEADERS if stdlib.h, stdarg.h, string.h, and float.h all exist, if +dnl memchr (and probably the other mem functions) is in string.h, if free (and +dnl probably malloc and friends) are in stdlib.h, and if ctype.h will work on +dnl high-bit characters. +AC_HEADER_STDC + +dnl Only if that wasn't set do we need to go hunting for other headers to +dnl include on non-ANSI systems and check for functions that all ANSI C +dnl systems should have. +if test x"$ac_cv_header_stdc" = xno ; then + AC_CHECK_HEADERS(memory.h stdlib.h strings.h) + AC_CHECK_FUNCS(memcpy strchr) +fi + +dnl Special checks for header files. +AC_HEADER_DIRENT +AC_HEADER_TIME +AC_HEADER_SYS_WAIT + +dnl Generic checks for header files. +AC_CHECK_HEADERS(crypt.h inttypes.h limits.h ndbm.h pam/pam_appl.h stdbool.h \ + stddef.h stdint.h string.h sys/bitypes.h sys/filio.h \ + sys/loadavg.h sys/param.h sys/select.h sys/sysinfo.h \ + sys/time.h unistd.h) + +dnl Some Linux systems have db1/ndbm.h instead of ndbm.h. Others have +dnl gdbm-ndbm.h. +if test x"$ac_cv_header_ndbm_h" = xno ; then + AC_CHECK_HEADERS(db1/ndbm.h gdbm-ndbm.h) +fi + +dnl Check to see if herrno is declared. 
+AC_DEFUN([INN_NEED_HERRNO_DECLARATION],
+[AC_CACHE_CHECK([whether h_errno must be declared], inn_cv_herrno_need_decl,
+[AC_TRY_COMPILE([#include <netdb.h>], [h_errno = 0;],
+    inn_cv_herrno_need_decl=no,
+    inn_cv_herrno_need_decl=yes)])
+if test "$inn_cv_herrno_need_decl" = yes ; then
+    AC_DEFINE([NEED_HERRNO_DECLARATION], 1,
+        [Define if <netdb.h> does not declare h_errno.])
+fi])
+INN_NEED_HERRNO_DECLARATION
+
+dnl The set of standard includes, used for checking if functions need to be
+dnl declared and for tests that need to use standard functions.
+define([_INN_HEADER_SOURCE],
+[#include <stdio.h>
+#include <sys/types.h>
+#if STDC_HEADERS
+# include <stdlib.h>
+# include <stddef.h>
+#else
+# if HAVE_STDLIB_H
+#  include <stdlib.h>
+# endif
+# if !HAVE_STRCHR
+#  define strchr index
+#  define strrchr rindex
+# endif
+#endif
+#if HAVE_STRING_H
+# if !STDC_HEADERS && HAVE_MEMORY_H
+#  include <memory.h>
+# endif
+# include <string.h>
+#else
+# if HAVE_STRINGS_H
+#  include <strings.h>
+# endif
+#endif
+#if HAVE_UNISTD_H
+# include <unistd.h>
+#endif])
+
+dnl See if a given function needs a declaration by seeing if we can access a
+dnl function pointer for that function.  This is done in a really ugly way
+dnl with hacks so that autoheader will pick up the defines properly; it's a
+dnl stop-gap solution until switching to autoconf 2.50.
+AC_DEFUN([INN_NEED_DECLARATION],
+[AC_MSG_CHECKING([whether $1 must be declared])
+AC_CACHE_VAL([inn_cv_decl_needed_$1],
+[AC_TRY_COMPILE(
+_INN_HEADER_SOURCE()
+[$3],
+[char *(*pfn) = (char *(*)) $1],
+[inn_cv_decl_needed_$1=no], [inn_cv_decl_needed_$1=yes])])
+if test $inn_cv_decl_needed_$1 = yes ; then
+    AC_MSG_RESULT(yes)
+    AC_DEFINE($2, 1, [Define if $1 isn't declared in the system headers.])
+else
+    AC_MSG_RESULT(no)
+fi])
+INN_NEED_DECLARATION(inet_aton, [NEED_DECLARATION_INET_ATON],
+[#include <netinet/in.h>
+#include <arpa/inet.h>])
+INN_NEED_DECLARATION(inet_ntoa, [NEED_DECLARATION_INET_NTOA],
+[#include <netinet/in.h>
+#include <arpa/inet.h>])
+INN_NEED_DECLARATION(snprintf, [NEED_DECLARATION_SNPRINTF])
+INN_NEED_DECLARATION(vsnprintf, [NEED_DECLARATION_VSNPRINTF])
+
+dnl Checks for typedefs, structures, and compiler characteristics.
+AC_C_BIGENDIAN
+AC_C_CONST
+AC_STRUCT_ST_BLKSIZE
+AC_STRUCT_TM
+AC_TYPE_SIZE_T
+AC_TYPE_UID_T
+AC_TYPE_OFF_T
+AC_TYPE_PID_T
+AC_CHECK_TYPE(ptrdiff_t, long)
+AC_CHECK_TYPE(ssize_t, int)
+
+dnl Check for ISO C99 variadic macro support in the compiler.
+AC_DEFUN([INN_C_C99_VAMACROS],
+[AC_CACHE_CHECK(for C99 variadic macros, inn_cv_c_c99_vamacros,
+[AC_TRY_COMPILE(
+[#include <stdio.h>
+#define error(...) fprintf(stderr, __VA_ARGS__)],
+[error("foo"); error("foo %d", 0); return 0;],
+[inn_cv_c_c99_vamacros=yes], [inn_cv_c_c99_vamacros=no])])
+if test $inn_cv_c_c99_vamacros = yes ; then
+    AC_DEFINE(HAVE_C99_VAMACROS, 1,
+        [Define if the compiler supports C99 variadic macros.])
+fi])
+INN_C_C99_VAMACROS
+
+dnl Check for GNU-style variadic macro support in the compiler.
+AC_DEFUN([INN_C_GNU_VAMACROS],
+[AC_CACHE_CHECK(for GNU-style variadic macros, inn_cv_c_gnu_vamacros,
+[AC_TRY_COMPILE(
+[#include <stdio.h>
+#define error(args...) fprintf(stderr, args)],
+[error("foo"); error("foo %d", 0); return 0;],
+[inn_cv_c_gnu_vamacros=yes], [inn_cv_c_gnu_vamacros=no])])
+if test $inn_cv_c_gnu_vamacros = yes ; then
+    AC_DEFINE(HAVE_GNU_VAMACROS, 1,
+        [Define if the compiler supports GNU-style variadic macros.])
+fi])
+INN_C_GNU_VAMACROS
+
+dnl Check for long long int, and define HAVE_LONG_LONG if the compiler
+dnl supports it.
+AC_DEFUN([INN_C_LONG_LONG], +[AC_CACHE_CHECK(for long long int, inn_cv_c_long_long, +[AC_TRY_COMPILE(, [long long int i;], + inn_cv_c_long_long=yes, + inn_cv_c_long_long=no)]) +if test $inn_cv_c_long_long = yes ; then + AC_DEFINE(HAVE_LONG_LONG, 1, + [Define if the compiler supports long long int.]) +fi]) +INN_C_LONG_LONG + +dnl From Paul D. Smith on the autoconf mailing list, +dnl this is a version of AC_CHECK_TYPE that allows specification of additional +dnl headers. It's a modified version of the standard autoconf macro. +AC_DEFUN([INN_CHECK_TYPE], +[AC_REQUIRE([AC_HEADER_STDC]) +AC_MSG_CHECKING(for $1) +AC_CACHE_VAL(ac_cv_type_$1, +[AC_EGREP_CPP(dnl +changequote(<<, >>)dnl +<<(^|[^a-zA-Z_0-9])$1[^a-zA-Z_0-9]>>dnl +changequote([, ]), +[#include +#ifdef STDC_HEADERS +# include +# include +#endif +$3], + ac_cv_type_$1=yes, + ac_cv_type_$1=no +)]) +AC_MSG_RESULT($ac_cv_type_$1) +if test x"$ac_cv_type_$1" = xno ; then + AC_DEFINE_UNQUOTED($1, $2) +fi]) + +INN_CHECK_TYPE(sig_atomic_t, int, [#include ]) +INN_CHECK_TYPE(socklen_t, int, [#include ]) + +dnl Source used by INN_MACRO_IOV_MAX. +define([_INN_MACRO_IOV_MAX_SOURCE], +[[#include +#include +#include +#include +#include +#ifdef HAVE_UNISTD_H +# include +#endif +#ifdef HAVE_LIMITS_H +# include +#endif + +int +main () +{ + int fd, size; + struct iovec array[1024]; + char data; + + FILE *f = fopen ("conftestval", "w"); + if (!f) return 1; +#ifdef IOV_MAX + fprintf (f, "set in limits.h\n"); +#else +# ifdef UIO_MAXIOV + fprintf (f, "%d\n", UIO_MAXIOV); +# else + fd = open ("/dev/null", O_WRONLY, 0666); + if (fd < 0) return 1; + for (size = 1; size <= 1024; size++) + { + array[size - 1].iov_base = &data; + array[size - 1].iov_len = sizeof data; + if (writev (fd, array, size) < 0) + { + if (errno != EINVAL) return 1; + fprintf(f, "%d\n", size - 1); + exit (0); + } + } + fprintf (f, "1024\n"); +# endif /* UIO_MAXIOV */ +#endif /* IOV_MAX */ + return 0; +}]]) + +dnl Check for the number of elements in an iovec (IOV_MAX). SVr4 systems +dnl appear to use that name for this limit (checked Solaris 2.6, IRIX 6.5, and +dnl HP-UX 11.00). Linux doesn't have it, but instead has UIO_MAXIOV defined +dnl in included from . The platforms that have IOV_MAX +dnl appear to also offer it via sysconf(3), but it should be a constant for a +dnl given implementation. Set IOV_MAX if it's not defined in or +dnl . +AC_DEFUN([INN_MACRO_IOV_MAX], +[AC_CACHE_CHECK([value of IOV_MAX], [inn_cv_macro_iov_max], +[AC_TRY_RUN(_INN_MACRO_IOV_MAX_SOURCE(), + inn_cv_macro_iov_max=`cat conftestval`, + inn_cv_macro_iov_max=error, 16) +if test x"$inn_cv_macro_iov_max" = xerror ; then + AC_MSG_WARN([probe failure, assuming 16]) + inn_cv_macro_iov_max=16 +fi]) +if test x"$inn_cv_macro_iov_max" != x"set in limits.h" ; then + AC_DEFINE_UNQUOTED(IOV_MAX, $inn_cv_macro_iov_max, + [Define to the max vectors in an iovec.]) +fi]) +INN_MACRO_IOV_MAX + +dnl Check for SUN_LEN (size of a Unix domain socket struct, macro required +dnl POSIX.1g but not that widespread yet). +AC_DEFUN([INN_MACRO_SUN_LEN], +[AC_CACHE_CHECK(for SUN_LEN, inn_cv_macro_sun_len, +[AC_TRY_LINK( +[#include +#include ], +[struct sockaddr_un sun; +int i; + +i = SUN_LEN(&sun);], + inn_cv_macro_sun_len=yes, + inn_cv_macro_sun_len=no)]) +if test x"$inn_cv_macro_sun_len" = xyes ; then + AC_DEFINE(HAVE_SUN_LEN, 1, + [Define if defines the SUN_LEN macro.]) +fi]) +INN_MACRO_SUN_LEN + +dnl BSD hosts have a tm_gmtoff element in struct tm containing the offset from +dnl GMT/UTC for that time. 
This is the strongly preferred way of getting time +dnl zone information. +AC_DEFUN([INN_STRUCT_TM_GMTOFF], +[AC_CACHE_CHECK(for tm_gmtoff in struct tm, inn_cv_struct_tm_gmtoff, +[AC_TRY_LINK([#include ], + [struct tm t; t.tm_gmtoff = 3600], + inn_cv_struct_tm_gmtoff=yes, + inn_cv_struct_tm_gmtoff=no)]) +if test x"$inn_cv_struct_tm_gmtoff" = xyes ; then + AC_DEFINE([HAVE_TM_GMTOFF], 1, + [Define if your struct tm has a tm_gmtoff member.]) +fi]) +INN_STRUCT_TM_GMTOFF + +dnl BSD hosts have the name of the local time zone in struct tm, which is much +dnl nicer to use than the tzname variable (and also potentially handles +dnl renamings of the time zone in the past). +AC_DEFUN([INN_STRUCT_TM_ZONE], +[AC_CACHE_CHECK(for tm_zone in struct tm, inn_cv_struct_tm_zone, +[AC_TRY_LINK([#include ], + [struct tm t; t.tm_zone = "UTC"], + inn_cv_struct_tm_zone=yes, + inn_cv_struct_tm_zone=no)]) +if test x"$inn_cv_struct_tm_zone" = xyes ; then + AC_DEFINE([HAVE_TM_ZONE], 1, + [Define if your struct tm has a tm_zone member.]) +fi]) +INN_STRUCT_TM_ZONE + +dnl Many System V hosts have an external variable timezone containing the +dnl offset of local time from GMT/UTC. We can use this for the timezone +dnl offset for current time, although it's not usable for anything else. +dnl Unfortunately, some BSD varients have a function named timezone instead. +dnl HP-UX has timezone but doesn't have altzone, which isn't good enough. +AC_DEFUN([INN_VAR_TIMEZONE], +[AC_CACHE_CHECK(for timezone variable, inn_cv_var_timezone, +[AC_TRY_LINK([#include ], [timezone = 3600; altzone = 7200], + inn_cv_var_timezone=yes, + inn_cv_var_timezone=no)]) +if test x"$inn_cv_var_timezone" = xyes ; then + AC_DEFINE([HAVE_VAR_TIMEZONE], 1, + [Define if timezone is an external variable in .]) +fi]) +INN_VAR_TIMEZONE + +dnl Many System V hosts and some BSD systems have an external variable tzname +dnl containing the abbreviations of the main and alternate time zone. We can +dnl use these as a reasonable approximation of the correct time zone names, +dnl although they could be incorrect if the time zone name has changed in the +dnl past. +AC_DEFUN([INN_VAR_TZNAME], +[AC_CACHE_CHECK(for tzname variable, inn_cv_var_tzname, +[AC_TRY_LINK([#include ], [*tzname = "UTC"], + inn_cv_var_tzname=yes, + inn_cv_var_tzname=no)]) +if test x"$inn_cv_var_tzname" = xyes ; then + AC_DEFINE([HAVE_VAR_TZNAME], 1, + [Define if tzname is an external variable in .]) +fi]) +INN_VAR_TZNAME + +dnl A modified version of AC_CHECK_SIZEOF that doesn't always AC_DEFINE, but +dnl instead lets you execute shell code based on success or failure. This is +dnl to avoid config.h clutter. +AC_DEFUN(INN_IF_SIZEOF, +[changequote(<<, >>)dnl +dnl The name to #define. +define(<>, translit(sizeof_$1, [a-z *], [A-Z_P]))dnl +dnl The cache variable name. +define(<>, translit(ac_cv_sizeof_$1, [ *], [_p]))dnl +changequote([, ])dnl +AC_MSG_CHECKING(size of $1) +AC_CACHE_VAL(AC_CV_NAME, +[AC_TRY_RUN([#include +main() +{ + FILE *f = fopen("conftestval", "w"); + if (!f) exit(1); + fprintf(f, "%d\n", sizeof($1)); + exit(0); +}], AC_CV_NAME=`cat conftestval`, AC_CV_NAME=0, +ifelse([$2], , , AC_CV_NAME=$2)) +])dnl +AC_MSG_RESULT($AC_CV_NAME) +if test x"$AC_CV_NAME" = x"$3" ; then + ifelse([$4], , :, [$4]) +else + ifelse([$5], , :, [$5]) +fi +undefine([AC_TYPE_NAME])dnl +undefine([AC_CV_NAME])dnl +]) + +dnl Find a 32 bit type, by trying likely candidates. First, check for the C9X +dnl int32_t, then look for something else with a size of four bytes. 
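+dnl (To sketch how the helper above drives this: a call such as
+dnl
+dnl   INN_IF_SIZEOF(int, 4, 4, INN_INT32=int, ...)
+dnl
+dnl compiles and runs a tiny program that prints sizeof(int) and executes
+dnl the fourth argument only when the result equals 4; otherwise the next
+dnl candidate in the chain below is tried.)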
+INN_IF_SIZEOF(int, 4, 4, INN_INT32=int, + [INN_IF_SIZEOF(long, 4, 4, INN_INT32=long, + [INN_IF_SIZEOF(short, 2, 4, INN_INT32=short)])]) +INN_CHECK_TYPE(int32_t, $INN_INT32, +[#ifdef HAVE_STDINT_H +# include +#endif +#ifdef HAVE_SYS_BITYPES_H +# include +#endif +]) + +dnl Figure out the unsigned version. +INN_CHECK_TYPE(uint32_t, unsigned $INN_INT32, +[#ifdef HAVE_STDINT_H +# include +#endif +#ifdef HAVE_SYS_BITYPES_H +# include +#endif +]) + +dnl Checks for library functions. +AC_FUNC_MEMCMP +AC_TYPE_SIGNAL + +dnl Source used by INN_FUNC_INET_NTOA +define([_INN_FUNC_INET_NTOA_SOURCE], +[#include +#include +#include +#include +#if STDC_HEADERS || HAVE_STRING_H +# include +#endif + +int +main () +{ + struct in_addr in; + in.s_addr = htonl (0x7f000000L); + return (!strcmp (inet_ntoa (in), "127.0.0.0") ? 0 : 1); +}]) + +dnl Check whether inet_ntoa is present and working. Since calling inet_ntoa +dnl involves passing small structs on the stack, present and working versions +dnl may still not function with gcc on some platforms (such as IRIX). +AC_DEFUN([INN_FUNC_INET_NTOA], +[AC_CACHE_CHECK(for working inet_ntoa, inn_cv_func_inet_ntoa_works, +[AC_TRY_RUN(_INN_FUNC_INET_NTOA_SOURCE(), + [inn_cv_func_inet_ntoa_works=yes], + [inn_cv_func_inet_ntoa_works=no], + [inn_cv_func_inet_ntoa_works=no])]) +if test "$inn_cv_func_inet_ntoa_works" = yes ; then + AC_DEFINE([HAVE_INET_NTOA], 1, + [Define if your system has a working inet_ntoa function.]) +else + LIBOBJS="$LIBOBJS inet_ntoa.${ac_objext}" +fi]) +INN_FUNC_INET_NTOA + +dnl Check whether sockaddr structs have sa_len fields +AC_DEFUN([INN_SOCKADDR_SA_LEN], +[AC_CACHE_CHECK(whether struct sockaddr has sa_len, + inn_cv_struct_sockaddr_sa_len, + [AC_TRY_COMPILE( + [#include + #include + #include ], + [struct sockaddr sa; int x = sa.sa_len;], + [inn_cv_struct_sockaddr_sa_len=yes], + [inn_cv_struct_sockaddr_sa_len=no])]) +if test "$inn_cv_struct_sockaddr_sa_len" = yes ; then + AC_DEFINE([HAVE_SOCKADDR_LEN],1, + [Define if your system has a sa_len field in struct sockaddr]) +fi]) +INN_SOCKADDR_SA_LEN + +dnl Check whether we have an SA_LEN macro available to us +AC_DEFUN([INN_SA_LEN_MACRO], +[AC_CACHE_CHECK(for SA_LEN(s) macro, inn_cv_sa_len_macro, + [AC_TRY_LINK( + [#include + #include + #include ], + [struct sockaddr sa; int x = SA_LEN(&sa);], + [inn_cv_sa_len_macro=yes], + [inn_cv_sa_len_macro=no])]) +if test "$inn_cv_sa_len_macro" = yes ; then + AC_DEFINE([HAVE_SA_LEN_MACRO],1, + [Define if your system has a SA_LEN(s) macro]) +fi]) +INN_SA_LEN_MACRO + +dnl Check to see how struct sockaddr_storage members are named. +dnl *** Called from INN_SOCKADDR_STORAGE +AC_DEFUN([INN_2553_SS_FAMILY], +[AC_CACHE_CHECK(for RFC 2553 style sockaddr_storage member names, + inn_cv_2553_ss_family, + [AC_TRY_COMPILE( + [#include + #include + #include ], + [struct sockaddr_storage ss; int x=ss.ss_family;], + [inn_cv_2553_ss_family=no], + [inn_cv_2553_ss_family=yes])]) +if test "$inn_cv_2553_ss_family" = yes ; then + AC_DEFINE([HAVE_2553_STYLE_SS_FAMILY],1, + [Define if your system has sockaddr_storage.__ss_family]) +fi]) + +dnl Check whether we have struct sockaddr_storage as defined by RFC 2553, +dnl or whether we should define it ourselves. 
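+dnl (For reference, as a paraphrase only: the structure is essentially a
+dnl sockaddr large enough and suitably aligned for any address family,
+dnl with a family member that most systems spell ss_family and that some,
+dnl following the older RFC 2553 text, spell __ss_family.  The macro
+dnl defined just above tells the two spellings apart, and when the check
+dnl below finds no such structure at all it leaves HAVE_SOCKADDR_STORAGE
+dnl undefined so INN can fall back to its own definition.)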
+AC_DEFUN([INN_SOCKADDR_STORAGE], +[AC_CACHE_CHECK(for struct sockaddr_storage, inn_cv_struct_sockaddr_storage, + [AC_TRY_COMPILE( + [#include + #include + #include ], + [struct sockaddr_storage ss;], + [inn_cv_struct_sockaddr_storage=yes], + [inn_cv_struct_sockaddr_storage=no])]) +if test "$inn_cv_struct_sockaddr_storage" = yes ; then + AC_DEFINE([HAVE_SOCKADDR_STORAGE],1, + [Define if your system has struct sockaddr_storage]) + INN_2553_SS_FAMILY +fi]) +INN_SOCKADDR_STORAGE + +dnl Source used by INN_IN6_EQ_BROKEN +dnl Test borrowed from a bug report by tmoestl@gmx.net for glibc +define([_INN_IN6_EQ_BROKEN_SOURCE], +[#include +#include +#include +#include + +int +main () +{ + struct in6_addr a; + struct in6_addr b; + + inet_pton(AF_INET6,"fe80::1234:5678:abcd",&a); + inet_pton(AF_INET6,"fe80::1234:5678:abcd",&b); + return IN6_ARE_ADDR_EQUAL(&a,&b) ? 0 : 1; +}]) + +dnl Checks whether IN6_ARE_ADDR_EQUAL macro is broken (glibc 2.1.3 is) +dnl *** only run if we're building for IPv6 (--enable-ipv6) +AC_DEFUN([INN_IN6_EQ_BROKEN], +[AC_CACHE_CHECK(whether IN6_ARE_ADDR_EQUAL macro is broken, + inn_cv_in6_are_addr_equal_broken, + [AC_TRY_RUN(_INN_IN6_EQ_BROKEN_SOURCE, + inn_cv_in6_are_addr_equal_broken=no, + inn_cv_in6_are_addr_equal_broken=yes, + inn_cv_in6_are_addr_equal_broken=no)]) +if test "$inn_cv_in6_are_addr_equal_broken" = yes ; then + AC_DEFINE([HAVE_BROKEN_IN6_ARE_ADDR_EQUAL],1, + [Define if your IN6_ARE_ADDR_EQUAL macro is broken]) +fi]) +if test "$inn_enable_ipv6_tests" = yes ; then + INN_IN6_EQ_BROKEN +fi + +dnl Source used by INN_FUNC_SNPRINTF. +define([_INN_FUNC_SNPRINTF_SOURCE], +[[#include +#include + +char buf[2]; + +int +test (char *format, ...) +{ + va_list args; + int count; + + va_start (args, format); + count = vsnprintf (buf, sizeof buf, format, args); + va_end (args); + return count; +} + +int +main () +{ + return ((test ("%s", "abcd") == 4 && buf[0] == 'a' && buf[1] == '\0' + && snprintf(NULL, 0, "%s", "abcd") == 4) ? 0 : 1); +}]]) + +dnl Check for a working snprintf. Some systems have snprintf, but it doesn't +dnl null-terminate if the buffer isn't large enough or it returns -1 if the +dnl string doesn't fit instead of returning the number of characters that +dnl would have been formatted. +AC_DEFUN([INN_FUNC_SNPRINTF], +[AC_CACHE_CHECK(for working snprintf, inn_cv_func_snprintf_works, +[AC_TRY_RUN(_INN_FUNC_SNPRINTF_SOURCE(), + [inn_cv_func_snprintf_works=yes], + [inn_cv_func_snprintf_works=no], + [inn_cv_func_snprintf_works=no])]) +if test "$inn_cv_func_snprintf_works" = yes ; then + AC_DEFINE([HAVE_SNPRINTF], 1, + [Define if your system has a working snprintf function.]) +else + LIBOBJS="$LIBOBJS snprintf.${ac_objext}" +fi]) +INN_FUNC_SNPRINTF + +dnl Check for various other functions. +AC_CHECK_FUNCS(atexit getloadavg getrlimit getrusage getspnam setbuffer \ + sigaction setgroups setrlimit setsid socketpair statvfs \ + strncasecmp strtoul symlink sysconf) + +dnl Find a way to get the file descriptor limit. +if test x"$ac_cv_func_getrlimit" = xno ; then + AC_CHECK_FUNCS(getdtablesize ulimit, break) +fi + +dnl If we don't have statvfs, gather some more information for inndf. +if test x"$ac_cv_func_statvfs" = xno ; then + AC_CHECK_FUNCS(statfs) + AC_CHECK_HEADERS(sys/vfs.h sys/mount.h) +fi + +dnl If we can't find any of the following, we have replacements for them. +AC_REPLACE_FUNCS(fseeko ftello getpagesize hstrerror inet_aton mkstemp \ + pread pwrite seteuid strcasecmp strerror strlcat strlcpy \ + strspn setenv) + +dnl Source used by INN_TYPE_FPOS_T_LARGE. 
+define([_INN_TYPE_FPOS_T_LARGE_SOURCE], +[#include +#include + +int +main () +{ + fpos_t fpos = 9223372036854775807ULL; + off_t off; + off = fpos; + exit(off == (off_t) 9223372036854775807ULL ? 0 : 1); +}]) + +dnl Check whether fpos_t is 64 bits and can be assigned to an off_t. If so, +dnl sets HAVE_LARGE_FPOS_T; this means that a missing fseeko or ftello can be +dnl emulated usint fgetpos and fsetpos. +AC_DEFUN([INN_TYPE_FPOS_T_LARGE], +[AC_CACHE_CHECK(for off_t-compatible fpos_t, inn_cv_type_fpos_t_large, +[AC_TRY_RUN(_INN_TYPE_FPOS_T_LARGE_SOURCE(), + [inn_cv_type_fpos_t_large=yes], + [inn_cv_type_fpos_t_large=no], + [inn_cv_type_fpos_t_large=no]) +if test "$inn_cv_type_fpos_t_large" = yes ; then + AC_DEFINE([HAVE_LARGE_FPOS_T], 1, + [Define if fpos_t is at least 64 bits and compatible with off_t.]) +fi])]) + +dnl If replacing fseeko or ftello, see if we can use fsetpos/fgetpos. +if test "$ac_cv_func_fseeko" = no || test "$ac_cv_func_ftello" = no ; then + INN_TYPE_FPOS_T_LARGE +fi + +dnl Source used by INN_FUNC_MMAP. +define([_INN_FUNC_MMAP_SOURCE], +[_INN_HEADER_SOURCE()] +[[#include +#include + +int +main() +{ + int *data, *data2; + int i, fd; + + /* First, make a file with some known garbage in it. Use something + larger than one page but still an odd page size. */ + data = malloc (20000); + if (!data) return 1; + for (i = 0; i < 20000 / sizeof (int); i++) + data[i] = rand(); + umask (0); + fd = creat ("conftestmmaps", 0600); + if (fd < 0) return 1; + if (write (fd, data, 20000) != 20000) return 1; + close (fd); + + /* Next, try to mmap the file and make sure we see the same garbage. */ + fd = open ("conftestmmaps", O_RDWR); + if (fd < 0) return 1; + data2 = mmap (0, 20000, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0); + if (data2 == (int *) -1) return 1; + for (i = 0; i < 20000 / sizeof (int); i++) + if (data[i] != data2[i]) + return 1; + + close (fd); + unlink ("conftestmmaps"); + return 0; +}]]) + + +dnl This portion is similar to what AC_FUNC_MMAP does, only it tests shared, +dnl non-fixed mmaps. +AC_DEFUN([INN_FUNC_MMAP], +[AC_CACHE_CHECK(for working mmap, inn_cv_func_mmap, +[AC_TRY_RUN(_INN_FUNC_MMAP_SOURCE(), + inn_cv_func_mmap=yes, + inn_cv_func_mmap=no, + inn_cv_func_mmap=no)]) +if test $inn_cv_func_mmap = yes ; then + AC_DEFINE(HAVE_MMAP) +fi]) + +dnl Source used by INN_FUNC_MMAP_NEEDS_MSYNC. +define([_INN_FUNC_MMAP_NEEDS_MSYNC_SOURCE], +[_INN_HEADER_SOURCE()] +[[#include +#include +#include + +int +main() +{ + int *data, *data2; + int i, fd; + + /* First, make a file with some known garbage in it. Use something + larger than one page but still an odd page size. */ + data = malloc (20000); + if (!data) return 1; + for (i = 0; i < 20000 / sizeof (int); i++) + data[i] = rand(); + umask (0); + fd = creat ("conftestmmaps", 0600); + if (fd < 0) return 1; + if (write (fd, data, 20000) != 20000) return 1; + close (fd); + + /* Next, try to mmap the file and make sure we see the same garbage. */ + fd = open ("conftestmmaps", O_RDWR); + if (fd < 0) return 1; + data2 = mmap (0, 20000, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0); + if (data2 == (int *) -1) return 1; + + /* Finally, see if changes made to the mmaped region propagate back to + the file as seen by read (meaning that msync isn't needed). 
*/ + for (i = 0; i < 20000 / sizeof (int); i++) + data2[i]++; + if (read (fd, data, 20000) != 20000) return 1; + for (i = 0; i < 20000 / sizeof (int); i++) + if (data[i] != data2[i]) + return 1; + + close (fd); + unlink ("conftestmmapm"); + return 0; +}]]) + +dnl Check whether the data read from an open file sees the changes made to an +dnl mmaped region, or if msync has to be called for other applications to see +dnl those changes. +AC_DEFUN([INN_FUNC_MMAP_NEEDS_MSYNC], +[AC_CACHE_CHECK(whether msync is needed, inn_cv_func_mmap_need_msync, +[AC_TRY_RUN(_INN_FUNC_MMAP_NEEDS_MSYNC_SOURCE(), + inn_cv_func_mmap_need_msync=no, + inn_cv_func_mmap_need_msync=yes, + inn_cv_func_mmap_need_msync=yes)]) +if test $inn_cv_func_mmap_need_msync = yes ; then + AC_DEFINE(MMAP_NEEDS_MSYNC, 1, + [Define if you need to call msync for calls to read to see changes.]) +fi]) + +dnl Source used by INN_FUNC_MMAP_SEES_WRITES. +define([_INN_FUNC_MMAP_SEES_WRITES_SOURCE], +[[#include +#include +#include +#include +#if HAVE_UNISTD_H +# include +#endif +#include + +/* Fractional page is probably worst case. */ +static char zbuff[1024]; +static char fname[] = "conftestw"; + +int +main () +{ + char *map; + int i, fd; + + fd = open (fname, O_RDWR | O_CREAT, 0660); + if (fd < 0) return 1; + unlink (fname); + write (fd, zbuff, sizeof (zbuff)); + lseek (fd, 0, SEEK_SET); + map = mmap (0, sizeof (zbuff), PROT_READ, MAP_SHARED, fd, 0); + if (map == (char *) -1) return 2; + for (i = 0; fname[i]; i++) + { + if (write (fd, &fname[i], 1) != 1) return 3; + if (map[i] != fname[i]) return 4; + } + return 0; +}]]) + +dnl Check if an mmaped region will see writes made to the underlying file +dnl without an intervening msync. +AC_DEFUN([INN_FUNC_MMAP_SEES_WRITES], +[AC_CACHE_CHECK(whether mmap sees writes, inn_cv_func_mmap_sees_writes, +[AC_TRY_RUN(_INN_FUNC_MMAP_SEES_WRITES_SOURCE(), + inn_cv_func_mmap_sees_writes=yes, + inn_cv_func_mmap_sees_writes=no, + inn_cv_func_mmap_sees_writes=no)]) +if test $inn_cv_func_mmap_sees_writes = no ; then + AC_DEFINE(MMAP_MISSES_WRITES, 1, + [Define if you need to call msync after writes.]) +fi]) + +dnl Check whether msync takes three arguments. (It takes three arguments on +dnl Solaris and Linux, two arguments on BSDI.) +AC_DEFUN([INN_FUNC_MSYNC_ARGS], +[AC_CACHE_CHECK(how many arguments msync takes, inn_cv_func_msync_args, +[AC_TRY_COMPILE( +[#include +#include ], + [char *p; int psize; msync (p, psize, MS_ASYNC);], + inn_cv_func_msync_args=3, + inn_cv_func_msync_args=2)]) +if test $inn_cv_func_msync_args = 3 ; then + AC_DEFINE(HAVE_MSYNC_3_ARG, 1, + [Define if your msync function takes three arguments.]) +fi]) + +dnl Now that all the tests are set up, do the work of the mmap tests. +INN_FUNC_MMAP +if test x"$inn_cv_func_mmap" = xyes ; then + AC_CHECK_FUNCS(madvise) + INN_FUNC_MMAP_SEES_WRITES + INN_FUNC_MMAP_NEEDS_MSYNC + INN_FUNC_MSYNC_ARGS +fi + +dnl If AF_UNIX is set in , assume we have Unix domain sockets. +AC_DEFUN([INN_SYS_UNIX_SOCKETS], +[AC_CACHE_CHECK([for Unix domain sockets], inn_cv_sys_unix_sockets, +[AC_EGREP_CPP(yes, +[#include +#ifdef AF_UNIX +yes +#endif], + inn_cv_sys_unix_sockets=yes, + inn_cv_sys_unix_sockets=no)]) +if test $inn_cv_sys_unix_sockets = yes ; then + AC_DEFINE(HAVE_UNIX_DOMAIN_SOCKETS, 1, + [Define if you have unix domain sockets.]) +fi]) +INN_SYS_UNIX_SOCKETS + +dnl Determine the facility for syslog messages. Default to LOG_NEWS for +dnl syslog facility if it's available, but if it's not, fall back on +dnl LOG_LOCAL1. 
--with-syslog-facility may have already set this. +AC_DEFUN([INN_LOG_FACILITY], +[AC_MSG_CHECKING(log facility for news) +AC_CACHE_VAL(inn_cv_log_facility, +[AC_EGREP_CPP(yes, +[#include +#ifdef LOG_NEWS +yes +#endif], + inn_cv_log_facility=LOG_NEWS, + inn_cv_log_facility=LOG_LOCAL1)]) +if test x"$SYSLOG_FACILITY" = xnone ; then + SYSLOG_FACILITY=$inn_cv_log_facility +fi +AC_MSG_RESULT($SYSLOG_FACILITY) +AC_DEFINE_UNQUOTED(LOG_INN_SERVER, $SYSLOG_FACILITY, + [Syslog facility to use for innd logs.]) +AC_DEFINE_UNQUOTED(LOG_INN_PROG, $SYSLOG_FACILITY, + [Syslog facility to use for INN program logs.]) +AC_SUBST(SYSLOG_FACILITY)]) +INN_LOG_FACILITY + +dnl Clean up our LIBS, just for grins. +LIBS=`echo "$LIBS" | sed 's/^ *//' | sed 's/ */ /g' | sed 's/ *$//'` + +AC_CONFIG_HEADER(include/config.h) +AC_OUTPUT( + Makefile.global + include/paths.h + samples/inn.conf + samples/innreport.conf + samples/newsfeeds + samples/sasl.conf + scripts/inncheck + scripts/innshellvars + scripts/innshellvars.pl + scripts/innshellvars.tcl + scripts/news.daily + support/fixscript + , + chmod +x support/fixscript +) + +dnl Print out some additional information on what to check. +cat < /dev/null ; then + : +else + cat <" to build any of the following programs and then copy +the binary to somewhere on your PATH to use it. For details on what each +program does, see below, as well as the comments at the beginning of each +file (if any). + +In addition to these files, also see the contrib section of the INN FTP +site at for more software designed +to work with INN. + + ------------------------- + +archivegz + + A compressing version of archive, writing out .gz files instead of + plain text files. May not work with the storage API without some + changes to use sm. + +backlogstat + + Prints informations about the current state of innfeed's backlog, if + any. + +backupfeed + + Another version of suck or pullnews that downloads posts from a remote + news server and offers them to the local news server. + +cleannewsgroups + + Performs various cleanups on the newsgroups file. + +count_overview.pl + + Counts the groups in a bunch of Xref records. + +delayer + + Sits in a data stream and delays it by some constant period of time. + Mostly useful for delaying innfeed feeds to allow cancels a chance to + remove articles before innfeed sends them to your peers. See the + beginning of the file for an example of how to use it. + +expirectl + + Automatically builds expire.ctl based on current available disk space + and a template, adjusting the expiration times of groups based on a + weight and the available space. Uses a template expire.ctl.ctl file; + see the end of expirectl.c for a sample. + +findreadgroups + + Scans the news log files and generates a file giving readership counts + by newsgroup. Used by makeexpctl and makestorconf. + +fixhist + + Performs various cleanups and sanity checks on the history database. + +innconfcheck + + Merges your inn.conf settings with the inn.conf man page to make it + easier to be sure that your settings match what you want. Edit this + script to add the correct paths to the man page; see the comments at + the beginning of this script. + +makeexpctl + + Generates an expire.ctl based on what newsgroups are actually read. + Uses data generated by findreadgroups. This script will require + editing before being usable for your server. + +makestorconf + + Generates a storage.conf file putting frequently read newsgroups into + timecaf rather than CNFS. Uses data gefnerated by findreadgroups. 
+ This script will require editing before being usable for your server. + +mkbuf + + Creates a CNFS cycbuff; see the comments at the beginning of + this script. + +mlockfile + + Locks files given on the command line into memory using mlock (only + tested on Solaris). Useful primarily for locking the history files + (history.hash and history.index) into memory on a system with + sufficient memory to speed history lookups in innd. This seems to + help some systems quite a lot and others not at all. + +newsresp + + Opens an NNTP channel to a server and takes a peek at various response + times. Can check the round-trip time and the history lookup time. + See the comments at the beginning of the source for more details. + +pullart + + Attempts to pull news articles out of CNFS cycbuffs. Useful for + emergency recoveries. + +reset-cnfs + + Clears a CNFS cycbuff; see the comments at the beginning of + this script. + +respool + + Takes a list of tokens on stdin and respools them, by retrieving the + article, storing it again, and then calling SMcancel on the previous + instance of the article. Note that after running this program, you'd + need to rebuild the history and overview, since it doesn't update + either. + +showtoken + + Decodes storage API tokens. + +stathist + + Parses and summarizes the log files created by the history profiling + code. + +thdexpire + + A dynamic expire daemon for timehash and timecaf spools. It should + be started along with innd and periodically looks if news spool space + is getting tight, and then frees space by removing articles until + enough is free. It is an adjunct to (not a replacement for) INN's + expire program. + +tunefeed + + Given two active files, attempts to produce a good set of wildmat + patterns for newsfeeds to minimize the number of rejects. For full + documentation, run "perldoc tunefeed". diff --git a/contrib/archivegz.in b/contrib/archivegz.in new file mode 100644 index 0000000..e4f06b7 --- /dev/null +++ b/contrib/archivegz.in @@ -0,0 +1,334 @@ +#!/usr/bin/perl +# Copyright 1999 Stephen M. Benoit, Service Providers of America. +# See notice at end of this file. +# +# Filename: archivegz.pl +# Author: Stephen M. Benoit (benoits@servicepro.com) +# Created: Wed Apr 14 13:56:01 1999 +# Version: $Id: archivegz.in 4329 2001-01-14 13:47:52Z rra $ +# +$RCSID='$Id: archivegz.in 4329 2001-01-14 13:47:52Z rra $ '; + +# Specify command line options, and decode the command line. + +require 'newgetopt.pl'; +require 'newusage.pl'; +@opts = + ( + "help|usage;;print this message", + "version;;print version", + "a=s;;directory to archive in instead of the default", + "f;;directory names will be flattened out", + "i=s;;append one line to the index file for each article (Destination name, Message ID, Subject)", + "m;; Files are copied by making a link. Not applicable, ignored", + "r;;Suppress stderr redirection to /var/log/news/errlog", + "n=s;;the news spool (source) directory (default=/var/spool/news/)", + "t=i;;timeout that separates batches (default 10 seconds)", + ";;input", + # Examples. + # + # "OPT;;Option without an argument", + # "OPT!;;Negatable option without an argument", + # "VAR=T;;Option with mandatory argumet T = s(tring),i(nteger), or f(loat). + # "VAR:T;;Option with optional argument. 
+ # "OPT|AAA|BBB";;AAA and BBB are aliases for OPT", + # "VAR=T@";;Push option argument onto array @opt_VAR" + ); +$ignorecase = 0; +$badopt = !&NGetOpt(&NMkOpts(@opts)); +# $badarg = (@ARGV != 0); +if ($badarg || $badopt || $opt_help) + { + &NUsage($0,0,'',@opts); + exit ($badopt||$badarg); + } +if ($opt_version) {print STDERR "$RCSID\n"; exit 0} + +# -------------------------------------------------------------------- + +# --- constants and defaults --- +$NEWS_ROOT = "/var/spool/news/"; +$NEWS_ERR = "/var/log/news/errlog"; +$NEWS_ARCHIVE = $NEWS_ROOT . "news.archive/"; +$timeout = 10; +if ($opt_t) + { $timeout = $opt_t;} +if ($timeout<1) {$timeout=1;} + +# -------------------------------------------------------------------- + +sub regexp_escape + { + local($data)=@_; + + $data =~ s+\\+\\\\+gi; # replace \ with \\ + $data =~ s+\/+\\\/+gi; # replace / with \/ + + $data =~ s/([\+\*\?\[\]\(\)\{\}\.\|])/\\$1/gi; # replace +*?[](){}.| + + return $data; + } + +sub fhbits { + local(@fhlist) = split(' ',$_[0]); + local($bits); + for (@fhlist) { + vec($bits,fileno($_),1) = 1; + } + $bits; +} + +sub timed_getline + { + my ($fileh,$timeout)=@_; + my $filehandle = (ref($fileh) + ? (ref($fileh) eq 'GLOB' + || UNIVERSAL::isa($fileh, 'GLOB') + || UNIVERSAL::isa($fileh, 'IO::Handle')) + : (ref(\$fileh) eq 'GLOB')); + local(*FILEH) = *$fileh{FILEHANDLE}; + + local($rin,$win,$ein); + local($rout,$wout,$eout); + $rin = $win = $ein = ''; + $rin = fhbits('FILEH'); + $ein = $rin | $win; + local($nfound); + local($offset)=0; + local($accum)=''; + local($done)=0; + local($result); + + $nfound = select($rout=$rin, $wout=$win, $eout=$ein, $timeout); + + if ($nfound>0) + { + + # use sysread() to get characters up to end-of-line (incl.) + while (!$done) + { + $result = sysread(FILEH, $accum, 1, $offset); + if ($result<=0) + { + $done=1; + return undef; + } + + if (substr($accum,$offset,1) eq "\n") + { + $done=1; + } + else + { + $offset+=$result; + } + } + } + return $accum; + } + +# -------------------------------------------------------------------- + +# --- source spool directory --- +if ($opt_n) + { + if ($opt_n !~ /^\//) # absolute path? + { $opt_n = $NEWS_ROOT . $opt_n; } + if ($opt_n !~ /\/$/) # must end with / + { $opt_n .= '/'; } + $NEWS_ROOT = $opt_n; + } + +# --- archive directory --- +if ($opt_a) + { + if ($opt_a !~ /^\//) # absolute path? + { $opt_a = $NEWS_ROOT . $opt_a; } + if ($opt_a !~ /\/$/) # must end with / + { $opt_a .= '/'; } + $NEWS_ARCHIVE = $opt_a; + } + +# --- redirect stderr --- +if (!$opt_r) + { + open(SAVEERR, ">&STDERR"); + open(STDERR, ">>$NEWS_ERR") || die "Can't redirect stderr"; + } + +# --- get input file opened --- +if ($infilename=shift(@ARGV)) + { + if ($infilename !~ /^\//) # absolute filename? + { + $infilename = $NEWS_ROOT . $infilename; + } + + } +else + { + $infilename="-"; + } +open(INFILE,"<$infilename"); + +$done=0; +while (!$done) + { + %sourcefile=(); + %destfile=(); + %destname=(); + + + # --- loop over each line in infile --- + # comments start with '#', ignore blank lines, each line is a filename + while ($srcfile = &timed_getline(INFILE,$timeout)) + { + if ($srcfile =~ /\#/) {$srcfile = $`;} + if ($srcfile =~ /^\s*/) {$srcfile = $';} + if ($srcfile =~ /\s*$/) {$srcfile = $`;} + if ($srcfile) # if a filename survived all that... + { + if ($srcfile !~ /^\//) # absolute filename? + { + $srcfile = $NEWS_ROOT . 
$srcfile; + } + # $srcfile is now a valid, absolute filename + # split filename into news directory, newsgroup and article number + $artnum=-1; + $remaining=$srcfile; + if ($remaining =~ /\/(\d*)$/) # remove / and article number + { $artnum = $1; $remaining=$`;} + $regex = ®exp_escape($NEWS_ROOT); + if ($remaining =~ /^$regex/) # split off news dir + { $newsdir = $&; $grpdir = $';} + else + { $newsdir = ''; $grpdir = $remaining; } # ... otherwise, grp = dir + $newsgrp = $grpdir; + $newsgrp =~ s/\//\./g; # replace slash (/) with dot (.) + if ($opt_f) + { + $grpdir = "$newsgrp.gz"; + } + else + { $grpdir .= "/archive.gz"; } + $destfile = $NEWS_ARCHIVE . $grpdir; + + # print STDERR "$srcfile --> $newsgrp --> $destfile\n"; + if ($sourcefile{$newsgrp}) {$sourcefile{$newsgrp} .= " ";} + $sourcefile{$newsgrp} .= $srcfile; + $destfile{$newsgrp} = $destfile; + $destname{$newsgrp} = $grpdir; + } + } + + # --- is there anything to do at this time? --- + if (%destfile) + { + + # --- open INDEX --- + if ($opt_i) + { + # make sure directory exists + if ($opt_i =~ /\/[^\/]*$/) + { + $dirbase=$`; + system("mkdir -p $dirbase"); + } + open(INDEX,">>$opt_i"); + } + + # --- make sure that archive file can be written (make parent dirs) --- + if ($destfile{$group} =~ /\/[^\/]*$/) + { + $dirbase=$`; + system("mkdir -p $dirbase"); + } + + # --- process each article --- + foreach $group (keys(%destfile)) + { + # --- gzip the concatenated document, appending archive file --- + open(GZIP, "|gzip -c >> $destfile{$group}") || die "Can't open gzip"; + + # --- concatenate the articles, keeping header info if needed --- + @accum_headers=(); + foreach $srcfile (split(/\s+/, $sourcefile{$group})) + { + # print STDERR "reading $srcfile...\n"; + $this_doc=''; + open(DOC, "<$srcfile"); + while ($line=) + { + $this_doc .= $line; + } + close(DOC); + print GZIP $this_doc; + if ($opt_i) + { + # --- get header information and store it in index + $subject=''; $mesageid=''; $destname=''; + if ($this_doc =~ /Subject:\s*(.*)/) + { $subject = $1; } + if ($subject =~ /^\s*/) {$subject = $';} + if ($subject =~ /\s*$/) {$subject = $`;} + if ($this_doc =~ /Message-ID:\s*(.*)/) + {$messageid = $1; } + if ($messageid =~ /^\s*/) {$messageid = $';} + if ($messageid =~ /\s*$/) {$messageid = $`;} + + print INDEX "$destname{$group} $messageid $subject\n"; + } + } + + close(GZIP); + } + + # --- close index file --- + if ($opt_i) + { + close(INDEX); + } + } + + if (!defined($srcfile)) # file was closed + { + $done=1; + last; # "break" + } + + } + +# --- restore stderr --- +if (!$opt_r) + { + close(STDERR); + open(STDERR,">>&SAVEERR"); + } + +# --- close input file --- +close(INFILE); + + +__END__ +# Local Variables: +# mode: perl +# End: + +# Copyright 1999 Stephen M. Benoit, Service Providers of America (SPA). +# +# Permission to use, copy, modify, and distribute this software and its +# documentation for any purpose without fee is hereby granted without fee, +# provided that the above copyright notice appear in all copies and that both +# that copyright notice and this permission notice appear in supporting +# documentation, and that the name of SPA not be used in advertising or +# publicity pertaining to distribution of the software without specific, +# written prior permission. SPA makes no representations about the +# suitability of this software for any purpose. It is provided "as is" +# without express or implied warranty. 
+# +# SPA DISCLAIMS ALL WARRANTIES WITH REGARD TO THIS SOFTWARE, INCLUDING ALL +# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL +# SPA BE LIABLE FOR ANY SPECIAL, INDIRECT OR CONSEQUENTIAL DAMAGES OR ANY +# DAMAGES WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN +# AN ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF +# OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE. diff --git a/contrib/auth_pass.README b/contrib/auth_pass.README new file mode 100644 index 0000000..6919fb2 --- /dev/null +++ b/contrib/auth_pass.README @@ -0,0 +1,75 @@ +This directory contains sample authorization programs for use with the +'authinfo generic' command in nnrpd. + +The first program in here is from Doug Needham I have successfully +tested this program when connecting to nnrpd by hand, but I've not +taken the time to figure out how to get my newsreader to use +'authinfo generic'. There is no Makefile here and no serious +testing of it, so it's not integrated. If you have success using +it and care to share what you've done. Please drop me a note +(). Thanks. + + +--------------------------------------------------------------------------- + +Replied: Fri, 26 Jul 1996 19:29:17 +0200 +Replied: Douglas Wade Needham +Received: by gw.home.vix.com id UAA05867; Thu, 25 Jul 1996 20:45:27 -0700 (PDT) +Received: (from dneedham@localhost) by dneedham.inhouse.compuserve.com (8.7.4/8.6.9) id XAA21103; Thu, 25 Jul 1996 23:45:25 -0400 (EDT) +From: Douglas Wade Needham +Message-Id: <199607260345.XAA21103@dneedham.inhouse.compuserve.com> +Subject: A sample program for authinfo generic (for inn 1.5) +To: inn-workers@vix.com (INN Gurus/Workers) +Date: Thu, 25 Jul 1996 23:45:25 -0400 (EDT) +Cc: inn@isc.org, brister@vix.com (James A. Brister) +X-Mailer: ELM [version 2.4 PL25] +MIME-Version: 1.0 +Content-Type: multipart/mixed; boundary=%#%record%#% +Status: U + +--%#%record%#% +Content-Type: text/plain; charset=US-ASCII +Content-Transfer-Encoding: 7bit +Content-Length: 1894 + +Hi folks... + +Finally started to get some time to clear some things from my todo list...Here +is a sample program which can be used by "authinfo generic" to validate a user +against the password file on the news host. While not a great example, it does +demonstrate how you can write an authentication program. All I ask is that +credit be given. + +A couple of notes that I have found out about these programs for those of you +who may be interested in writing your own... + +1) These programs have stdin and stdout connected all the way back to the + reader, so they can carry on a dialog in whatever fashion they want to + with the user's news reader. This can include passing Kerberos tickets, + encrypted or hashed passwords, or doing a challenge-response type session + for authenticating the user rather than passing the password in clear-text + across the network. + +2) Regardless of the outcome, the authentication program must send NNRPD a + record such as is found in nnrp.access by writing it to stderr. + +3) Successful authentication is indicated by a zero exit status, and + unsuccessful authentication is indicated by a non-zero exit status. + +4) Need I say it (again)...these programs can be a security hole unless care is + taken to avoid SUID programs and those that transmit/recieve passwords in + the clear (especially those that use login passwords). We should give some + thought to doing a similiar program for Kerberos authentication (what sort + of instance should we use???) 
and other authentication methods such as + Compuserve's Distributed Authentication (guess I should do this one once the + standard is finialized with the IETF 8) ). + +Also, a question for the list as a whole... what readers easily support +authinfo generic (including running a program at the reader's end to do things +like challenge-response)??? + +Well...here it is...enjoy 8)... + +- doug + +#### See auth_pass.c ##### diff --git a/contrib/auth_pass.c b/contrib/auth_pass.c new file mode 100644 index 0000000..7ecf0a1 --- /dev/null +++ b/contrib/auth_pass.c @@ -0,0 +1,163 @@ +/* + * auth_pass.c ( $Revision: 6141 $ ) + * + * Abstract: + * + * This module is the complete source for a sample "authinfo generic" + * program. This program takes a user's login name and password + * (supplied either as arguments or as responses to prompts) and + * validates them against the contents of the password database. + * + * If the user properly authenticates themselves, a nnrp.auth style + * record indicating the user's authenticated login and permitting + * reading and posting to all groups is output on stderr (for reading by + * nnrpd) and the program exits with a 0 status. If the user fails to + * authenticate, then a record with the attempted login name and no + * access is output on stderr and a non-zero exit status is returned. + * + * Exit statuses: + * 0 Successfully authenticated. + * 1 getpeername() failed, returned a bad address family, or + * gethostbyaddr() failed. + * 2 Entry not found in password file. + * 3 No permission to read passwords, or password field is '*'. + * 4 Bad password match. + * + * Environment: + * Run by nnrpd with stdin/stdout connected to the reader and stderr + * connected back to nnrpd. This program will need to be run as suid + * root on systems where passwords are stored in a file readable only by + * root. + * + * Written 1996 July 6 by Douglas Wade Needham (dneedham@oucsace.cs.ohiou.edu). + * + */ + +#include "config.h" +#include "clibrary.h" +#include "portable/socket.h" +#include +#include + + +main(int argc, char** argv) +/*+ + * Abstract: + * Main routine of the program, implementing all prompting, validation, + * and status returns. + * + * Arguments: + * argc Argument count. + * argv Null terminated argument vector. + * + * Returns: + * Exits according to program status values. + * + * Variables: + * hp Pointer to host entry. + * length General integer variable + * password Password given by user. + * peername Hostname of the peer. + * pwd Pointer to entry from passwd file. + * sin Socket address structure. + * username User's login name. + */ +{ + struct hostent * hp; + int length; + char password[256]; + char peername[1024]; + struct passwd * pwd; + struct sockaddr_in sin; + char username[32]; + + /* + * Get the user name and password if needed. + */ + if (argc<2) { + fprintf(stdout, "Username: "); fflush(stdout); + fgets(username, sizeof(username), stdin); + } else { + strlcpy(username, argv[1], sizeof(username)); + } + if (argc<3) { + fprintf(stdout, "Password: "); fflush(stdout); + fgets(password, sizeof(password), stdin); + } else { + strlcpy(password, argv[2], sizeof(password)); + } + + /* + * Strip CR's and NL's from the end. + */ + length = strlen(username)-1; + while (username[length] == '\r' || username[length] == '\n') { + username[length--] = '\0'; + } + length = strlen(password)-1; + while (password[length] == '\r' || password[length] == '\n') { + password[length--] = '\0'; + } + + /* + * Get the hostname of the peer. 
+ */ + length = sizeof(sin); + if (getpeername(0, (struct sockaddr *)&sin, &length) < 0) { + if (!isatty(0)) { + fprintf(stderr, "cant getpeername()::%s:+:!*\n", username); + exit(1); + } + strlcpy(peername, "stdin", sizeof(peername)); + } else if (sin.sin_family != AF_INET) { + fprintf(stderr, "Bad address family %ld::%s:+:!*\n", + (long)sin.sin_family, username); + exit(1); + } else if ((hp = gethostbyaddr((char *)&sin.sin_addr, sizeof(sin.sin_addr), AF_INET)) == NULL) { + strlcpy(peername, inet_ntoa(sin.sin_addr), sizeof(peername)); + } else { + strlcpy(peername, hp->h_name, sizeof(peername)); + } + + /* + * Get the user name in the passwd file. + */ + if ((pwd = getpwnam(username)) == NULL) { + + /* + * No entry in the passwd file. + */ + fprintf(stderr, "%s::%s:+:!*\n", peername, username); + exit(2); + } + + /* + * Make sure we managed to read in the password. + */ + if (strcmp(pwd->pw_passwd, "*")==0) { + + /* + * No permission to read passwords. + */ + fprintf(stderr, "%s::%s:+:!*\n", peername, username); + exit(3); + } + + /* + * Verify the password. + */ + if (strcmp(pwd->pw_passwd, crypt(password, pwd->pw_passwd))!=0) { + + /* + * Password was invalid. + */ + fprintf(stderr, "%s::%s:+:!*\n", peername, username); + exit(4); + } + + /* + * We managed to authenticate the user. + */ + fprintf(stderr, "%s:RP:%s:+:*\n", peername, username); + exit(0); +} diff --git a/contrib/backlogstat.in b/contrib/backlogstat.in new file mode 100644 index 0000000..70da166 --- /dev/null +++ b/contrib/backlogstat.in @@ -0,0 +1,118 @@ +#!/usr/bin/perl +# fixscript will replace this line with require innshellvars.pl + +# backlogstat - display backlog to sites +# based on bklog by bill davidsen + +# breaks if backlog-directory in innfeed.conf is not "innfeed" +my $dir = "$inn::pathspool/innfeed"; +my $Revision = '1.8'; + +use strict; +use warnings; + +use Getopt::Std; +use vars qw($opt_H $opt_h $opt_n $opt_t $opt_k $opt_S $opt_d); +$| = 1; + +# option processing +&getopts('HhntkS:d:') || &Usage; +&Usage if $opt_h; + +# open the directory; +$dir = $opt_d if $opt_d; +print "$opt_d\n"; +chdir($dir) or die "Can't cd to $dir"; +opendir(DIR, ".") or die "Can't open dir"; + +my %nodes; +while (my $name = readdir(DIR)) { + # must be a file, correct name, non-zero size + my $size; + next unless -f $name; + next unless ($size = -s $name); + next unless $name =~ m/.*\.(in|out)put/; + my $io = $1; + (my $nodename = $name) =~ s/\..*//; + + # check for only some sites wanted + next if ($opt_S && $nodename !~ /^${opt_S}.*/); + # here we do the counts if asked + if ($opt_n) { + # open the file and count lines + if (open(IN, "<$name")) { + if ($name =~ m/.*\.input/) { + my $offset = + 0; + seek(IN, $offset, 0); + } + $size = 0; + for ($size = 0; ; ++$size) {}; + close IN; + } + } else { + # get the offset on .input files + if ($name =~ m/.*\.input/ && open(IN, "<$name")) { + my $offset = + 0; + $size -= $offset; + close IN; + } + } + $nodes{$nodename} = () unless defined $nodes{$nodename}; + $nodes{$nodename}->{$io} = ( $opt_k ? $size / 1024 : $size ); +} +closedir DIR; + +# output the data for each node +if (my $numnodes = keys %nodes) { + if ($opt_H) { + if ($opt_n) { + print " <---------- posts ----------->\n"; + } else { + print " <---------- bytes ----------->\n"; + } + } + my $ofmt; + if ($opt_k) { + print " input(k) output(k) total(k) Feed Name\n" if $opt_H; + $ofmt = ( $opt_n ? 
"%10.2f" : "%10.1f" ); + } else { + print " input output total Feed Name\n" if $opt_H; + $ofmt = "%10d"; + } + for my $node (sort keys %nodes) { + my $hash = $nodes{$node}; + my $size_in = $hash->{in} || 0; + my $size_out = $hash->{out} || 0; + my $size_tot = $size_in + $size_out; + printf "${ofmt} ${ofmt} ${ofmt} %s\n", + $size_in, $size_out, $size_tot, $node; + } +} else { + print "NO backlog!\n"; +} + +exit 0; + +sub Usage +{ + print "\n" + . "bklog - print innfeed backlog info - v$Revision\n" + . "\n" + . "Format:\n" + . " bklog [ options ]\n" + . "\n" + . "Options:\n" + . " -H output a header at the top of the output\n" + . " -k scale all numbers in k (1024) units\n" + . " -n count number of arts, not bytes of backlog filesize\n" + . " Note: this may be SLOW for large files!\n" + . " -Sxx Display only site names starting with xx\n" + . " -d dir Use \"dir\" instead of \$pathspool/innfeed\n" + . "\n" + . " -h HELP - this is all, you got it!\n" + . "\n"; + + exit 1; +} + + diff --git a/contrib/backupfeed.in b/contrib/backupfeed.in new file mode 100644 index 0000000..815fd74 --- /dev/null +++ b/contrib/backupfeed.in @@ -0,0 +1,249 @@ +#! /usr/bin/perl -w +# +# Date: 26 Jun 1999 17:59:00 +0200 +# From: kaih=7Jbfpa7mw-B@khms.westfalen.de (Kai Henningsen) +# Newsgroups: news.software.nntp +# Message-ID: <7Jbfpa7mw-B@khms.westfalen.de> +# Subject: Re: Version of pullnews that support authentication? +# +# [...] +# I'm appending a script I wrote (called backupfeed.pl for some reason). Hmm +# ... oh, I hereby put that into the public domain. Use as you see fit. If +# it breaks, you get to keep all the parts. +# +# Needs the newer Net::NNTP versions for the MODE READER fix. +# +# This thing is both faster and uses far less memory than suck. And it +# inserts a predictable Path: entry (in case the host you pull from +# doesn't). +# +# It's in production use as a backup to regular feeds, so it specifically +# fetches only old articles unless you say -p 1 (default is -p 0.6666...). 
+ +use strict; +use Net::NNTP; +use DB_File; +use Data::Dumper; +use Getopt::Std; +use vars qw($Group $Host $Pos $Rc %Rc $Starttime + $opt_S $opt_T $opt_d $opt_p $opt_s $opt_t); + +my ( @groups, $localhost, $remotehost, $accepted, $rejected, $lockf, + $history, $acc, $rej, $his, @parms, $from, $to, $art, %err ); + +$| = 1; + +$opt_S = 10; # sleep between groups +$opt_T = 10000; # max running time +$opt_d = 0; # debugging +$opt_p = 2/3; # how many articles to fetch +$opt_s = 0; # sleep between articles +$opt_t = 0; # timeout for NNTP connections +getopts("dt:p:s:S:T:"); + +die <> /var/log/news/backupfeed.$Host" or die "normal log: $!"; +autoflush LOG; + +open ERR, ">> /var/log/news/backupfeed.$Host.errors" or die "error log: $!"; +autoflush ERR; + +print LOG scalar(localtime), " $0 starting for $Host\n"; +print ERR scalar(localtime), " $0 starting for $Host\n"; + +open GUP, $GroupsWanted or die "Groups Wanted: $GroupsWanted: $!"; +@groups = ; +close GUP; + +$Starttime = time; + +$localhost = Net::NNTP->new("localhost", "Debug", $opt_d, "Timeout", $opt_t, "Reader", 0) or die "localhost: $!"; + +$remotehost = Net::NNTP->new($Host, "Debug", $opt_d, "Timeout", $opt_t) or die "remotehost: $!"; +$remotehost->reader; +&lifecheck($remotehost, $Host); +$remotehost->authinfo($userid, $password) if ($userid); +&lifecheck($remotehost, $Host); + +tie %Rc, "DB_File", "$Host.bfrc" or die "$Host.bfrc: $!"; + +$SIG{HUP} = 'IGNORE'; +$SIG{INT} = \&sig; +$SIG{TERM} = \&sig; + +my $restart = $Rc{'=restart='}; +$restart='' unless ($restart); + +my @before = grep $_ lt $restart, @groups; +my @after = grep $_ ge $restart, @groups; +@groups = ( @after, @before ); + +($acc, $rej, $his) = (0, 0, 0); +foreach $Group (@groups) { + chomp $Group; + (@parms = $remotehost->group($Group)) or next; + &lifecheck($remotehost, $Host); + next if ($#parms < 3); + $Rc{'=restart='} = $Group; + print LOG scalar(localtime), " \t<$Group>\n"; + $Rc{$Group} = 0 + if (!defined $Rc{$Group}); + $Rc{$Group} = 0 + if (!$Rc{$Group}); + $from = $parms[1]; + $to = $parms[2]; + $to = $from + ($to - $from) * $opt_p; + if ($to < $Rc{$Group}) { + print LOG scalar(localtime), " \t watermark high, reset\n"; + $Rc{$Group} = $from-1; + } + $Rc{$Group} = $from-1 + if ($from > $Rc{$Group}); +# print LOG scalar(localtime), " \t\t",$Rc{$Group}+1,"-$to\n"; + $remotehost->nntpstat($Rc{$Group}+1); +# print LOG scalar(localtime), " \t\t",$remotehost->message,"\n"; + &lifecheck($remotehost, $Host); + $art = $remotehost->nntpstat; + &lifecheck($remotehost, $Host); + $remotehost->message =~ /^(\d+)/; + $Pos = $1; + $accepted=0; + $rejected=0; + $history=0; + &offer($art) + if ($art); + while ($art = $remotehost->next) { + &lifecheck($remotehost, $Host); + $remotehost->message =~ /^(\d+)/; + $Pos = $1; + last + if ($Pos > $to); + &offer($art); + } + &lifecheck($remotehost, $Host); + print LOG scalar(localtime), " \taccepted=$accepted rejected=$rejected history=$history\n"; + $acc+=$accepted; + $rej+=$rejected; + $his+=$history; + $accepted=0; + $rejected=0; + $history=0; + (tied %Rc)->sync; + sleep $opt_S if $opt_S; +} + +untie %Rc; + +$localhost->quit; + +$remotehost->quit; + +&end0; + +sub offer +{ + system("echo $Host $Group $Pos > $Host.status"); + if ($localhost->ihave($_[0])) { + &lifecheck($localhost, 'localhost'); + my $article = $remotehost->article; + if (ref $article) { + #open ART1, "> art1"; + #print ART1 @$article; + #close ART1; + my $i = 0; + while ($i <= @$article && !($$article[$i] =~ /^Path:/i)) { + $i++; + } + $$article[$i] =~ 
s/^(Path:\s*)/$1NNTP-from-$Host!/i; + #open ART2, "> art2"; + #print ART2 @$article; + #close ART2; + #exit; + $localhost->datasend($article); + if ($localhost->dataend) { + $accepted++; + } + else { + $rejected++; + $err{" local " . $localhost->code . " " . $localhost->message} ++; + } + $Rc{$Group} = $Pos; + (tied %Rc)->sync; + } + else { + $err{" remote " . $remotehost->code . " " . $remotehost->message} ++; + } + sleep $opt_s if $opt_s; + } + else { + if ($localhost->status == 4) { + if ($localhost->code == 435) { + $err{" local " . $localhost->code . " " . $localhost->message} ++; + } + else { + $err{" local " . $localhost->code . " " . $localhost->message} ++; + print LOG scalar(localtime), " local ", $localhost->code, " ", $localhost->message, "\n"; + &end; + } + } + &lifecheck($localhost, 'localhost'); + $history++; + $Rc{$Group} = $Pos; + } +} + +sub lifecheck +{ + unless (defined $_[0]->code and $_[0]->code > 0) { + print LOG scalar(localtime), " Connection to $_[1] dropped\n"; + print ERR scalar(localtime), " Connection to $_[1] dropped\n"; + &end; + } + #print "time=",time," starttime=$Starttime\n"; + kill 'TERM', $$ if time-$Starttime > $opt_T; +} + +sub sig +{ + print LOG scalar(localtime), " Caught sig: ", Data::Dumper::Dumper(@_), "\n"; + print ERR scalar(localtime), " Caught sig: ", Data::Dumper::Dumper(@_), "\n"; + &end; +} + +sub end +{ + $acc+=$accepted; + $rej+=$rejected; + $his+=$history; + &end0; +} + +sub end0 +{ + print LOG scalar(localtime), " $0 $Host accepted=$acc rejected=$rej history=$his\n"; + foreach my $e (sort keys %err) { + print ERR $err{$e}, $e, "\n"; + } + print ERR scalar(localtime), " $0 $Host accepted=$acc rejected=$rej history=$his\n"; + close LOG; + close ERR; + unlink $lockf; + exit 0; +} diff --git a/contrib/cleannewsgroups.in b/contrib/cleannewsgroups.in new file mode 100644 index 0000000..daa406b --- /dev/null +++ b/contrib/cleannewsgroups.in @@ -0,0 +1,45 @@ +#! /usr/bin/perl +# fixscript will replace this line with require innshellvars.pl + +# This script cleans the newsgroups file: +# * Groups no longer in the active file are removed. +# * Duplicate entries are removed. The last of a set of duplicates +# is the one retained. That way, you could simply append the +# new/revised entries from a docheckgroups run and then this script +# will remove the old ones. +# * Groups with no description are removed. +# * Groups matching the $remove regexp are removed. 
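+#
+# A worked illustration (hypothetical group names, not from the original):
+# given a newsgroups file containing
+#
+#   example.kept    Old one-line description.
+#   example.kept    Newer description appended by docheckgroups.
+#   example.gone    A group that is no longer in the active file.
+#   example.blank
+#
+# where only example.kept and example.blank are still listed in active, the
+# rewritten file keeps a single example.kept line carrying the newer
+# description; example.gone and example.blank are dropped.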
+ +$remove=''; +# $remove='^alt\.'; + +open ACT, $inn::active or die "Can't open $inn::active: $!\n"; +while() { + ($group) = split; + $act{$group} = 1 unless($remove ne "" && $group =~ /$remove/o); +} +close ACT; + +open NG, $inn::newsgroups or die "Can't open $inn::newsgroups: $!\n"; +while() { + chomp; + ($group, $desc) = split /\s+/,$_,2; + next unless(defined $act{$group}); + + next if(!defined $desc); + next if($desc =~ /^[?\s]*$/); + next if($desc =~ /^no desc(ription)?(\.)?$/i); + + $hist{$group} = $desc; +} +close NG; + +open NG, ">$inn::newsgroups.new" or die "Can't open $inn::newsgroups.new for write: $!\n"; +foreach $group (sort keys %act) { + if(defined $hist{$group}) { + print NG "$group\t$hist{$group}\n" or die "Can't write: $!\n"; + } +} +close NG or die "Can't close: $!\n"; + +rename "$inn::newsgroups.new", $inn::newsgroups or die "Can't rename $inn::newsgroups.new to $inn::newsgroups: $!\n"; diff --git a/contrib/count_overview.pl b/contrib/count_overview.pl new file mode 100755 index 0000000..910938e --- /dev/null +++ b/contrib/count_overview.pl @@ -0,0 +1,27 @@ +#!/usr/local/bin/perl +# +# count_overview.pl: Count the groups in a bunch of xref records. + +while (<>) { + +chop; +@xreflist = split(/\t/); # split apart record + +$_ = $xreflist[$#xreflist]; # xref is last. + +@xreflist = reverse(split(/ /)); #break part xref line. + +pop @xreflist; # get rid xref header +pop @xreflist; + +while ($current = pop @xreflist) { + ($current) = split(/:/,$current); #get newsgroup name + $groups{$current}++; #tally +} + +} + +# display accumulated groups and counts. +foreach $current (sort keys %groups) { + printf "%-50s\t%5d\n", $current, $groups{$current}; +} diff --git a/contrib/delayer.in b/contrib/delayer.in new file mode 100644 index 0000000..4528d96 --- /dev/null +++ b/contrib/delayer.in @@ -0,0 +1,71 @@ +#!/usr/bin/perl +# -*- perl -*- +# +# delay lines for N seconds. +# +# primarily meant to be used with INN to generate a delayed feed with innfeed. +# +# put it into your newsfeeds file like +# +# innfeed-delayed!\ +# :!*\ +# :Tc,Wnm*,S16384:/usr/local/news/bin/delayer 60 \ +# /usr/local/news/bin/startinnfeed -c innfeed-delayed.conf +# +# +# +# done by christian mock sometime in july 1998, +# and put into the public domain. +# +$delay = shift || die "usage: $0 delay prog-n-args\n"; + +$timeout = $delay; +$eof = 0; + +open(OUT, "|" . join(" ", @ARGV)) || die "open |prog-n-args: $!\n"; + +#select(OUT); +#$| = 1; +#select(STDOUT); + +$rin = ''; +vec($rin,fileno(STDIN),1) = 1; + +while(!$eof || $#queue >= 0) { + if(!$eof) { + ($nfound,$timeleft) = + select($rout=$rin, undef, undef, $timeout); + } else { + sleep($timeout); + } + $now = time(); $exp = $now + $delay; + + if(!$eof && vec($rout,fileno(STDIN),1)) { + $line = ; + if(!defined $line) { # exit NOW! + foreach(@queue) { + s/^[^:]+://g; + print OUT; + } + close(OUT); + sleep(1); + exit; + } + push(@queue, "$exp:$line"); + } + + if($#queue < 0) { + undef $timeout; + next; + } + + ($first, $line) = split(/:/, $queue[0], 2); + while($#queue >= 0 && $first <= $now) { + print OUT $line; + shift(@queue); + ($first, $line) = split(/:/, $queue[0], 2); + } + $timeout = $first - $now; + +} + diff --git a/contrib/expirectl.c b/contrib/expirectl.c new file mode 100644 index 0000000..2224ad7 --- /dev/null +++ b/contrib/expirectl.c @@ -0,0 +1,306 @@ +/* + * EXPIRECTL.C + * + * expirectl + * + * This program uses expire.ctl.ctl as input; please see the end of this + * file for an example of such a file. 
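+ *
+ * A worked illustration of the bracketed multiplier fields used there
+ * (numbers invented for this comment, not from the original): if the
+ * current expiration is 5 days, a template field [1.0] is written to
+ * expire.ctl as 5, while [0.5/4] computes 3 (0.5 x 5, rounded) and is
+ * then raised to its stated minimum of 4.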
+ */ + +/* + * Date: Mon, 21 Nov 1994 12:29:52 -0801 + * From: Matthew Dillon + * Message-Id: <199411212030.MAA21835@apollo.west.oic.com> + * To: rsalz@uunet.uu.net + * Subject: Re: INN is great, bug fix for BSDI + * + * [...] + * Oh, while I'm at it, I also wrote a cute program that builds the + * expire.ctl file dynamically based on available space. Feel free + * to include this in the dist (or not) as you please. + * + * Basically, the expirectl programs determines the amount of disk blocks + * and inodes free in the spool and creates a new expire.ctl file based + * on an expire.ctl.ctl template. The template specifies expiration times + * as a fraction of nominal. expirectl adjusts the nominal expiration + * up or down based on available disk space. + * + * The idea is to make expiration as hands off as possible. I tested + * it on a smaller spool and it appeared to work fine. Currently it + * only works for single-partition news spools tho. The above spool + * will not really exercise the program for another 14 days or so :-). + */ + + +#include +#include +#include +#include +#include +#include + +#define EXPIRE_CTL_DIR "/home/news" +#define NEWS_SPOOL "/home/news/spool/news/." + +#define EXPIRE_DAYS EXPIRE_CTL_DIR "/expire.days" +#define EXPIRE_CTL EXPIRE_CTL_DIR "/expire.ctl" +#define EXPIRE_CTL_CTL EXPIRE_CTL_DIR "/expire.ctl.ctl" + +void +main(int ac, char **av) +{ + struct statfs sfs; + long minFree = 100 * 1024 * 1024; + long minIFree = 20 * 1024; + long expireDays = 2; + time_t expireIncTime = time(NULL) - 24 * 60 * 60; + int modified = 0; + int verbose = 0; + + /* + * options + */ + + { + int i; + + for (i = 1; i < ac; ++i) { + char *ptr = av[i]; + + if (*ptr == '-') { + ptr += 2; + switch(ptr[-1]) { + case 'v': + verbose = 1; + break; + case 'f': + modified = 1; + break; + case 'n': + modified = -1; + break; + case 'b': + minFree = strtol(((*ptr) ? ptr : av[++i]), &ptr, 0); + if (*ptr == 'k') + minFree *= 1024; + if (*ptr == 'm') + minFree *= 1024 * 1024; + break; + case 'i': + minIFree = strtol(((*ptr) ? 
ptr : av[++i]), NULL, 0); + if (*ptr == 'k') + minIFree *= 1024; + if (*ptr == 'm') + minIFree *= 1024 * 1024; + break; + default: + fprintf(stderr, "bad option: %s\n", ptr - 2); + exit(1); + } + } else { + fprintf(stderr, "bad option: %s\n", ptr); + exit(1); + } + } + } + + if (statfs("/home/news/spool/news/.", &sfs) != 0) { + fprintf(stderr, "expirectl: couldn't fsstat /home/news/spool/news/.\n"); + exit(1); + } + + /* + * Load /home/news/expire.days + */ + + { + FILE *fi; + char buf[256]; + + if ((fi = fopen(EXPIRE_DAYS, "r")) != NULL) { + while (fgets(buf, sizeof(buf), fi) != NULL) { + if (strncmp(buf, "time", 4) == 0) { + expireIncTime = strtol(buf + 4, NULL, 0); + } else if (strncmp(buf, "days", 4) == 0) { + expireDays = strtol(buf + 4, NULL, 0); + } + } + fclose(fi); + } else { + if (modified >= 0) + modified = 1; + printf("creating %s\n", EXPIRE_DAYS); + } + } + + /* + * print status + */ + + if (verbose) { + printf("spool: %4.2lfM / %3.2lfKinode free\n", + (double)sfs.f_fsize * (double)sfs.f_bavail / (1024.0 * 1024.0), + (double)sfs.f_ffree / 1024.0 + ); + printf("decrs: %4.2lfM / %3.2lfKinode\n", + (double)(minFree) / (double)(1024*1024), + (double)(minIFree) / (double)(1024) + ); + printf("incrs: %4.2lfM / %3.2lfKinode\n", + (double)(minFree * 2) / (double)(1024*1024), + (double)(minIFree * 2) / (double)(1024) + ); + } + + /* + * Check limits, update as appropriate + */ + + { + double bytes; + long inodes; + + bytes = (double)sfs.f_fsize * (double)sfs.f_bavail; + inodes = sfs.f_ffree; + + if (bytes < (double)minFree || inodes < minIFree) { + if (--expireDays <= 0) { + expireDays = 1; + expireIncTime = time(NULL) - 24 * 60 * 60; + } + if (modified >= 0) + modified = 1; + printf("decrement expiration to %d days\n", expireDays); + } else if (bytes >= (double)minFree * 2.0 && inodes >= minIFree * 2) { + long dt = (long)(time(NULL) - expireIncTime); + + if (dt >= 60 * 60 * 24 || dt < -60) { + ++expireDays; + expireIncTime = time(NULL); + if (modified >= 0) + modified = 1; + printf("increment expiration to %d days\n", expireDays); + } else { + printf("will increment expiration later\n"); + } + } else if (verbose) { + printf("expiration unchanged: %d\n", expireDays); + } + } + + /* + * Write EXPIRE_CTL file from EXPIRE_CTL_CTL template + */ + + if (modified > 0) { + FILE *fi; + FILE *fo; + + if ((fi = fopen(EXPIRE_CTL_CTL, "r")) != NULL) { + if ((fo = fopen(EXPIRE_CTL ".tmp", "w")) != NULL) { + char sbuf[2048]; + char dbuf[4096]; + + while (fgets(sbuf, sizeof(sbuf), fi) != NULL) { + char *base = sbuf; + char *sptr; + char *dptr = dbuf; + + while ((sptr = strchr(base, '[')) != NULL) { + double d; + int m = 0; + + bcopy(base, dptr, sptr - base); + dptr += sptr - base; + base = sptr; + + d = strtod(sptr + 1, &sptr); + if (*sptr == '/') + m = strtol(sptr + 1, &sptr, 0); + if (*sptr == ']') { + long v = (long)((double)expireDays * d + 0.5); + if (v < 1) + v = 1; + if (v < m) + v = m; + sprintf(dptr, "%d", v); + dptr += strlen(dptr); + ++sptr; + } + base = sptr; + } + strcpy(dptr, base); + fputs(dbuf, fo); + } + fclose(fo); + if (rename(EXPIRE_CTL ".tmp", EXPIRE_CTL) != 0) { + fprintf(stderr, "rename(%s,%s): %s\n", + EXPIRE_CTL ".tmp", + EXPIRE_CTL, + strerror(errno) + ); + } + } + fclose(fi); + } + } + + /* + * Write EXPIRE_DAYS file + */ + + if (modified > 0) { + FILE *fo; + + if ((fo = fopen(EXPIRE_DAYS, "w")) != NULL) { + fprintf(fo, "time 0x%08lx\n", expireIncTime); + fprintf(fo, "days %d\n", expireDays); + fclose(fo); + } else { + fprintf(stderr, "unable to create %s\n", EXPIRE_DAYS); 
+ } + } + exit(0); +} + + +/* + +# Start of sample expire.ctl.ctl file. + +# EXPIRE.CTL.CTL (EXPIRE.CTL GENERATED FROM EXPIRE.CTL.CTL !!!) +# +# The expire.ctl file is generated by the expirectl program from the +# expire.ctl.ctl file. The expirectl program calculates the proper +# expiration based on the number of free inodes and free bytes available. +# +# This file is exactly expire.ctl but with the multiplier [N] replaced by +# a calculated value, where a multiplier of '1' nominally fills the whole +# disk. +# +# Any field [N] is substituted after being multiplied by the expiration +# time (in days). A integer minimum can also be specified with a slash, +# as in [N/minimum]. +# +# expirectl is normally run just after expire is run. Note that expirectl +# isn't very useful for the case where you are 'catching up' on news after +# a long period of downtime UNLESS you use the -p option to expire. + +/remember/:[1.2/20] + +## Keep for 1-10 days, allow Expires headers to work. +# +*:A:1:[1.0]:[6.0] +*.advocacy:A:1:[0.5]:[2.0] +alt.binaries.pictures.erotica:A:1:[0.8]:[2.0] + +# permanent, semi-permanent +# +best.intro:A:never:never:never +best.announce:A:5:60:120 +best.general:A:never:never:never +best.bugs:A:never:never:never + +# End of sample expire.ctl.ctl file. + +*/ diff --git a/contrib/findreadgroups.in b/contrib/findreadgroups.in new file mode 100644 index 0000000..4c5e8ff --- /dev/null +++ b/contrib/findreadgroups.in @@ -0,0 +1,38 @@ +#!/usr/local/bin/perl +# fixscript will replace this line with require innshellvars.pl + +# Keep track of which groups are currently being read. Takes logfile input +# on stdin. +$readfile="$inn::newsetc/readgroups"; + +$curtime = time; +$oldtime = $curtime - 30 * 86400; # 30 days in the past + +if (open(RDF, $readfile)) { + while () { + chop; + @foo=split(/ /); # foo[0] should be group, foo[1] lastreadtime + if ($foo[1] < $oldtime) { + next; # skip entries that are too old. + } + $groups{$foo[0]} = $foo[1]; + } + close(RDF); +} + +# read input logs. +while (<>) { + next unless /nnrpd/; + next unless / group /; + chop; + @foo = split(/ +/); + # group name is in the 8th field. 
+ $groups{$foo[7]} = $curtime; +} + +open(WRF, ">$readfile") || die "cannot open $readfile for write.\n"; +foreach $i (keys %groups) { + print WRF $i, " ", $groups{$i}, "\n"; +} + +exit(0); diff --git a/contrib/fixhist b/contrib/fixhist new file mode 100755 index 0000000..0541a00 --- /dev/null +++ b/contrib/fixhist @@ -0,0 +1,89 @@ +#!/usr/local/bin/perl +# +# history database sanity checker +# David Barr +# version 1.4 +# w/mods from: hucka@eecs.umich.edu +# Katsuhiro Kondou +# version 1.1 +# Throw away history entries with: +# malformed lines (too long, contain nulls or special characters) +# +# INN Usage: +# ctlinnd throttle 'fixing history' +# ./fixhist history.n +# makedbz -s `wc -l ) { + chop; + ($msgid,$dates,$arts,$xtra) = split('\t'); + if ($xtra) { + &tossit(); # too many fields + next; + } + if (!($dates) && (($arts) || ($xtra))) { + &tossit(); # if not date field, then the rest + next; # should be empty + } + if (length($msgid) >= $MAXKEYLEN) { + &tossit(); # message-id too long + next; + } + if ($msgid !~ /^<[^<> ]*>$/) { + if ($msgid =~ /^\[[0-9A-F]{32}\]$/) { + if ($arts ne "") { + if ($arts =~ /^\@[0-9A-F]{56}\@$/) { + $arts =~ s/^\@([0-9A-F]{36})([0-9A-F]{20})\@$/\@${1}\@/; + print "$msgid\t$dates\t$arts\n"; + next; + } + if ($arts !~ /^\@[0-9A-F]{36}\@$/) { + &tossit(); + next; + } + } + } else { + &tossit(); # malformed msg-ids + next; + } + } else { + if ($arts ne "" && ($arts !~ /[^\/]*\/[0-9]*/)) { + &tossit(); # malformed articles list + next; + } + } + if (/[\000-\010\012-\037\177-\237]/) { # non-control chars except tab + &tossit(); # illegal chars + next; + } + if ($dates) { + if ($dates =~ /[^\d~\-]/) { # rudimentary check + &tossit(); # full check would be too slow + next; + } + } + print "$_\n"; + $count++; + $0 = "history line $./$count" if $. % 50000 == 0; +} +print STDERR "Done. Now run:\nmakedbz -s $count -f history.n\n"; + +sub tossit { + print STDERR "$_\n"; +} diff --git a/contrib/innconfcheck b/contrib/innconfcheck new file mode 100755 index 0000000..83a19d0 --- /dev/null +++ b/contrib/innconfcheck @@ -0,0 +1,125 @@ +#!/bin/ksh + +### INNCONFcheck v1.1 + +### Revision history: +# v1.0 B. Galliart (designed to work with 2.3 inn.conf man page) +# v1.1 B. Galliart (optional support for using inn.conf POD src instead) + +### Description: +# This script is written to inner-mix the inn.conf settings with the +# documentation from the inn.conf man page. The concept was shamelessly +# ripped off of a CGI application provided at Mib Software's Usenet Rapid +# Knowledge Transfer (http://www.mibsoftware.com/userkt/inn2.0/). + +# The idea is that a news administrator usually must go through the +# task of reading the inn.conf man page in parallel with the inn.conf +# inn.conf to confirm that the settings are set as desired. Manually +# matching up the two files can become troublesome. This script should +# make the task easier and hopefully reduce the chance a misconfiguration +# is missed. + +### Known bugs: +# - Is very dependent on the format of the man page. It is know NOT to +# work with the inn.conf man pages written before INN 2.3 and may +# require minor rewriting to address future revisions of inn.conf +# Note: this known bug is addressed via the "EDITPOD" option below +# but is not enabled by default (details explained below). +# +# - SECURITY! While taken from the concept of a CGI script, it is not +# intended to be a CGI script itself. It is *assumed* that the +# inn.conf file is provided by a "trusted" source. 
+ +### License: this script is provided under the same terms as the majority +# of INN 2.3.0 as stated in the file "inn-2.3.0/LICENSE" + +### Warrenty/Disclaimer: There is no warrenty provided. For details, please +# refer to the file "inn-2.3.0/LICENSE" from the INN 2.3 package + + ################ + +### The User Modifiable Parameters/Settings: + +# INNCONF should be set to the actual location of the inn.conf file +INNCONF=/usr/local/news/etc/inn.conf + +# INNCONFMAN should be set to the location of the inn.conf man page +INNCONFMAN=/usr/local/news/man/man5/inn.conf.5 + +# INNCONFPOD should be set to the location of the inn.conf POD source +# INNCONFPOD=/usr/local/src/inn-2.3.0/doc/pod/inn.conf.pod +INNCONFPOD=/usr/local/news/man/man5/inn.conf.pod + +# NROFF should be set to an approbate program for formating the man page +# this could be the vendor provided nroff, the FSF's groff (which could be +# used for producing PostScript output) or Earl Hood's man2html from +# http://www.oac.uci.edu/indiv/ehood/man2html.html + +# NROFF=man2html +NROFF="nroff -man" + +# Pager should be set to an approbate binary for making the output +# readable in the user's desired method. Possible settings include +# page, more, less, ghostview, lynx, mozilla, lpr, etc. If no pager +# application is desire then by setting it to "cat" will cause the output +# to continue on to stdout. +PAGER=less + +# By default the script uses the inn.conf man page before being processed +# by nroff to edit in the actual inn.conf settings. The problem with this +# approach is that if the format of the inn.conf man page ever changes +# assumptions about the format that this script makes will probably break. +# Presently, the base/orginal format of the inn.conf man page is in perl +# POD documentation. The formating of this file is less likely to change +# in the future and is a cleaner format for automated editing. However, +# their is some disadvantages to using this file. First disadvantage, +# the POD file is not installed by INN 2.3.0 by default (see INNCONFPOD +# enviromental variable for setting the script to find the file in the +# correct location). Second disadvantage, pod2man does not appear to +# support using stdin so the edited POD must be temporarily stored as a +# file. Finally, the last disadvantage, the script is slower due to the +# added processing time of pod2man. Weighing the advantages and +# disadvantages to both approaches are left to the user. If you wish to +# have innconfcheck edit the POD file then change the variable below to +# a setting of "1", otherwise leave it with the setting of "0" +EDITPOD=0 + + ################ + +### The Script: (non-developers should not need to go beyond this point) + +# All variable settings in inn.conf should not contain a comment +# character of "#" and should have a ":" in the line. These variable names +# should then be matched up with the man page "items" in the inn.conf file. +# In the INN 2.3 man page, these items appear in the following format: +# .Ip "\fIvariable name\fR" 4 +# Hence, if there exists an entry in the inn.conf of "verifycancels: false" +# then the awk script will produce: +# s#^.Ip "\fIvarifycancels\f$" 4#.Ip "\verifycancels: false\f$" 4# +# once piped to sed, this expression will replace the man page item to +# include the setting from the inn.conf file. The nroff and pager +# applications then polish the script off to provide a documented formated +# in a way that is easier to find incorrect setting withen. 
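+# Illustration (not part of the generated sed program): for an inn.conf line
+# "verifycancels: false", the awk program below emits a sed command that
+# rewrites the man page item
+#   .Ip "\fIverifycancels\fR" 4
+# as
+#   .Ip "\fIverifycancels: false\fR" 4
+# so the formatted page shows the configured value next to its documentation.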
+ +if [ $EDITPOD -eq 0 ] ; then + + grep -v "#" $INNCONF | grep ":" | \ + awk 'BEGIN { FS = ":" } { print "s#^.Ip \042\\\\fI"$1"\\\\fR\042 4#.Ip \042\\\\fI"$0"\\\\fR\042 4#" }' | \ + sed -f - $INNCONFMAN | $NROFF | $PAGER + +else + +# The next part is similar to above but provides working from the POD source +# instead of from the resulting nroff/man page. This section is discussed +# in more detail above with the "EDITPOD" setting. + + grep -v "#" $INNCONF | grep ":" | \ + awk 'BEGIN { FS = ":" } { print "s#=item I<"$1">#=item I<"$0">#" }' | \ + sed -f - $INNCONFPOD > /tmp/innconfcheck-$$ + pod2man /tmp/innconfcheck-$$ | $NROFF | $PAGER + rm -f /tmp/innconfcheck-$$ + +fi + +# That's all. +# EOF diff --git a/contrib/makeexpctl.in b/contrib/makeexpctl.in new file mode 100644 index 0000000..320ae1f --- /dev/null +++ b/contrib/makeexpctl.in @@ -0,0 +1,76 @@ +#!/usr/local/bin/perl +# fixscript will replace this line with require innshellvars.pl + +# Create expire.ctl script based on recently read articles. Argument gives +# scale factor to use to adjust expires. + +$readfile="$inn::pathdb/readgroups"; + +$expirectl=$inn::expirectl; +if (open(RDF, $readfile)) { + while () { + chop; + @foo=split(/ /); # foo[0] should be group, foo[1] lastreadtime + if ($foo[1] < $oldtime) { + next; # skip entries that are too old. + } + $groups{$foo[0]} = $foo[1]; + } + close(RDF); +} + +$scale = $ARGV[0]; +if ($scale <= 0) { + die "invalid scale parameter\n"; +} + +rename($expirectl, "$expirectl.OLD") || die "rename $expirectl failed!\n"; +open(OUTFILE, ">$expirectl") || die "open $expirectl for write failed!\n"; + +print OUTFILE <<'EOF' ; +## expire.ctl - expire control file +## Format: +## /remember/: +## :::: +## First line gives history retention; other lines specify expiration +## for newsgroups. Must have a "*:A:..." line which is the default. +## wildmat-style patterns for the newsgroups +## Pick one of M U A -- modifies pattern to be only +## moderated, unmoderated, or all groups +## Mininum number of days to keep article +## Default number of days to keep the article +## Flush article after this many days +## , , and can be floating-point numbers or the +## word "never." Times are based on when received unless -p is used; +## see expire.8 + +# How long to remember old history entries for. +/remember/:2 +# +EOF + +# defaults for most groups. +printline("*", "A", 1); +printline("alt*,misc*,news*,rec*,sci*,soc*,talk*,vmsnet*","U",3); +printline("alt*,misc*,news*,rec*,sci*,soc*,talk*,vmsnet*","M",5); +printline("comp*,gnu*,info*,ok*,ecn*,uok*", "U", 5); +printline("comp*,gnu*,info*,ok*,ecn*,uok*", "M", 7); +# and now handle each group that's regularly read, +# assinging them 3* normal max expire +foreach $i (keys %groups) { + printline($i, "A", 21); +} +# and now put some overrides for groups which are too likely to fill spool if +# we let them go to autoexpire. 
+printline("*binaries*,*pictures*", "A", 0.5); +printline("control*","A",1); +printline("control.cancel","A",0.5); +printline("news.lists.filters,alt.nocem.misc","A",1); + +close(OUTFILE); +exit(1); + +sub printline { + local($grpstr, $mflag, $len) = @_; + print OUTFILE $grpstr,":",$mflag,":",$len*$scale,":",$len*$scale,":",$len*$scale,"\n"; +} diff --git a/contrib/makestorconf.in b/contrib/makestorconf.in new file mode 100644 index 0000000..c92bc83 --- /dev/null +++ b/contrib/makestorconf.in @@ -0,0 +1,56 @@ +#!/usr/local/bin/perl +# fixscript will replace this line with require innshellvars.pl + +# Create storage.conf script based on recently read articles. + +$readfile="$inn::pathdb/readgroups"; + +$outfile="$inn::pathdb/storage.conf"; +outloop: +for ($level=9 ; $level >= 2; --$level) { + # clear groups hash. + foreach $i (keys %groups) { + delete $groups{$i}; + } + if (open(RDF, "sort $readfile|")) { + while () { + chop; + next if (/^group/); # bogus + @foo=split(/ /); # foo[0] should be group, foo[1] lastreadtime + @bar=split(/\./,$foo[0]); + if ( $level >= scalar @bar) { + $grf = join(".", @bar); + } else { + $grf=join(".", @bar[0..($level-1)]) . ".*"; + } + $groups{$grf} = 1; + } + close(RDF); + } + $grlist = join(",",keys(%groups)); + last outloop if (length($grlist) < 2048); +} + +open(OUT, ">$outfile") || die "cant open $outfile"; +#open(OUT, ">/dev/tty"); + +print OUT <<"EOF" ; +method cnfs { + newsgroups: control,control.* + class: 1 + options: MINI +} + +method timecaf { + newsgroups: $grlist + class: 1 +} + +method cnfs { + newsgroups: * + options: MONGO + class: 0 +} +EOF +close(OUT); +exit(0); diff --git a/contrib/mkbuf b/contrib/mkbuf new file mode 100755 index 0000000..53e326e --- /dev/null +++ b/contrib/mkbuf @@ -0,0 +1,29 @@ +#!/usr/bin/perl + +sub usage { + print STDERR "Usage: $0 \n"; + exit 1; +} + +usage if(@ARGV != 2); + +$buf1k = "\0"x1024; +$buf1m = "$buf1k"x1024; + +$kb = $ARGV[0] * 1; +&usage if($kb == 0); + +if($ARGV[1] eq '-') { + open(FILE, "|cat") or die; +} else { + open(FILE, ">$ARGV[1]") or die; +} + +for($i = 0; $i+1024 <= $kb; $i+=1024) { + print FILE $buf1m or die; +} +if($i < $kb) { + print FILE "$buf1k"x($kb-$i) or die; +} + +close FILE; diff --git a/contrib/mlockfile.c b/contrib/mlockfile.c new file mode 100644 index 0000000..4bf274f --- /dev/null +++ b/contrib/mlockfile.c @@ -0,0 +1,182 @@ +/* $Id: mlockfile.c 6014 2002-12-16 11:28:07Z alexk $ */ + +/* Locks the files given on the command line into memory using mlock. + This code has only been tested on Solaris and may not work on other + platforms. + + Contributed by Alex Kiernan . 
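+
+   An illustrative invocation (file names and numbers are hypothetical, not
+   from the original): "mlockfile -f -i 30 history.dir history.hash@0:1048576"
+   locks all of history.dir plus the first megabyte of history.hash, and
+   re-checks and flushes the mappings every 30 seconds.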
*/ + +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include + +struct mlock { + const char *path; + struct stat st; + void *base; + off_t offset; + size_t length; +}; + +char *progname; + +int flush = 0; +int interval = 60000; + +void +inn_lock_files(struct mlock *ml) +{ + for (; ml->path != NULL; ++ml) { + int fd; + + fd = open(ml->path, O_RDONLY); + if (fd == -1) { + fprintf(stderr, "%s: can't open `%s' - %s\n", + progname, ml->path, strerror(errno)); + } else { + struct stat st; + + /* check if size, inode or device of the path have + * changed, if so unlock the previous file & lock the new + * one */ + if (fstat(fd, &st) != 0) { + fprintf(stderr, "%s: can't stat `%s' - %s\n", + progname, ml->path, strerror(errno)); + } else if (ml->st.st_ino != st.st_ino || + ml->st.st_dev != st.st_dev || + ml->st.st_size != st.st_size) { + if (ml->base != MAP_FAILED) + munmap(ml->base, + ml->length ? ml->length : ml->st.st_size); + + /* free everything here, so in case of failure we try + * again next time */ + ml->st.st_ino = 0; + ml->st.st_dev = 0; + ml->st.st_size = 0; + + ml->base = mmap(NULL, + ml->length ? ml->length : st.st_size, + PROT_READ, + MAP_SHARED, fd, ml->offset); + + if (ml->base == MAP_FAILED) { + fprintf(stderr, "%s: can't mmap `%s' - %s\n", + progname, ml->path, strerror(errno)); + } else { + if (mlock(ml->base, + ml->length ? ml->length : st.st_size) != 0) { + fprintf(stderr, "%s: can't mlock `%s' - %s\n", + progname, ml->path, strerror(errno)); + } else { + ml->st = st; + } + } + } else if (flush) { + msync(ml->base, ml->length ? ml->length : st.st_size, MS_SYNC); + } + } + close (fd); + } +} + +static void +usage(void) +{ + fprintf(stderr, + "usage: %s [-f] [-i interval] file[@offset[:length]] ...\n", + progname); + fprintf(stderr, " -f\tflush locked bitmaps at interval\n"); + fprintf(stderr, " -i interval\n\tset interval between checks/flushes\n"); +} + +int +main(int argc, char *argv[]) +{ + struct mlock *ml; + int i; + + progname = *argv; + while ((i = getopt(argc, argv, "fi:")) != EOF) { + switch (i) { + case 'i': + interval = 1000 * atoi(optarg); + break; + + case 'f': + flush = 1; + break; + + default: + usage(); + return EX_USAGE; + } + } + argc -= optind; + argv += optind; + + /* construct list of pathnames which we're to operate on, zero out + * the "cookies" so we lock it in core first time through */ + ml = malloc((1 + argc) * sizeof ml); + for (i = 0; argc--; ++i, ++argv) { + char *at; + off_t offset = 0; + size_t length = 0; + + ml[i].path = *argv; + ml[i].st.st_ino = 0; + ml[i].st.st_dev = 0; + ml[i].st.st_size = 0; + ml[i].base = MAP_FAILED; + + /* if we have a filename of the form ...@offset:length, only + * map in that portion of the file */ + at = strchr(*argv, '@'); + if (at != NULL) { + char *end; + + *at++ = '\0'; + errno = 0; + offset = strtoull(at, &end, 0); + if (errno != 0) { + fprintf(stderr, "%s: can't parse offset `%s' - %s\n", + progname, at, strerror(errno)); + return EX_USAGE; + } + if (*end == ':') { + at = end + 1; + errno = 0; + length = strtoul(at, &end, 0); + if (errno != 0) { + fprintf(stderr, "%s: can't parse length `%s' - %s\n", + progname, at, strerror(errno)); + return EX_USAGE; + } + } + if (*end != '\0') { + fprintf(stderr, "%s: unrecognised separator `%c'\n", + progname, *end); + return EX_USAGE; + } + } + ml[i].offset = offset; + ml[i].length = length; + } + ml[i].path = NULL; + + /* loop over the list of paths, sleeping 60s between iterations */ + for (;;) { + 
inn_lock_files(ml); + poll(NULL, 0, interval); + } + return EX_OSERR; +} diff --git a/contrib/newsresp.c b/contrib/newsresp.c new file mode 100644 index 0000000..b2931b7 --- /dev/null +++ b/contrib/newsresp.c @@ -0,0 +1,297 @@ +/* newsresp.c - EUnet - bilse */ + +/* + * From: Koen De Vleeschauwer + * Subject: Re: innfeed-users: innfeed: measuring server response time + * To: jeff.garzik@spinne.com (Jeff Garzik) + * Date: Tue, 13 May 1997 16:33:27 +0200 (MET DST) + * Cc: innfeed-users@vix.com + * + * > Is there an easy way to measure server response time, and print it out + * > on the innfeed status page? Cyclone's nntpTime measures login banner + * > response time and an article add and lookup operation. + * > + * > It seems to me that innfeed could do something very similar. It could + * > very easily sample gettimeofday() or Time.Now to determine a remote + * > server's average response time for lookups, lookup failures, article + * > send throughput, whatever. + * > + * > These statistics might be invaluable to developers creating advanced + * > connection and article delivery algorithms. If I knew, for example, + * > that a site's article send/save throughput was really fast, but history + * > lookups were really slow, my algorithm could reserve a channel or two + * > for TAKETHIS-only use. + * + * We use a stand-alone program which opens up an additional nntp channel + * from time to time and takes a peek at the various response times. + * It's also interesting to tune one's own box. + * I've included the source code; please consider this supplied 'as is'; + * bugs and features alike. SunOS, Solaris and Irix ought to be ok; + * eg. gcc -traditional -o newsresp ./newsresp.c -lnsl -lsocket on S0laris. + * If a host has an uncommonly long banner you may have to change a constant + * somewhere; forget. Please note one has to interpret the output; + * eg. whether one is measuring rtt or history lookup time. + * + * Basic usage is: + * news 1 % newsresp -n 5 news.eu.net + * --------------------------------- + * news.eu.net is 134.222.90.2 port 119 + * elap diff + * 0.0 0.0 Connecting ... + * 0.0 0.0 OK, waiting for prompt + * 0.0 0.0 <<< 200 EU.net InterNetNews server INN 1.5.1 17-Dec-1996 re [...] + * 0.0 0.0 >>> ihave <244796399@a> + * 0.0 0.0 <<< 335 + * 0.0 0.0 >>> . + * 0.0 0.0 <<< 437 Empty article + * 0.0 0.0 >>> ihave <244796398@a> + * 0.0 0.0 <<< 335 + * 0.0 0.0 >>> . + * 0.0 0.0 <<< 437 Empty article + * 0.0 0.0 >>> ihave <244796397@a> + * 0.0 0.0 <<< 335 + * 0.0 0.0 >>> . + * 0.0 0.0 <<< 437 Empty article + * 0.0 0.0 >>> ihave <244796396@a> + * 0.1 0.0 <<< 335 + * 0.1 0.0 >>> . + * 0.1 0.0 <<< 437 Empty article + * 0.1 0.0 >>> ihave <244796395@a> + * 0.1 0.0 <<< 335 + * 0.1 0.0 >>> . + * 0.1 0.0 <<< 437 Empty article + * 0.1 0.0 >>> quit + * 0.1 0.0 <<< 205 . 
+ */ + +#include +#include +#include +#include +#include +#include +#include + +#define NNTPPORT 119 +struct sockaddr_in sock_in; +int sock; +char buf[1024]; + +main(argc,argv) +int argc; +char *argv[]; +{ + int errflg = 0, c; + extern char *optarg; + extern int optind; + struct hostent *host; + unsigned long temp; + unsigned numart = 1; + struct protoent *tcp_proto; + char **whoP; + + while ( (c = getopt(argc,argv,"n:")) != -1 ) + switch ( c ) { + case 'n': sscanf(optarg,"%u",&numart); break; + default : errflg++; + } + if ( numart == 0 || optind == argc ) + errflg++; + if ( errflg ) { + fprintf(stderr,"Usage: %s [-n articles] host ...\n",argv[0]); + exit(1); + } + + if ( (tcp_proto = getprotobyname("tcp")) == 0 ) + fatal("getprotobyname"); + for ( whoP = argv+optind; *whoP != 0; whoP++ ) { + if ( (sock = socket(PF_INET,SOCK_STREAM,tcp_proto->p_proto)) < 0 ) + fatal("socket"); + temp = inet_addr(*whoP); + if ( temp != (unsigned long) -1 ) { + sock_in.sin_addr.s_addr = temp; + sock_in.sin_family = AF_INET; + } + else { + host = gethostbyname(*whoP); + if ( host ) { + sock_in.sin_family = host->h_addrtype; + memcpy(&sock_in.sin_addr,host->h_addr,host->h_length); + } + else { + fprintf(stderr,"gethostbyname can't find %s\n",*whoP); + exit(1); + } + } + sock_in.sin_port = htons(NNTPPORT); + printf("---------------------------------\n%s is %s port %d\n", + *whoP,inet_ntoa(sock_in.sin_addr),ntohs(sock_in.sin_port)); + punt(numart); + close(sock); + } +} + +error(what) +char *what; +{ + ptime(); fflush(stdout); + perror(what); +} + +fatal(what) +char *what; +{ + error(what); + exit(2); +} + +ierror(how,what) +char *how, *what; +{ + printf("Expected %s, bailing out.\n",how); +} + +ifatal(how,what) +char *how, *what; +{ + ierror(how,what); + exit(1); +} + +unsigned do_time(start) +unsigned start; +{ + struct timeval now; + + gettimeofday(&now,(struct timezone *)0); + return ( now.tv_sec*1000 + now.tv_usec/1000 - start ); +} + + +unsigned start, elapsed, diff; + +ptime() +{ + diff = elapsed; + elapsed = do_time(start); + diff = elapsed - diff; + printf("%5.1f %5.1f ",((float)elapsed)/1000.0,((float)diff)/1000.0); +} + +massagebuff(bread,buf) +int bread; +char *buf; +{ + char *p; + + if ( bread > 55 ) + strcpy(buf+55," [...]\n"); + else + buf[bread] = '\0'; + for ( p = buf; *p != '\0'; ) + if ( *p != '\r' ) /* We like to do it RISC style. 
*/ + p++; + else { + *p = ' '; + p++; + } +} + +punt(numart) +int numart; +{ + static char ihave[32], + dot[] = ".\r\n", + quit[] = "quit\r\n"; + struct timeval start_tv; + int bread; + + printf(" elap diff\n"); + diff = elapsed = 0; + gettimeofday(&start_tv,(struct timezone *)0); + start = start_tv.tv_sec*1000 + start_tv.tv_usec/1000; + + ptime(); + printf("Connecting ...\n"); + if ( connect(sock,(struct sockaddr*)&sock_in,sizeof(sock_in)) < 0 ) { + error("connect"); + return(-1); + } + ptime(); + printf("OK, waiting for prompt\n"); + + if ( (bread=read(sock,buf,sizeof(buf))) < 0 ) { + error("read socket"); + return(-1); + } + massagebuff(bread,buf); + ptime(); + printf("<<< %s",buf); + if ( strncmp(buf,"200",3) != 0 && strncmp(buf,"201",3) != 0 ) { + ierror("200 or 201",buf); + return(-1); + } + + do { + snprintf(ihave,sizeof(ihave),"ihave <%u@a>\r\n",start+numart); + ptime(); + printf(">>> %s",ihave); + if ( write(sock,ihave,strlen(ihave)) != strlen(ihave) ) { + error("write socket"); + return(-1); + } + + if ( (bread=read(sock,buf,sizeof(buf))) < 0 ) { + error("read socket"); + return(-1); + } + massagebuff(bread,buf); + ptime(); + printf("<<< %s",buf); + if ( strncmp(buf,"335",3) != 0 && strncmp(buf,"435",3) != 0 ) { + ierror("335 or 435 ",buf); + return(-1); + } + + if ( strncmp(buf,"335",3) == 0 ) { + ptime(); + printf(">>> %s",dot); + if ( write(sock,dot,sizeof(dot)-1) != sizeof(dot)-1 ) { + error("write socket"); + return(-1); + } + + if ( (bread=read(sock,buf,sizeof(buf))) < 0 ) { + error("read socket"); + return(-1); + } + massagebuff(bread,buf); + ptime(); + printf("<<< %s",buf); + if ( strncmp(buf,"437",3) != 0 && strncmp(buf,"235",3) != 0 ) { + ierror("437 or 235",buf); + return(-1); + } + } + } while ( --numart != 0 ); + + ptime(); + printf(">>> %s",quit); + if ( write(sock,quit,sizeof(quit)-1) != sizeof(quit)-1 ) { + error("write socket"); + return(-1); + } + + if ( (bread=read(sock,buf,sizeof(buf))) < 0 ) { + error("read socket"); + return(-1); + } + massagebuff(bread,buf); + ptime(); + printf("<<< %s",buf); + if ( strncmp(buf,"205",3) != 0 ) { + ierror("205",buf); + return(-1); + } + return(0); +} diff --git a/contrib/pullart.c b/contrib/pullart.c new file mode 100644 index 0000000..6b06099 --- /dev/null +++ b/contrib/pullart.c @@ -0,0 +1,297 @@ +/* +June 14, 1999 + +Recover text articles from cyclic buffers +Articles start with "\0Path:" +and end with "\r\n.\r\n" + +Tested with INND 2.2 under AIX 4.2 + +rifkin@uconn.edu +*/ +/* +(1) Pull 16 bytes at a time +(2) Last 7 bytes must be \000\000\000Path +(3) When found, print "\nPath"; +(4) print subsequent bytes until \r\n.\r\n found +*/ + +#include "config.h" +#include +#include +#include + +#define INFILE 1 +#define FILEPREFIX 2 +#define HEADER 3 +#define STRING 4 + +/* String buffer size */ +#define NBUFF 512 + +#define MAX_ART_SIZE 2200000 + + +#define WRITEMSG printf ("File %s line %i\n", __FILE__, __LINE__); \ + fflush(stdout); + +#define WRITEVAR(VAR_NAME,VAR_TYPE) \ + { \ + printf ("FILE %s LINE %i :", __FILE__, __LINE__); \ + printf ("%s = ", #VAR_NAME); \ + printf (#VAR_TYPE, (VAR_NAME) ); \ + printf ("\n"); \ + } + +#define WRITETXT(TEXT) \ + printf ("FILE %s LINE %i \"%s\"\n", __FILE__, __LINE__, TEXT); \ + fflush(stdout); + +#if 0 +#define WRITEMSG +#define WRITEVAR(X,Y) +#endif + + +int WriteArticle (char *, int, char *, char *, char *, int); + + +char ArtHead[7] = {0, 0, 0, 'P', 'a', 't', 'h'}; +char ArtTail[5] = {'\r', '\n', '.', '\r', '\n'}; +int LenTail = 5; + +int main (int argc, char *argv[]) + { + FILE 
*Infile; + int NumTailCharFound; + bool ReadingArticle = false; + char buffer[32]; + char *obuffer = NULL; + char *header = NULL; + char *string = NULL; + int osize = MAX_ART_SIZE; + int opos = 0; + int i; + int nchar; + int fileno = 0; + int artno = 0; + + /* Check number of args */ + if (argc<3) + { + printf ("Usage: pullart [
]\n");
 printf (" Read cycbuffer <cycbuff> and print all articles whose\n");
 printf (" article header <header> contains <string>.\n");
 printf (" Articles are written to files named <fileprefix>.nnnnnn\n");
 printf (" where nnnnnn is numbered sequentially from 0.\n");
 printf (" If <header>
and not specified, all articles\n"); + printf (" are written.\n"); + printf (" Examples:\n"); + printf (" pullart /news3/cycbuff.3 alt.rec Newsgroup: alt.rec\n"); + printf (" pullart /news3/cycbuff.3 all\n"); + printf (" pullart firstbuff article Subject bluejay\n"); + return 0; + } + + /* Allocate output buffer */ + obuffer = (char *) calloc (osize+1, sizeof(char)); + if (obuffer==NULL) + { + printf ("Cannot allocate obuffer[]\n"); + return 1; + } + + + /* Open input file */ + Infile = fopen (argv[INFILE], "rb"); + if (Infile==NULL) + { + printf ("Cannot open input file.\n"); + return 1; + } + + +if (argc>=4) header = argv[HEADER]; +if (argc>=5) string = argv[STRING]; +if (*header=='\0') header=NULL; +if (*string=='\0') string=NULL; + +/*test*/ +printf ("filename <%s>\n", argv[INFILE]); +printf ("fileprefix <%s>\n", argv[FILEPREFIX]); +printf ("header <%s>\n", header); +printf ("string <%s>\n", string); + + + /* Skip first 0x38000 16byte buffers */ + i = fseek (Infile, 0x38000L, SEEK_SET); + + /* Read following 16 byte buffers */ + ReadingArticle = false; + NumTailCharFound = 0; + nchar=0; + artno=0; + while ( 0!=fread(buffer, 16, 1, Infile) ) + { + + nchar+=16; + + /* Found start of article, start writing to obuffer */ + if (0==memcmp(buffer+9, ArtHead, 7)) + { + ReadingArticle = true; + memcpy (obuffer, "Path", 4); + opos = 4; + continue; + } + + /* Currnetly reading article */ + if (ReadingArticle) + { + for (i=0; i<16; i++) + { + + /* Article too big, drop it and move on */ + if (opos>=osize) + { + printf + ("article number %i bigger than buffer size %i.\n", + artno+1, osize); + artno++; + ReadingArticle=false; + break; + } + + /* Add current character to output buffer, but remove \r */ + if ('\r' != buffer[i]) + obuffer[opos++] = buffer[i]; + + /* Check for article ending sequence */ + if (buffer[i]==ArtTail[NumTailCharFound]) + { + NumTailCharFound++; + } + else + NumTailCharFound=0; + + /* End found, write article, reset for next */ + if (NumTailCharFound==LenTail) + { + ReadingArticle = false; + NumTailCharFound = 0; + + /* Add trailing \0 to buffer */ + obuffer[opos+1] = '\0'; + + fileno += WriteArticle + (obuffer, opos, argv[FILEPREFIX], + header, string, fileno); + artno++; + break; + } + } + + } + + } + + close (Infile); + + return 0; + } + + + +/* +Writes article stored in buff[] if it has a +"Newsgroups:" header line which contains *newsgroup +Write to a file named fileprefix.fileno +*/ +int +WriteArticle +(char *buff, int n, char *fileprefix, char *headerin, char *string, int fileno) + { + char *begptr; + char *endptr; + char *newsptr; + char savechar; + char header[NBUFF]; + char filename[NBUFF]; + FILE *outfile; + + + /* Prevent buffer overflow due to fileprefix too long */ + if (strlen(fileprefix)>384) + { + printf + ("program error: cannot have file prefix greater then 384 characters\n"); + exit(1); + } + + /* + Is header here? Search if header string requested, leave if not found + */ + if (headerin!=NULL) + { + /* Find \nHEADER */ + strlcpy(header, "\n", sizeof(header)); + strlcat(header, headerin, sizeof(header)); + + begptr = strstr (buff, header); + + /* return if Header name not found */ + if (begptr==NULL) + { + return 0; + } + + /* + Header found. What about string? 
+ Search if string requested, leave if not found + */ + if (string!=NULL) + { + /* Find end of header line */ + begptr++; + endptr = strchr (begptr, '\n'); + + /* Something is wrong, end of header not found, do not write + * article + */ + if (endptr==NULL) + return 0; + + /* Temporarily make string end a null char */ + savechar = *endptr; + *endptr = '\0'; + newsptr = strstr (begptr, string); + + /* Requested newsgroup not found */ + if (newsptr==NULL) + return 0; + + /* Restore character at end of header string */ + *endptr = savechar; + } + /* No string specified */ + + } + /* No header specified */ + + /* Open file, write buffer, close file */ + snprintf (filename, sizeof(filename), "%s.%06i", fileprefix, fileno); + + outfile = fopen (filename, "wt"); + if (outfile==NULL) { + printf ("Cannot open file name %s\n", filename); + exit(1); + } + + while (n--) + fprintf (outfile, "%c", *buff++); + + close (outfile); + + /* Return number of files written */ + return 1; + } diff --git a/contrib/reset-cnfs.c b/contrib/reset-cnfs.c new file mode 100644 index 0000000..18bb334 --- /dev/null +++ b/contrib/reset-cnfs.c @@ -0,0 +1,56 @@ +/* Quick and Dirty Hack to reset a CNFS buffer without having to DD the + * Entire Thing from /dev/zero again. */ +#include +#include +#include +#include + +#include + +/* uncomment the below for LARGE_FILES support */ +/* #define LARGE_FILES */ + +int main(int argc, char *argv[]) +{ + int fd; + int i, j; + char buf[512]; +#ifdef LARGE_FILES + struct stat64 st; +#else + struct stat st; +#endif + int numwr; + + bzero(buf, sizeof(buf)); + for (i = 1; i < argc; i++) { +#ifdef LARGE_FILES + if ((fd = open(argv[i], O_LARGEFILE | O_RDWR, 0664)) < 0) +#else + if ((fd = open(argv[i], O_RDWR, 0664)) < 0) +#endif + fprintf(stderr, "Could not open file %s: %s\n", argv[i], strerror(errno)); + else { +#ifdef LARGE_FILES + if (fstat64(fd, &st) < 0) { +#else + if (fstat(fd, &st) < 0) { +#endif + fprintf(stderr, "Could not stat file %s: %s\n", argv[i], strerror(errno)); + } else { + /* each bit in the bitfield is 512 bytes of data. Each byte + * has 8 bits, so calculate as 512 * 8 bytes of data, plus + * fuzz. buf has 512 bytes in it, therefore containing data for + * (512 * 8) * 512 bytes of data. */ + numwr = (st.st_size / (512*8) / sizeof(buf)) + 50; + printf("File %s: %u %u\n", argv[i], st.st_size, numwr); + for (j = 0; j < numwr; j++) { + if (!(j % 100)) + printf("\t%d/%d\n", j, numwr); + write(fd, buf, sizeof(buf)); + } + } + close(fd); + } + } +} diff --git a/contrib/respool.c b/contrib/respool.c new file mode 100644 index 0000000..71ed2bd --- /dev/null +++ b/contrib/respool.c @@ -0,0 +1,95 @@ +/* +** Refile articles into the storage manager under the current storage.conf +** rules, deleting articles from their old place in the spool. +** Written 10-09-99 by rmtodd@servalan.servalan.com +** +** Note that history and overview will have to be rebuilt for the moved +** articles to be visible after they're moved. +*/ + +/* include foo needed by libinn/storage manager */ +#include "config.h" +#include "clibrary.h" +#include + +#include "inn/innconf.h" +#include "inn/qio.h" +#include "libinn.h" +#include "paths.h" +#include "storage.h" + +char *ME; + +static void +ProcessLine(char *line) +{ + char *tokenptr; + int len; + ARTHANDLE *art; + ARTHANDLE newart; + TOKEN token, newtoken; + char *arttmp; + time_t arrived; + + tokenptr = line; + + /* zap newline at end of tokenptr, if present. 
*/ + len = strlen(tokenptr); + if (tokenptr[len-1] == '\n') { + tokenptr[len-1] = '\0'; + } + + token = TextToToken(tokenptr); + if ((art = SMretrieve(token, RETR_ALL)) == NULL) return; + + len = art->len; + arrived = art->arrived; + arttmp = xmalloc(len); + memcpy(arttmp, art->data, len); + SMfreearticle(art); + if (!SMcancel(token)) { + fprintf(stderr, "%s: cant cancel %s:%s\n", ME, tokenptr, SMerrorstr); + return; + } + + newart.data = arttmp; + newart.len = len; + newart.arrived = (time_t) 0; /* set current time */ + newart.token = (TOKEN *)NULL; + + newtoken = SMstore(newart); + if (newtoken.type == TOKEN_EMPTY) { + fprintf(stderr, "%s: cant store article:%s\n", ME, SMerrorstr); + return; + } + free(arttmp); + printf("refiled %s ",TokenToText(token)); + printf("to %s\n", TokenToText(newtoken)); + return; +} + +int +main(int argc UNUSED, char *argv[]) +{ + bool one = true; + char buff[SMBUF]; + + ME = argv[0]; + + if (!innconf_read(NULL)) + exit(1); + + if (!SMsetup(SM_PREOPEN, &one) || !SMsetup(SM_RDWR, (void *)&one)) { + fprintf(stderr, "can't init storage manager"); + exit(1); + } + if (!SMinit()) { + fprintf(stderr, "Can't init storage manager: %s", SMerrorstr); + } + while (fgets(buff, SMBUF, stdin)) { + ProcessLine(buff); + } + printf("\nYou will now need to rebuild history and overview for the moved" + "\narticles to be visible again.\n"); + exit(0); +} diff --git a/contrib/sample.init.script b/contrib/sample.init.script new file mode 100644 index 0000000..104ca48 --- /dev/null +++ b/contrib/sample.init.script @@ -0,0 +1,18 @@ +#!/sbin/sh + +# This is a simple, bare-bones example of a SysV-style init.d script for INN. + +case $1 in + +start) + su news -c /usr/local/news/bin/rc.news + ;; + +stop) + su news -c '/usr/local/news/bin/rc.news stop' + ;; + +esac + +exit 0 + diff --git a/contrib/showtoken.in b/contrib/showtoken.in new file mode 100644 index 0000000..4f513ce --- /dev/null +++ b/contrib/showtoken.in @@ -0,0 +1,95 @@ +#!/usr/bin/perl -w +# showtoken - decode SM tokens +# Olaf Titz, 1999. Marco d'Itri, 2000. Public domain. +# Takes tokens on stdin and write them along with a decoded form on stdout. 
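+#
+# A minimal usage sketch (the spool path and file name below are only
+# illustrations):
+#
+#   showtoken /usr/local/news/spool/articles < tokens.txt
+#
+# where tokens.txt holds one "@...@" storage token per line and the optional
+# argument is the spool directory, used only to resolve tradspool tokens via
+# tradspool.map.  Each token is echoed back followed by its decoded fields.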
+ +use strict; + +my ($pathspool, %NG); + +my @types = ('trash', '', 'timehash', 'cnfs', 'timecaf', 'tradspool'); + +if ($ARGV[0]) { + $pathspool = $ARGV[0]; + if (open(MAP, "$pathspool/tradspool.map")) { + while () { + my ($ng, $gnum) = split; + $NG{$gnum} = $ng; + } + close MAP; + } +} + +$| = 1; +while () { + chomp; + next if not /^@.+@/; + print "$_ "; + splittoken($_); +} + +sub splittoken { + my $t = shift; + + $t =~ tr/@//d; + $t = pack('H*', $t); + my ($type, $class, $token, $index, $offset, $overlen, $cancelled) = + unpack('C C a16 CLnc', $t); + + if (not $types[$type]) { + print "type=$type unknown!\n"; + next; + } + print "type=$types[$type] class=$class "; + + if ($type == 0) { # trash + } elsif ($type == 2) { # timehash + my ($time, $seq) = unpack('Nn', $token); + my ($a, $b, $c, $d) = unpack('CCCC', $token); + printf 'time=%08lX seq=%04X file=time-%02x/%02x/%02x/%04x-%02x%02x', + $time, $seq, $class, $b, $c, $seq, $a, $d; + } elsif ($type == 3) { # cnfs + my ($buffn, $offset, $cnum) = unpack('A8NN', $token); + printf 'buffer=%s offset=%x cycnum=%x', $buffn, $offset * 512, $cnum; + } elsif ($type == 4) { # timecaf + my ($time, $seq) = unpack('Nn', $token); + my (undef, $b, $c, $d) = unpack('CCCC', $token); + printf 'time=%06lX seq=%04X caf=timecaf-%02x/%02x/%02x%02x.CF', + $time, $seq, $class, $c, $b, $d; + } elsif ($type == 5) { # tradspool + my ($gnum, $art) = unpack('NN', $token); + printf 'ng=%08X art=%d', $gnum, $art; + print "file=articles/$NG{$gnum}/$art" if $NG{$gnum}; + } else { + die "invalid type $type"; + } + print " over=$index offset=$offset overlen=$overlen cancelled=$cancelled" + if length $t > 36; + print "\n"; +} +__END__ +# Format of a token: +# 1 type +# 1 class +# 16 token +# 1 index +# 4 offset +# 2 overlen +# 2 cancelled +# The fields "index" and following are not available with OV3 (INN 2.3 up) +# +# the "token" field is: +# for type=0 (trash) ignored +# for type=2 (timehash) +# 4 time +# 2 seqnum +# for type=3 (cnfs) +# 8 cycbuffname +# 4 offset/512 +# 4 cycnum +# for type=4 (timecaf) +# 4 time +# 2 seqnum +# for type=5 (tradspool) +# 4 ngnum +# 4 artnum diff --git a/contrib/stathist.in b/contrib/stathist.in new file mode 100644 index 0000000..c34c911 --- /dev/null +++ b/contrib/stathist.in @@ -0,0 +1,79 @@ +#!/usr/bin/perl -w + +# Parse log files created by innd history profiler +# 2001/01/29 - Fabien Tassin + +use strict; +use FileHandle; + +my $file = shift || "stathist.log"; +if ($file eq '-h' || $file eq '--help') { + print "Usage: stathist [logfile]\n"; + exit 0; +} + +sub parse { + my $file = shift; + + my $f = new FileHandle $file; + unless (defined $f) { + print STDERR "Can't open file: $!\n"; + return {}; + } + my $data = {}; + my $begin = 1; + my @stack = (); + while (defined (my $line = <$f>)) { + next if $begin && $line !~ / HIS(havearticle|write|setup) begin/; + $begin = 0; + chomp $line; + my @c = split /[\[\]\(\) ]+/, $line; + ($c[4] eq 'begin') && do { + push @stack, $c[3]; + my $d = $data; + for my $l (@stack) { + unless (defined $$d{$l}) { + $$d{$l}{'min'} = 1E10; + $$d{$l}{'total'} = $$d{$l}{'count'} = $$d{$l}{'max'} = 0; + } + $d = $$d{$l} + } + } || + ($c[4] eq 'end') && do { + my $d = $data; + for my $l (@stack) { + $d = $$d{$l}; + } + $$d{'count'}++; + $$d{'total'} += $c[5]; + $$d{'min'} = $c[5] if $$d{'min'} > $c[5]; + $$d{'max'} = $c[5] if $$d{'max'} < $c[5]; + pop @stack; + }; + } + $f->close; + $data; +} + +sub report { + my $data = shift; + my $inc = shift; + + unless (defined $inc) { + printf "%-16s %10s %14s %10s %10s 
%10s\n\n", "Function", "Invoked", + "Total(s)", "Min(ms)", "Avg(ms)", "Max(ms)"; + $inc = 0; + } + + for my $key (sort keys %$data) { + next unless $key =~ m/^HIS/; + printf "%-16s %10d %14.6f %10.3f %10.3f %10.3f\n", (' ' x $inc) . $key, + $$data{$key}{'count'}, $$data{$key}{'total'}, $$data{$key}{'min'} * 1000, + $$data{$key}{'total'} / $$data{$key}{'count'} * 1000, + $$data{$key}{'max'} * 1000; + &report($$data{$key}, $inc + 1) + } +} + +my $data = &parse($file); +&report($data); diff --git a/contrib/thdexpire.in b/contrib/thdexpire.in new file mode 100644 index 0000000..93b09bf --- /dev/null +++ b/contrib/thdexpire.in @@ -0,0 +1,647 @@ +#!/usr/bin/perl -w +# fixscript will replace this line with require innshellvars.pl +$ID='$Id: thdexpire.in 4572 2001-02-24 22:31:05Z rra $$'; + +use POSIX ":fcntl_h"; +use SDBM_File; +use Getopt::Std; + +# With the -M switch this program installs its own man page. +#----------------------------------------------------------------------------- + +=head1 NAME + +thdexpire - dynamic expire daemon for timehash and timecaf storage + +=head1 SYNOPSIS + +B +[ B<-t> I ] +[ B<-f> I ] +[ B<-i> I ] +[ B<-m> I ] +[ B<-x> I ] +[ B<-N> ] +[ B<-v> I ] + +B + +=head1 DESCRIPTION + +This is a daemon, to be started along with B, which periodically +looks if news spool space is getting tight, and frees space by removing +articles until enough is free. It is an adjunct (not a replacement) to +INNs B program. + +=head2 Setting Up + +=over 4 + +=item 1. + +Configure your storage classes carefully. Let the default go in class +100 and choose the storage classes as relative (percent) retention +times. E.g. if you want to give C a fifth of the +default time, put them in class 20. Storage classes above 200 are +ignored by this program. 0 expires immediately. An example is given +in L<"EXAMPLES">. + +=item 2. + +Set up your F in a way that it puts only a maximum cap on +retention times. Run B from B as usual. However, +it should only expire articles which have an Expires line or are in +classes above 200. See L<"EXAMPLES">. + +=item 3. + +Ensure to start this daemon along with B. + +=item 4. + +To get information and statistics, run B (in parallel to +a running daemon). This will show you the current actual retention +times. + +=back + +=head2 How It Works + +B works directly on the spool. It assumes the layout +described in the timehash and timecaf sections of L as of +INN-2.x-CURRENT (Dec. 5, 1998). For every storage class associated +with timehash/timecaf, B keeps a I which is the +modification time of the oldest article/CAF file in this class. This +time is chosen so that the difference of the work time of class N to +now (i.e. the I for class N) will be N/100 of the +retention time of class 100. The work time of all classes is +continuously adjusted as time goes by. Articles and CAF files which +are older than the work time are deleted. + +=head1 OPTIONS + +=over 8 + +=item B<-t> I + +Check for free space every I minutes (default 30). + +=item B<-f> I + +Leave I kilobytes of free disk space on each spool +filesystem (default 50000). + +=item B<-i> I + +Leave I inodes free on each spool filesystem (default 5000). + +=item B<-m> I + +Set the minimum normal holding time for class 100 to I days +(default 7). + +=item B<-x> I + +Set the absolute minimum holding time for any article to I +seconds (default 86400, i.e. 1 day). + +=item B<-N> + +Do not delete any articles, just print what would be done. + +=item B<-v> I + +Set the verbosity level. 
Values from 1 to 3 are meaningful, where +higher levels are mostly for debugging. + +=item B<-r> + +Do not run as a daemon, instead print a report from the database (see +L) on the available storage classes, current expire times and +other stuff. + +=back + +=head1 EXAMPLES + +Here is an example F file: + + # Large postings in binary groups are expired fast: + # 20% retention time + method timehash { + newsgroups: *.binaries.*,*.binaer.*,*.dateien.*,alt.mag.* + size: 30000 + class: 20 + } + + # Local groups and *.answers groups don't expire at all with + # thdexpire. These are handled by Expires lines and a cutoff + # in expire.ctl. + method timehash { + newsgroups: *.answers,news.announce.*,local.* + class: 201 + } + + # Expires lines are honored if they dont exceed 90 days. + # Exempt those postings from thdexpire handling. + method timehash { + newsgroups: * + expires: 1d,90d + class: 202 + } + + # Default: should be class 100 because thdexpire bases its + # calculations thereupon. + method timecaf { + newsgroups: * + class: 100 + } + +And here is an F which fits: + + # Our local groups are held 6 months + local.*:A:7:180:180 + # Everything else is handled by thdexpire, or Expires lines + *:A:7:never:never + +Note that B does not actually use these files, they just +configure other parts of the news system in an appropriate way. + +=head1 FILES + +=over 4 + +=item Finn::pathdbE/thdexpstat.{dir,pag}> + +Holds state information like classes, expire times, oldest articles. +When this file is missing, it will be rebuilt the next time the daemon +is started, which basically means scanning the spool directories to +find the oldest articles. With the B<-r> option, the contents of this +file are printed. + +=item Finn::innddirE/thdexpire.pid> + +Contains the PID of the running daemon. + +=back + +=head1 SIGNALS + +I or I can be sent to the daemon at any time, causing +it to gracefully exit immediately. + +=head1 SEE ALSO + +L, L, L + +=head1 NOTES + +This version needs the B program supplied with newer releases of INN. + +The filenames for timecaf were wrong in older versions of the INN +documentation. This program uses the true filenames, as found by +reading the INN source. + +=head1 DIAGNOSTICS + +Any error messages are printed on standard error. Normal progress +messages, as specified by the B<-v> option, are printed on standard +output. + +=head1 BUGS + +Storage classes which are in I but not on disk (i.e. +which have never been filed into) when the daemon starts are ignored. + +The code is ugly and uses too many global variables. +Should probably rewrite it in C. + +=head1 RESTRICTIONS + +Directories which are left empty are not removed. + +The overview database is not affected by B, it has to be +cleaned up by the daily regular B run. This may need a +patch to B. + +=head1 AUTHOR + +Olaf Titz . Use and distribution of this work is +permitted under the same terms as the B package. + +=head1 HISTORY + +Inspired by the old B program for the traditional spool. + +June 1998: wrote the first version for timehash. + +November 1998: added code for timecaf, works on multiple spool +filesystems, PODed documentation. + +July 1999: bugfixes. 
+ +=cut + +#----------------------------------------------------------------------------- + +chdir $inn::spool || die "chdir $inn::spool: $!"; +$opt_r=0; # make a report +$opt_t=30; # check interval in minutes +$opt_f=50000; # required space in kilobytes +$opt_i=5000; # required space in inodes +$opt_m=7; # minimum normal (class 100) time in days +$opt_x=86400; # absolute minimum hold time in seconds +$opt_N=0; # dont actually delete articles +$opt_v=0; # verbosity level +$opt_M=0; # install man page +getopts("rt:f:i:m:x:Nv:M"); + +$_=$inn::pathdb; $_=$inn::pathnews; # shut up warning +$sfile="$inn::pathdb/thdexpstat"; +$ID=~/ ([^,]+,v [^ ]+)/; $ID=$1; + +if ($opt_M) { + print "Installing thdexpire(8) man page\n"; + $0=~m:^(.*)/([^/]+)$:; + chdir $1 || die "chdir $1"; + exec "pod2man --section=8 --center='Contributed News Software'" . + " --release='$ID' $2 >$inn::pathnews/man/man8/thdexpire.8"; +} + +if ($opt_r) { + tie(%S, SDBM_File, $sfile, O_RDONLY, 0664) || die "open $sfile: $!"; + &report; + untie %S; + exit 0; +} + +(system "shlock", "-p", $$, "-f", "$inn::innddir/thdexpire.pid")>>8==0 + || die "Already running"; +tie(%S, SDBM_File, $sfile, O_RDWR|O_CREAT, 0664) || die "open $sfile: $!"; +$SIG{'TERM'}=$SIG{'INT'}='finish'; +$|=1; +printf "%s starting at %s\n", $ID, &wtime(time) if ($opt_v>0); + +undef @c; +$NOW=time; $ac=$cc=0; +opendir(CD, ".") || &err("opendir $inn::spool: $!"); +while ($cd=readdir(CD), defined($cd)) { + $cd=~/^time(caf)?-([0-9a-f][0-9a-f])$/i || next; + $c{hex($2)}=1 unless hex($2)>200; +} +closedir CD; +@classes=sort {$a<=>$b} keys %c; +foreach $c (@classes) { + &initclass($c); + $S{"work$;$c"}=$S{"oldest$;$c"}&0xFFFFFF00; +} + +$S{"classes"}=join(",", @classes); +$S{"inittime"}=time; +$S{"ID"}=$ID; +printf "Checked %d articles, %d CAFs in %d seconds\n", $ac, $cc, time-$NOW + if ($ac+$cc>0 && $opt_v>0); + +chdir $inn::spool || die "chdir $inn::spool: $!"; +while (1) { + $S{"lastrun"}=$NOW=time; + printf "%s\n", &wtime($NOW) if ($opt_v>0); + $nt=0; + foreach $c (@classes) { + $t=($NOW-$S{"work$;$c"})*100/$c; + $nt=$t if ($nt<$t); + } + printf "Normal time (class 100): %s\n", &xtime($NOW-$nt) + if ($opt_v>0); + if ($nt<$opt_m*24*60*60) { + printf " capped at minimum %d days\n", $opt_m + if ($opt_v>0); + $nt=$opt_m*24*60*60; + } + if ($nt>180*24*60*60) { + print " capped at maximum 180 days\n" + if ($opt_v>0); + $nt=180*24*60*60; + } + $S{"normaltime"}=$nt; + $decrement=$opt_t*60; + $pass=$need=0; + $x="/"; + undef %needk; undef %needi; + foreach $c (@classes) { + $Dart{$c}=$Dcaf{$c}=$Dkb{$c}=$Dino{$c}=0; + $y=sprintf("time-%02x", $c); + if (-d $y) { + @S=stat(_); + if ($#S>=0) { + $dev{$y}=$S[0]; + unless (defined($needk{$S[0]})) { + $x.=" $y"; + $needk{$S[0]}=$needi{$S[0]}=-1; + } + } + } + $y=sprintf("timecaf-%02x", $c); + if (-d $y) { + @S=stat(_); + if ($#S>=0) { + $dev{$y}=$S[0]; + unless (defined($needk{$S[0]})) { + $x.=" $y"; + $needk{$S[0]}=$needi{$S[0]}=-1; + } + } + } + } + if (open(D, "inndf $x |")) { + while () { + @S=split(/\s+/, $_); + $needk{$dev{$S[0]}}=$opt_f-$S[1] unless ($S[0] eq "/"); + } + close D; + } + if (open(D, "inndf -i $x |")) { + while () { + @S=split(/\s+/, $_); + $needi{$dev{$S[0]}}=$opt_i-$S[1] unless ($S[0] eq "/"); + } + close D; + } + foreach $c (keys %needk) { + printf "Device %d needs to free %d kilobytes, %d inodes\n", + $c, $needk{$c}<0?0:$needk{$c}, $needi{$c}<0?0:$needi{$c} + if ($opt_v>0 && ($needk{$c}>0 || $needi{$c}>0)); + if ($needk{$c}>0 || $needi{$c}>0) { + ++$need; + } + } + if ($opt_v>0 && $need<=0) { + print " 
(nothing to do)\n"; + $tt=0; + } else { + $error=0; + while (!$error && $need>0) { + if ($S{"normaltime"}-$decrement<$opt_m*24*60*60) { + print " Normal time hit minimum\n" if ($opt_v>0); + last; + } + $S{"normaltime"}-=$decrement; + printf " normal time (100) becomes %ld\n", $S{"normaltime"} + if ($opt_v>2); + ++$pass; + $Dart=$Dcaf=$Dkb=$Dino=$need=0; + foreach $c (keys %needk) { + if ($needk{$c}>0 || $needi{$c}>0) { + ++$need; + } + } + if ($need) { + foreach $c (@classes) { + &worktime($c, $NOW-($S{"normaltime"}*$c/100)); + $Dart+=$dart; $Dcaf+=$dcaf; $Dkb+=$dbb>>10; $Dino+=$dino; + $Dart{$c}+=$dart; $Dcaf{$c}+=$dcaf; + $Dkb{$c}+=$dbb>>10; $Dino{$c}+=$dino; + last if ($error); + } + } + if ($Dart+$Dcaf) { + printf " pass %d deleted %d arts, %d CAFs, %d kb\n", + $pass, $Dart, $Dcaf, $Dkb if ($opt_v>1); + $decrement-=$decrement>>2 if ($decrement>10*60); + } else { + $decrement+=$decrement>>1 if ($decrement<4*60*60); + } + } + $Dkb=$Dart=$Dcaf=$Dino=0; + foreach $c (@classes) { + printf " class %3d: deleted %6d arts %6d CAFs %10d kb\n", + $c, $Dart{$c}, $Dcaf{$c}, $Dkb{$c} if ($opt_v>1); + $Dkb+=$Dkb{$c}; $Dart+=$Dart{$c}; $Dcaf+=$Dcaf{$c}; + } + $tt=time-$NOW; + printf " deleted %d articles, %d CAFs, %d kb in %d seconds\n", + $Dart, $Dcaf, $Dkb, time-$NOW if ($opt_v>0); + if ($tt>$opt_t*60) { + printf STDERR "Round needed %d seconds, interval is %d\n", + $tt, $opt_t*60; + $tt=$opt_t*60; + } + } + sleep $opt_t*60-$tt; +} +&finish(0); + + +sub initclass +{ + my $C=shift; + if (!$S{"blocksize$;$C$;CAF"}) { + # Determine filesystem blocksize + # unfortunately no way in perl to statfs + my $x=sprintf("%s/timecaf-%02x/test%d", $inn::spool, $C, $$); + if (open(A, ">$x")) { + print A "X" x 4096; + close A; + @S=stat $x; + $#S>=12 || die "stat: $!"; + if ($S[12]) { + $S{"blocksize$;$C$;CAF"}=$S[7]/$S[12]; + } else { + $S{"blocksize$;$C$;CAF"}=512; + warn "hack around broken stat blocksize"; + } + unlink $x; + } + } + return if ($S{"oldest$;$C"}); + my $oldest=time; + $S{"oldest$;$C"}=$oldest; + my $base=sprintf("%s/time-%02x", $inn::spool, $C); + my $count=0; + if (chdir $base) { + printf "Finding oldest in class %d (%s)\n", $C, $base if ($opt_v>0); + opendir(D0, "."); + while ($d1=readdir(D0), defined($d1)) { + $d1=~/^[0-9a-f][0-9a-f]$/ || next; + chdir $d1; + opendir(D1, ".") || next; + while ($d2=readdir(D1), defined($d2)) { + $d2=~/^[0-9a-f][0-9a-f]$/ || next; + chdir $d2; + opendir(D2, ".") || next; + while ($a=readdir(D2), defined($a)) { + $a=~/^\./ && next; + @S=stat($a); + $oldest=$S[9] if ($S[9]<$oldest); + ++$count; + } + closedir D2; + chdir ".."; + } + closedir D1; + chdir ".."; + } + closedir D0; + $ac+=$count; + } + $base=sprintf("%s/timecaf-%02x", $inn::spool, $C); + if (chdir $base) { + printf "Finding oldest in class %d (%s)\n", $C, $base if ($opt_v>0); + opendir(D0, "."); + while ($d1=readdir(D0), defined($d1)) { + $d1=~/^[0-9a-f][0-9a-f]$/ || next; + chdir $d1; + opendir(D1, ".") || next; + while ($a=readdir(D1), defined($a)) { + $a=~/^\./ && next; + @S=stat($a); + $oldest=$S[9] if ($S[9]<$oldest); + ++$count; + } + closedir D1; + chdir ".."; + } + closedir D0; + $cc+=$count; + } + $S{"count$;$C"}=$count; + $S{"oldest$;$C"}=$oldest; +} + +sub worktime +{ + my $C=shift; + my $goal=shift; + $goal&=0xFFFFFF00; + printf " goal for class %d becomes %s\n", $C, &xtime($goal) + if ($opt_v>2); + if ($goal>$NOW-$opt_x) { + printf " goal for class %d cut off\n", $C + if ($opt_v>1); + $error=1; + return; + } + $dart=$dcaf=$dbb=$dino=0; + $hdir=sprintf("time-%02x", $C); + 
$cdir=sprintf("timecaf-%02x", $C); + while (($_=$S{"work$;$C"})<$goal) { + printf " running: %08x\n", $_ if ($opt_v>2); + ($aa,$bb,$cc) = (($_>>24)&0xFF, ($_>>16)&0xFF, ($_>>8)&0xFF); + $dir=sprintf("%s/%02x/%02x", $hdir, $bb, $cc); + $pat=sprintf("[0-9a-f]{4}-%02x[0-9a-f]{2}", $aa); + if (opendir(D, $dir)) { + while ($_=readdir(D), defined($_)) { + /^$pat$/ || next; + $art="$dir/$_"; + @S=stat($art); + if ($#S>=7) { + if ($opt_N) { + print " would delete $art" if ($opt_v>2); + } else { + print " deleting $art" if ($opt_v>2); + unlink $art; + } + ++$dart; ++$dino; + printf " %d kb\n", $S[7]>>10 if ($opt_v>2); + $dbb+=$S[7]; + $needk{$dev{$hdir}}-=$S[7]>>10; + $needi{$dev{$hdir}}--; + } + } + } else { + printf " (no dir %s)\n", $dir if ($opt_v>2); + } + $caf=sprintf("%s/%02x/%02x%02x.CF", $cdir, $bb, $aa, $cc); + @S=stat($caf); + if ($#S>=12) { + if ($opt_N) { + print " would delete $caf" if ($opt_v>2); + } else { + print " deleting $caf" if ($opt_v>2); + unlink $caf; + } + $y=0; + if (open(C, $caf)) { + # try to find how much there is in the CAF + sysread(C, $_, 16); + @C=unpack("a4LLL", $_); + if ($C[0] eq "CRMT") { + $y=$C[3]-$C[1]; + $dart+=$y; + } + close C; + } + ++$dcaf; ++$dino; + if ($S[12]) { + $x=$S[12]*$S{"blocksize$;$C$;CAF"}; + } else { + $x=$S[7]; + warn "hack around broken stat blocksize"; + } + printf " %d arts %d kb\n", $y, $x>>10 if ($opt_v>2); + $dbb+=$x; + $needk{$dev{$cdir}}-=$x>>10; + $needi{$dev{$cdir}}--; + } + $S{"work$;$C"}+=0x100; + $S{"oldest$;$C"}=$S{"work$;$C"} unless ($opt_N); + } +} + +sub report +{ + $NOW=time; + my $cc=$S{"classes"}; + my $nt=$S{"normaltime"}; + unless ($cc && $nt) { + print "Not initialized.\n"; + return; + } + printf "Version: %s (this: %s)\n", $S{"ID"}, $ID; + printf "Started at: %s\n", &xtime($S{"inittime"}) if ($S{"inittime"}); + printf "Last run: %s\n", &xtime($S{"lastrun"}) if ($S{"lastrun"}); + printf "Classes: %s\n", $cc; + foreach $c (split(/,/, $cc)) { + printf "Class %d:\n", $c; + #printf " Initial count %d articles\n", $S{"count$;$c"}; + printf " Oldest article: %s\n", &xtime($S{"oldest$;$c"}); + printf " Expiring at: %s\n", &xtime($S{"work$;$c"}); + printf " Normal time: %s\n", &xtime($NOW-$nt*$c/100); + printf " Filesystem block size (CAF): %d\n", $S{"blocksize$;$c$;CAF"}; + } +} + +sub wtime +{ + my $t=shift; + my @T=localtime($t); + sprintf("%04d-%02d-%02d %02d:%02d", + $T[5]+1900, $T[4]+1, $T[3], $T[2], $T[1]); +} + +sub xtime +{ + my $t=shift; + if ($NOW-$t<0 || $NOW-$t>350*24*60*60) { + return &wtime($t); + } + my @T=localtime($t); + my @D=gmtime($NOW-$t); + sprintf("%04d-%02d-%02d %02d:%02d (%dd %dh %dm)", + $T[5]+1900, $T[4]+1, $T[3], $T[2], $T[1], + $D[7], $D[2], $D[1]); +} + +sub err +{ + printf STDERR "%s\n", shift; + &finish(0); +} + +sub finish +{ + untie(%S); + unlink "$inn::innddir/thdexpire.pid"; + exit 0; +} +#----------------------------------------------------------------------------- diff --git a/contrib/tunefeed.in b/contrib/tunefeed.in new file mode 100644 index 0000000..52616ae --- /dev/null +++ b/contrib/tunefeed.in @@ -0,0 +1,474 @@ +#!/usr/bin/perl +$version = q$Id: tunefeed.in 4329 2001-01-14 13:47:52Z rra $; +# +# tunefeed -- Compare active files with a remote site to tune a feed. +# Copyright 1998 by Russ Allbery +# +# This program is free software; you can redistribute it and/or modify it +# under the same terms as Perl itself. 
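+#
+# An invocation sketch (file names here are purely illustrative):
+#
+#   tunefeed our.active their.active traffic.txt > their.pattern
+#
+# compares the two active files (optionally weighting groups by the traffic
+# file) and prints a newsfeeds wildcard pattern for the feed on stdout.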
+ +############################################################################ +# Site configuration +############################################################################ + +# A list of hierarchies in the Big Eight. +%big8 = map { $_ => 1 } qw(comp humanities misc news rec sci soc talk); + +# A list of hierarchies that are considered global and not language +# hierarchies. +%global = map { $_ => 1 } qw(bionet bit biz borland ddn gnu gov ieee info + linux k12 microsoft netscape tnn vmsnet); + +# The pattern matching local-only hierarchies (that we should disregard when +# doing feed matching). +%ignore = map { $_ => 1 } qw(clari control junk); + + +############################################################################ +# Modules and declarations +############################################################################ + +require 5.003; + +use Getopt::Long qw(GetOptions); + +use strict; +use vars qw(%big8 $days %global %ignore $threshold %traffic $version); + + +############################################################################ +# Active file hashing and analysis +############################################################################ + +# Read in an active file, putting those groups into a hash where the key is +# the name of the group and the value is always 1. If the optional third +# argument is true, exclude any groups in the hierarchies listed in %local +# and use this active file to store traffic information (in a rather +# simple-minded fashion). +sub hash { + my ($file, $hash, $local) = @_; + open (ACTIVE, $file) or die "$0: cannot open $file: $!\n"; + local $_; + while () { + my ($group, $high, $low, $flags) = split; + next if ($flags =~ /^=|^x/); + my $hierarchy = (split (/\./, $group, 2))[0]; + next if ($local && $ignore{$hierarchy}); + $$hash{$group} = 1; + $traffic{$group} = ($high - $low) / $days if $local; + } + close ACTIVE; +} + +# Read in a file that gives traffic statistics. We assume it's in the form +# group, whitespace, number of articles per day, and we just read it +# directly into the %traffic hash. +sub traffic { + my ($file) = @_; + open (TRAFFIC, $file) or die "$0: cannot open $file: $!\n"; + local $_; + while () { + my ($group, $traffic) = split; + $traffic{$group} = $traffic; + } + close TRAFFIC; +} + +# Pull off the first X nodes of a group name. +sub prefix { + my ($group, $count) = @_; + my @group = split (/\./, $group); + splice (@group, $count); + join ('.', @group); +} + +# Find the common hierarchical prefix of a list. +sub common { + my (@list) = @_; + my @prefix = split (/\./, shift @list); + local $_; + while (defined ($_ = shift @list)) { + my @group = split /\./; + my $i; + $i++ while ($prefix[$i] && $prefix[$i] eq $group[$i]); + if ($i <= $#prefix) { splice (@prefix, $i) } + } + join ('.', @prefix); +} + +# Given two lists, a list of groups that the remote site does have and a +# list of groups that the remote site doesn't have, in a single hierarchy, +# perform a smash. The object is to find the minimal pattern that expresses +# just the groups they want. We're also given the common prefix of all the +# groups in the have and exclude lists, and a flag indicating whether we're +# coming in with a positive assumption (all groups sent unless excluded) or +# a negative assumption (no groups sent unless added). +sub smash { + my ($have, $exclude, $top, $positive) = @_; + my (@positive, @negative); + my $level = ($top =~ tr/././) + 1; + + # Start with the positive assumption. 
We make copies of our @have and + # @exclude arrays since we're going to be needing the virgin ones again + # later for the negative assumption. If we're coming in with the + # negative assumption, we have to add a wildcarded entry to switch + # assumptions, and we also have to deal with the cases where there is a + # real group at the head of the hierarchy. + my @have = @$have; + my @exclude = @$exclude; + if ($top eq $have[0]) { + shift @have; + push (@positive, "$top*") unless $positive; + } else { + if ($top eq $exclude[0]) { + if ($positive && $traffic{$top} > $threshold) { + push (@positive, "!$top"); + } + shift @exclude; + } + push (@positive, "$top.*") unless $positive; + } + + # Now that we've got things started, keep in mind that we're set up so + # that every group will be sent *unless* it's excluded. So we step + # through the list of exclusions. The idea here is to pull together all + # of the exclusions with the same prefix (going one level deeper into + # the newsgroup names than we're currently at), and then find all the + # groups with the same prefix that the remote site *does* want. If + # there aren't any, then we can just exclude that whole prefix provided + # that we're saving enough traffic to make it worthwhile (checked + # against the threshold). If there are, and if the threshold still + # makes it worthwhile to worry about this, we call this sub recursively + # to compute the best pattern for that prefix. + while (defined ($_ = shift @exclude)) { + my ($prefix) = prefix ($_, $level + 1); + my @drop = ($_); + my @keep; + my $traffic = $traffic{$_}; + while ($exclude[0] =~ /^\Q$prefix./) { + $traffic += $traffic{$exclude[0]}; + push (@drop, shift @exclude); + } + $prefix = common (@drop); + my $saved = $traffic; + while (@have && $have[0] le $prefix) { shift @have } + while ($have[0] =~ /^\Q$prefix./) { + $traffic += $traffic{$have[0]}; + push (@keep, shift @have); + } + next unless $saved > $threshold; + if (@keep) { + $traffic{"$prefix*"} = $traffic; + push (@positive, smash (\@keep, \@drop, $prefix, 1)); + } elsif (@drop == 1) { + push (@positive, "!$_"); + } elsif ($prefix eq $_) { + push (@positive, "!$prefix*"); + } else { + push (@positive, "!$prefix.*"); + } + } + + # Now we do essentially the same thing, but from the negative + # perspective (adding a wildcard pattern as necessary to make sure that + # we're not sending all groups and then finding the groups we are + # sending and trying to smash them into minimal wildcard patterns). + @have = @$have; + @exclude = @$exclude; + if ($top eq $exclude[0]) { + shift @exclude; + push (@negative, "!$top*") if $positive; + } else { + if ($top eq $have[0]) { + push (@negative, $top) unless $positive; + shift @have; + } + push (@negative, "!$top.*") if $positive; + } + + # This again looks pretty much the same as what we do for the positive + # case; the primary difference is that we have to make sure that we send + # them every group that they want, so we still err on the side of + # sending too much, rather than too little. 
+ while (defined ($_ = shift @have)) { + my ($prefix) = prefix ($_, $level + 1); + my @keep = ($_); + my @drop; + my $traffic = $traffic{$_}; + while ($have[0] =~ /^\Q$prefix./) { + $traffic += $traffic{$have[0]}; + push (@keep, shift @have); + } + $prefix = common (@keep); + while (@exclude && $exclude[0] le $prefix) { shift @exclude } + my $saved = 0; + while ($exclude[0] =~ /^\Q$prefix./) { + $saved += $traffic{$exclude[0]}; + push (@drop, shift @exclude); + } + if (@drop && $saved > $threshold) { + $traffic{"$prefix*"} = $traffic + $saved; + push (@negative, smash (\@keep, \@drop, $prefix, 0)); + } elsif (@keep == 1) { + push (@negative, $_); + } elsif ($prefix eq $_) { + push (@negative, "$prefix*"); + } else { + push (@negative, "$prefix.*"); + } + } + + # Now that we've built both the positive and negative case, we decide + # which to return. We want the one that's the most succinct, and if + # both descriptions are equally succinct, we return the negative case on + # the grounds that it's likely to send less of what they don't want. + (@positive < @negative) ? @positive : @negative; +} + + +############################################################################ +# Output +############################################################################ + +# We want to sort Big Eight ahead of alt.* ahead of global non-language +# hierarchies ahead of regionals and language hierarchies. +sub score { + my ($hierarchy) = @_; + if ($big8{$hierarchy}) { return 1 } + elsif ($hierarchy eq 'alt') { return 2 } + elsif ($global{$hierarchy}) { return 3 } + else { return 4 } +} + +# Our special sort routine for hierarchies. It calls score to get a +# hierarchy score and sorts on that first. +sub by_hierarchy { + (score $a) <=> (score $b) || $a cmp $b; +} + +# Given a reference to a list of patterns, output it in some reasonable +# form. Currently, this is lines prefixed by a tab, with continuation lines +# like INN likes to have in newsfeeds, 76 column margin, and with a line +# break each time the hierarchy score changes. +sub output { + my ($patterns) = @_; + my ($last, $line); + for (@$patterns) { + my ($hierarchy) = /^!?([^.]+)/; + my $score = score $hierarchy; + $line += 1 + length $_; + if (($last && $score > $last) || $line > 76) { + print ",\\\n\t"; + $line = 8 + length $_; + } elsif ($last) { + print ','; + } else { + print "\t"; + $line += 8; + } + print; + $last = $score; + } + print "\n"; +} + + +############################################################################ +# Main routine +############################################################################ + +# Clean up the name of this program for error messages. +my $fullpath = $0; +$0 =~ s%.*/%%; + +# Parse the command line. Our argument is the path to an active file (we +# tell the difference by seeing if it contains a /). +my ($help, $print_version); +Getopt::Long::config ('bundling'); +GetOptions ('help|h' => \$help, + 'days|d=i' => \$days, + 'threshold|t=i' => \$threshold, + 'version|v' => \$print_version) or exit 1; + +# Set a default for the minimum threshold traffic required to retain an +# exclusion, and assume that active file differences represent one day of +# traffic unless told otherwise. +$threshold = (defined $threshold) ? $threshold : 250; +$days ||= 1; + +# If they asked for our version number, abort and just print that. +if ($print_version) { + my ($program, $ver) = (split (' ', $version))[1,2]; + $program =~ s/,v$//; + die "$program $ver\n"; +} + +# If they asked for help, give them the documentation. 
+if ($help) { + print "Feeding myself to perldoc, please wait....\n"; + exec ('perldoc', '-t', $fullpath) or die "$0: can't fork: $!\n"; +} + +# Hash the active files, skipping groups we ignore in the local one. Make +# sure we have our two files listed first. +unless (@ARGV == 2 || @ARGV == 3) { + die "Usage: $0 [-hv] [-t ] []\n"; +} +my (%local, %remote); +hash (shift, \%local, 1); +hash (shift, \%remote); +traffic (shift) if @ARGV; + +# Now, we analyze the differences between the two feeds. We're trying to +# build a pattern of what *we* should send *them*, so stuff that's in +# %remote and not in %local doesn't concern us. Rather, we're looking for +# stuff that we carry that they don't, since that's what we'll want to +# exclude from a full feed. +my (%have, %exclude, %count, $have, $exclude, $positive); +for (sort keys %local) { + my ($hierarchy) = (split /\./); + $count{$hierarchy}++; + $traffic{"$hierarchy*"} += $traffic{$_}; + if ($remote{$_}) { push (@{$have{$hierarchy}}, $_); $have++ } + else { push (@{$exclude{$hierarchy}}, $_); $exclude++ } +} +my @patterns; +if ($have > $exclude * 4) { + push (@patterns, "*"); + $positive = 1; +} +for (sort by_hierarchy keys %count) { + if ($have{$_} && !$exclude{$_}) { + push (@patterns, "$_.*") unless $positive; + } elsif ($exclude{$_} && !$have{$_}) { + push (@patterns, "!$_.*") if $positive; + } else { + push (@patterns, smash ($have{$_}, $exclude{$_}, $_, $positive)); + } +} +output (\@patterns); +__END__ + + +############################################################################ +# Documentation +############################################################################ + +=head1 NAME + +tunefeed - Build a newsgroups pattern for a remote feed + +=head1 SYNOPSIS + +B [B<-hv>] [B<-t> I] [B<-d> I] I +I [I] + +=head1 DESCRIPTION + +Given two active files, B generates an INN newsfeeds pattern for +a feed from the first site to the second, that sends the second site +everything in its active file carried by the first site but tries to +minimize the number of rejected articles. It does this by noting +differences between the two active files and then trying to generate +wildcard patterns that cover the similarities without including much (or +any) unwanted traffic. + +I and I should be standard active files. You can probably +get the active file of a site that you feed (provided they're running INN) +by connecting to their NNTP port and typing C. + +B makes an effort to avoid complex patterns when they're of +minimal gain. I is the number of messages per day at which to +worry about excluding a group; if a group the remote site doesn't want to +receive gets below that number of messages per day, then that group is +either sent or not sent depending on which choice results in the simplest +(shortest) wildcard pattern. If you want a pattern that exactly matches +what the remote site wants, use C<-t 0>. + +Ideally, B likes to be given the optional third argument, +I, which points at a file listing traffic numbers for each group. +The format of this file is a group name, whitespace, and then the number +of messages per day it receives. Without such a file, B will +attempt to guess traffic by taking the difference between the high and low +numbers in the active file as the amount of traffic in that group per day. +This will almost always not be accurate, but it should at least be a +ballpark figure. If you know approximately how many days of traffic the +active file numbers represent, you can tell B this information +using the B<-d> flag. 
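+
+As a small illustration (group names and figures are made up), such a
+traffic file is simply lines of a group name and its articles per day:
+
+    news.announce.newusers 2
+    rec.games.frp.dnd 150
+    alt.binaries.pictures.misc 4200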
+ +B's output will look something like: + + comp.*,humanities.classics,misc.*,news.*,rec.*,sci.*,soc.*,talk.*,\ + alt.*,!alt.atheism,!alt.binaries.*,!alt.nocem.misc,!alt.punk*,\ + !alt.sex*,!alt.video.dvd,\ + bionet.*,biz.*,gnu.*,vmsnet.*,\ + ba.*,!ba.jobs.agency,ca.*,sbay.* + +(with each line prefixed by a tab, and with standard INN newsfeeds +continuation syntax). Due to the preferences of the author, it will also +be sorted as Big Eight, then alt.*, then global non-language hierarchies, +then regional and language hierarchies. + +=head1 OPTIONS + +=over 4 + +=item B<-h>, B<--help> + +Print out this documentation (which is done simply by feeding the script +to C. + +=item B<-v>, B<--version> + +Print out the version of B and exit. + +=item B<-d> I, B<--days>=I + +Assume that the difference between the high and low numbers in the active +file represent I days of traffic. + +=item B<-t> I, B<--threshold>=I + +Allow any group with less than I articles per day in traffic to +be either sent or not sent depending on which choice makes the wildcard +patterns simpler. If a threshold isn't specified, the default value is +250. + +=back + +=head1 BUGS + +This program takes a long time to run, not to mention being a nasty memory +hog. The algorithm is thorough, but definitely not very optimized, and +isn't all that friendly. + +Guessing traffic from active file numbers is going to produce very skewed +results on sites with expiration policies that vary widely by group. + +There is no way to optimize for size in avoiding rejections, only quantity +of articles. + +There should be a way to turn off the author's idiosyncratic ordering of +hierarchies, or to specify a different ordering, without editing this +script. + +This script should attempt to retrieve the active file from the remote +site automatically if so desired. + +This script should be able to be given some existing wildcard patterns and +take them into account when generating new ones. + +=head1 CAVEATS + +Please be aware that your neighbor's active file may not accurately +represent the groups they wish to receive from you. As with everything, +choices made by automated programs like this one should be reviewed by a +human and the remote site should be notified, and if they have sent +explicit patterns, those should be honored instead. I definitely do *not* +recommend running this program on any sort of automated basis. + +=head1 AUTHOR + +Russ Allbery Erra@stanford.eduE + +=cut diff --git a/control/Makefile b/control/Makefile new file mode 100644 index 0000000..0d99310 --- /dev/null +++ b/control/Makefile @@ -0,0 +1,51 @@ +## $Id: Makefile 6806 2004-05-18 01:18:57Z rra $ + +include ../Makefile.global + +top = .. + +ALL = controlbatch controlchan docheckgroups gpgverify perl-nocem \ + pgpverify signcontrol + +MAN = ../doc/man/perl-nocem.8 ../doc/man/pgpverify.1 + +all: $(ALL) + +install: all + for F in $(ALL) ; do \ + $(CP_XPUB) $$F $D$(PATHBIN)/$$F ; \ + done + for M in modules/*.pl ; do \ + $(CP_RPUB) $$M $D$(PATHCONTROL)/`basename $$M` ; \ + done + +man: $(MAN) + +clean clobber distclean: + rm -f $(ALL) + +profiled: all +depend: + +$(FIXSCRIPT): + @echo Run configure before running make. See INSTALL for details. + @exit 1 + + +## Build rules. 
+ +FIX = $(FIXSCRIPT) + +controlbatch: controlbatch.in $(FIX) ; $(FIX) controlbatch.in +controlchan: controlchan.in $(FIX) ; $(FIX) controlchan.in +docheckgroups: docheckgroups.in $(FIX) ; $(FIX) docheckgroups.in +gpgverify: gpgverify.in $(FIX) ; $(FIX) gpgverify.in +perl-nocem: perl-nocem.in $(FIX) ; $(FIX) perl-nocem.in +pgpverify: pgpverify.in $(FIX) ; $(FIX) pgpverify.in +signcontrol: signcontrol.in $(FIX) ; $(FIX) -i signcontrol.in + +../doc/man/perl-nocem.8: perl-nocem + $(POD2MAN) -s 8 $? > $@ + +../doc/man/pgpverify.1: pgpverify + $(POD2MAN) -s 1 $? > $@ diff --git a/control/controlbatch.in b/control/controlbatch.in new file mode 100644 index 0000000..72035a8 --- /dev/null +++ b/control/controlbatch.in @@ -0,0 +1,90 @@ +#! /bin/sh +# fixscript will replace this line with code to load innshellvars + +######################################################################## +# controlbatch - Run controlchan against a batch file. +# +# Command usage: controlbatch [feedsite batchfile] +# Defaults are feedsite: controlchan!, batchfile: ${BATCH}/controlchan! +######################################################################## +# +# This script will run controlchan against a batch file. You can use +# it to clear occasional backlogs while running controls from a +# channel, or even skip the channel and run control messages as a file +# feed. +# +######################################################################## +# +# If you're doing the channel thing, you might want to put something +# like this in your crontab to do a cleanup in the wee hours: +# +# 00 04 * * * @prefix@/bin/controlbatch +# +######################################################################## +# +# If you would rather skip the channel and just process controls each +# hour in a batch, use this newsfeeds entry instead of the "stock" +# version: +# +# controlchan!\ +# :!*,control,control.*,!control.cancel\ +# :Tf,Wnsm: +# +# And, a crontab entry something like this: +# +# 30 * * * * @prefix@/bin/controlbatch +# +######################################################################## + +batchlock="${LOCKS}/LOCK.controlbatch" +mypid=$$ + +# A concession to INN 1.x +if [ me${PATHBIN}ow = meow ] ; then + PATHBIN=${NEWSBIN} + export PATHBIN +fi + +# See if we have no arguments and should use the defaults. If there are +# arguments, make sure we have enough to attempt something useful. +if [ me${1}ow != meow ] ; then + if [ me${2}ow = meow ] ; then + echo "Usage: ${0} [feedsite batchfile]" >&2 + exit 0 + else + feedsite=${1} + batchfile=${2} + fi +else + feedsite=controlchan\! + batchfile=controlchan\! +fi + +# Check if any other copies of controlbatch are running. If we are not +# alone, give up here and now. +${PATHBIN}/shlock -p $mypid -f ${batchlock} || exit 0 + +cd ${BATCH} + +if [ -s ${batchfile}.work ] ; then + cat ${batchfile}.work >>${batchfile}.doit + rm -f ${batchfile}.work +fi + +if [ -s ${batchfile} ] ; then + mv ${batchfile} ${batchfile}.work + if ${PATHBIN}/ctlinnd -s -t30 flush ${feedsite} ; then + cat ${batchfile}.work >>${batchfile}.doit + rm -f ${batchfile}.work + fi +fi + +if [ -s ${batchfile}.doit ] ; then + ${PATHBIN}/controlchan \ + < ${batchfile}.doit >> ${MOST_LOGS}/controlbatch.log 2>&1 + # if you want extra assurance that nothing gets lost... 
+ # cat ${batchfile}.doit >> ${batchfile}.done + rm -f ${batchfile}.doit +fi + +rm -f ${batchlock} diff --git a/control/controlchan.in b/control/controlchan.in new file mode 100644 index 0000000..7fe7338 --- /dev/null +++ b/control/controlchan.in @@ -0,0 +1,467 @@ +#! /usr/bin/perl -w +require "/usr/local/news/lib/innshellvars.pl"; + +## $Id: controlchan.in 7591 2006-11-22 07:20:46Z eagle $ +## +## Channel feed program to route control messages to an appropriate handler. +## +## Copyright 2001 by Marco d'Itri +## +## Redistribution and use in source and binary forms, with or without +## modification, are permitted provided that the following conditions +## are met: +## +## 1. Redistributions of source code must retain the above copyright +## notice, this list of conditions and the following disclaimer. +## +## 2. Redistributions in binary form must reproduce the above copyright +## notice, this list of conditions and the following disclaimer in the +## documentation and/or other materials provided with the distribution. +## +## Give this program its own newsfeed. Make sure that you've created +## the newsgroup control.cancel so that you don't have to scan through +## cancels, which this program won't process anyway. +## +## Make a newsfeeds entry like this: +## +## controlchan!\ +## :!*,control,control.*,!control.cancel\ +## :Tc,Wnsm\ +## :@prefix@/bin/controlchan + +require 5.004_03; +use strict; + +delete @ENV{'IFS', 'CDPATH', 'ENV', 'BASH_ENV'}; + +# globals +my ($cachedctl, $curmsgid); +my $lastctl = 0; +my $use_syslog = 0; +my $debug = 0; + +# setup logging ########################################################### +# do not log to syslog if stderr is connected to a console +if (not -t 2) { + eval { require INN::Syslog; import INN::Syslog; $use_syslog = 1; }; + eval { require Sys::Syslog; import Sys::Syslog; $use_syslog = 1; } + unless $use_syslog; +} + +if ($use_syslog) { + eval "sub Sys::Syslog::_PATH_LOG { '/dev/log' }" if $^O eq 'dec_osf'; + Sys::Syslog::setlogsock('unix') if $^O =~ /linux|dec_osf|freebsd|darwin/; + openlog('controlchan', 'pid', $inn::syslog_facility); +} +logmsg('starting'); + +# load modules from the control directory ################################# +opendir(CTL, $inn::controlprogs) + or logdie("Cannot open $inn::controlprogs: $!", 'crit'); +foreach (readdir CTL) { + next if not /^([a-z\.]+\.pl)$/ or not -f "$inn::controlprogs/$_"; + eval { require "$inn::controlprogs/$1" }; + if ($@) { + $@ =~ s/\n/ /g; + logdie($@, 'crit'); + } + logmsg("loaded $inn::controlprogs/$1", 'debug'); +} +closedir CTL; + +# main loop ############################################################### +while () { + chop; + my ($token, $sitepath, $msgid) = split(/\s+/, $_); + next if not defined $token; + $sitepath ||= ''; + $curmsgid = $msgid || ''; + + my $artfh = open_article($token); + next if not defined $artfh; + + # suck in headers and body, normalize the strange ones + my (@headers, @body, %hdr); + if (not parse_article($artfh, \@headers, \@body, \%hdr)) { + close $artfh; + next; + } + close $artfh or logdie('sm died with status ' . ($? 
>> 8)); + + next if not exists $hdr{control}; + + $curmsgid = $hdr{'message-id'}; + my $sender = cleanaddr($hdr{sender} || $hdr{from}); + my $replyto = cleanaddr($hdr{'reply-to'} || $hdr{from}); + + my (@progparams, $progname); + if ($hdr{control} =~ /\s/) { + $hdr{control} =~ /^(\S+)\s+(.+)?/; + $progname = lc $1; + @progparams = split(/\s+/, lc $2) if $2; + } else { + $progname = lc $hdr{control}; + } + + next if $progname eq 'cancel'; + + if ($progname !~ /^([a-z]+)$/) { + logmsg("Naughty control in article $curmsgid ($progname)"); + next; + } + $progname = $1; + + # Do we want to process the message? Let's check the permissions. + my ($action, $logname, $newsgrouppats) = + ctlperm($progname, $sender, $progparams[0], + $token, \@headers, \@body); + + next if $action eq 'drop'; + + if ($action eq '_pgpfail') { + my $type = ''; + if ($progname and $progname eq 'newgroup') { + if ($progparams[1] and $progparams[1] eq 'moderated') { + $type = 'm '; + } else { + $type = 'y '; + } + } + logmsg("skipping $progname $type$sender" + . "(pgpverify failed) in $curmsgid"); + next; + } + + # used by checkgroups. Convert from perl regexp to grep regexp. + if (local $_ = $newsgrouppats) { + s/\$\|/|/g; + s/[^\\]\.[^*]/?/g; + s/\$//; + s/\.\*/*/g; + s/\\([\$\+\.])/$1/g; + $progparams[0] = $_; + } + + # find the appropriate module and call it + my $subname = "control_$progname"; + my $subfind = \&$subname; + if (not defined &$subfind) { + if ($logname) { + logger($logname, "Unknown control message by $sender", + \@headers, \@body); + } else { + logmsg("Unknown \"$progname\" control by $sender"); + } + next; + } + + my $approved = $hdr{approved} ? 1 : 0; + logmsg("$subname, " . join(' ', @progparams) + . " $sender $replyto $token, $sitepath, $action" + . ($logname ? "=$logname" : '') .", $approved"); + + &$subfind(\@progparams, $sender, $replyto, $sitepath, + $action, $logname, $approved, \@headers, \@body); +} + +closelog() if $use_syslog; +exit 0; + +print $inn::most_logs.$inn::syslog_facility.$inn::mta. + $inn::newsmaster.$inn::locks; # lint food + +# misc functions ########################################################## +sub parse_article { + my ($artfh, $headers, $body, $hdr) = @_; + my $h; + my %uniquehdr = map { $_ => 1 } qw(date followup-to from message-id + newsgroups path reply-to subject sender); + + while (<$artfh>) { + s/\r?\n$//; + last if /^$/; + push @$headers, $_; + if (/^(\S+):\s+(.+)/) { + $h = lc $1; + if (exists $hdr->{$h}) { + if (exists $uniquehdr{$h}) { + logmsg("Multiple $1 headers in article $curmsgid"); + return 0; + } + $hdr->{$h} .= ' ' . $2; + } else { + $hdr->{$h} = $2; + } + next; + } elsif (/^\s+(.+)/) { + if (defined $h) { + $hdr->{$h} .= ' ' . $1; + next; + } + } + logmsg("Broken headers in article $curmsgid"); + return 0; + } + + # article is empty or does not exist + return 0 if not @$headers; + + chop (@$body = <$artfh>); + return 1; +} + +# Strip a mail address, innd-style. +sub cleanaddr { + local $_ = shift; + s/(\s+)?\(.*\)(\s+)?//g; + s/.*<(.*)>.*/$1/; + s/[^-a-zA-Z0-9+_.@%]/_/g; # protect MTA + s/^-/_/; # protect MTA + return $_; +} + +# Read and cache control.ctl. +sub readctlfile { + my $mtime = (stat($inn::ctlfile))[9]; + return $cachedctl if $lastctl == $mtime; # mtime has not changed. + $lastctl = $mtime; + + my @ctllist; + open(CTLFILE, $inn::ctlfile) + or logdie("Cannot open $inn::ctlfile: $!", 'crit'); + while () { + chop; + # Not a comment or blank? 
Convert wildmat to regex + next if not /^(\s+)?[^\#]/ or /^$/; + if (not /:(?:doit|doifarg|drop|log|mail|verify-.*)(?:=.*)?$/) { + s/.*://; + logmsg("$_ is not a valid action for control.ctl", 'err'); + next; + } + # Convert to a : separated list of regexps + s/^all:/*:/i; + s/([\$\+\.])/\\$1/g; + s/\*/.*/g; + s/\?/./g; + s/(.*)/^$1\$/; + s/:/\$:^/g; + s/\|/\$|^/g; + push @ctllist, $_; + } + close CTLFILE; + + logmsg('warning: control.ctl is empty!', 'err') if not @ctllist; + return $cachedctl = [ reverse @ctllist ]; +} + +# Parse a control message's permissions. +sub ctlperm { + my ($type, $sender, $newsgroup, $token, $headers, $body) = @_; + + my $action = 'drop'; # default + my ($logname, $hier); + + # newgroup and rmgroup require newsgroup names; check explicitly for that + # here and return drop if the newsgroup is missing (to avoid a bunch of + # warnings from undefined values later on in permission checking). + if ($type eq 'newgroup' or $type eq 'rmgroup') { + unless ($newsgroup) { + return ('drop', undef, undef); + } + } + + my $ctllist = readctlfile(); + foreach (@$ctllist) { + my @ctlline = split /:/; + # 0: type 1: from@addr 2: group.* 3: action + if ($type =~ /$ctlline[0]/ and $sender =~ /$ctlline[1]/i and + ($type !~ /(?:new|rm)group/ or $newsgroup =~ /$ctlline[2]/)) { + $action = $ctlline[3]; + $action =~ s/\^(.+)\$/$1/; + $action =~ s/\\//g; + $hier = $ctlline[2] if $type eq 'checkgroups'; + last; + } + } + + ($action, $logname) = split(/=/, $action); + + if ($action =~ /^verify-(.+)/) { + my $keyowner = $1; + if ($inn::pgpverify and $inn::pgpverify =~ /^(?:true|on|yes)$/i) { + my $pgpresult = defined &local_pgpverify ? + local_pgpverify($token, $headers, $body) : pgpverify($token); + if ($keyowner eq $pgpresult) { + $action = 'doit'; + } else { + $action = '_pgpfail'; + } + } else { + $action = 'mail'; + } + } + + return ($action, $logname, $hier); +} + +# Write stuff to a log or send mail to the news admin. +sub logger { + my ($logfile, $message, $headers, $body) = @_; + + if ($logfile eq 'mail') { + my $mail = sendmail($message); + print $mail map { s/^~/~~/; "$_\n" } @$headers; + print $mail "\n" . join ('', map { s/^~/~~/; "$_\n" } @$body) + if $body; + close $mail or logdie("Cannot send mail: $!"); + return; + } + + if ($logfile =~ /^([^.\/].*)/) { + $logfile = $1; + } else { + logmsg("Invalid log file: $logfile", 'err'); + $logfile = 'control'; + } + + $logfile = "$inn::most_logs/$logfile.log" unless $logfile =~ /^\//; + my $lockfile = $logfile; + $lockfile =~ s#.*/##; + $lockfile = "$inn::locks/LOCK.$lockfile"; + shlock($lockfile); + + open(LOGFILE, ">>$logfile") or logdie("Cannot open $logfile: $!"); + print LOGFILE "$message\n"; + foreach (@$headers, '', @$body, '') { + print LOGFILE " $_\n"; + } + close LOGFILE; + unlink $lockfile; +} + +# write to syslog or errlog +sub logmsg { + my ($msg, $lvl) = @_; + + return if $lvl and $lvl eq 'debug' and not $debug; + if ($use_syslog) { + syslog($lvl || 'notice', '%s', $msg); + } else { + print STDERR (scalar localtime) . ": $msg\n"; + } +} + +# log a message and then die +sub logdie { + my ($msg, $lvl) = @_; + + $msg .= " ($curmsgid)" if $curmsgid; + logmsg($msg, $lvl || 'err'); + exit 1; +} + +# wrappers executing external programs #################################### + +# Open an article appropriately to our storage method (or lack thereof). +sub open_article { + my $token = shift; + + if ($token =~ /^\@.+\@$/) { + my $pid = open(ART, '-|'); + logdie('Cannot fork: ' . $!) 
if $pid < 0; + if ($pid == 0) { + exec("$inn::newsbin/sm", '-q', $token) or + logdie("Cannot exec sm: $!"); + } + return *ART; + } else { + return *ART if open(ART, $token); + logmsg("Cannot open article $token: $!"); + } + return undef; +} + +sub pgpverify { + my $token = shift; + + if ($token =~ /^\@.+\@$/) { + open(PGPCHECK, "$inn::newsbin/sm -q $token " + . "| $inn::newsbin/pgpverify |") or goto ERROR; + } else { + open(PGPCHECK, "$inn::newsbin/pgpverify < $token |") or goto ERROR; + } + my $pgpresult = ; + close PGPCHECK or goto ERROR; + $pgpresult ||= ''; + chop $pgpresult; + return $pgpresult; +ERROR: + logmsg("pgpverify failed: $!", 'debug'); + return ''; +} + +sub ctlinnd { + my ($cmd, @args) = @_; + + my $st = system("$inn::newsbin/ctlinnd", '-s', $cmd, @args); + logdie('Cannot run ctlinnd: ' . $!) if $st == -1; + logdie('ctlinnd returned status ' . ($st & 255)) if $st > 0; +} + +sub shlock { + my $lockfile = shift; + + my $locktry = 0; + while ($locktry < 60) { + if (system("$inn::newsbin/shlock", '-p', $$, '-f', $lockfile) == 0) { + return 1; + } + $locktry++; + sleep 2; + } + + my $lockreason; + if (open(LOCKFILE, $lockfile)) { + $lockreason = 'held by ' . ( || '?'); + close LOCKFILE; + } else { + $lockreason = $!; + } + logdie("Cannot get lock $lockfile: $lockreason"); + return undef; +} + +# If $body is not defined, returns a file handle which must be closed. +# Don't forget checking the return value of close(). +# $addresses may be a scalar or a reference to a list of addresses. +# If not defined, $inn::newsmaster is the default. +# parts of this code stolen from innmail.pl +sub sendmail { + my ($subject, $addresses, $body) = @_; + $addresses = [ $addresses || $inn::newsmaster ] if not ref $addresses; + $subject ||= '(no subject)'; + + # fix up all addresses + my @addrs = map { s#[^-a-zA-Z0-9+_.@%]##g; $_ } @$addresses; + + my $sm = $inn::mta; + if ($sm =~ /%s/) { + $sm = sprintf($sm, join(' ', @addrs)); + } else { + $sm .= ' ' . join(' ', @addrs); + } + + # fork and spawn the MTA whitout using the shell + my $pid = open(MTA, '|-'); + logdie('Cannot fork: ' . $!) if $pid < 0; + if ($pid == 0) { + exec(split(/\s+/, $sm)) or logdie("Cannot exec $sm: $!"); + } + + print MTA 'To: ' . join(",\n\t", @addrs) . "\nSubject: $subject\n\n"; + return *MTA if not defined $body; + $body = join("\n", @$body) if ref $body eq 'ARRAY'; + print MTA $body . "\n"; + close MTA or logdie("Execution of $sm failed: $!"); + return 1; +} diff --git a/control/docheckgroups.in b/control/docheckgroups.in new file mode 100644 index 0000000..cee70d6 --- /dev/null +++ b/control/docheckgroups.in @@ -0,0 +1,149 @@ +#! /bin/sh +# fixscript will replace this line with code to load innshellvars + +## $Revision: 7743 $ +## Script to execute checkgroups text; results to stdout. + +T=${TMPDIR} + +cat /dev/null >${T}/$$out + +## Copy the article without headers, append local newsgroups. +cat >${T}/$$msg +test -f ${LOCALGROUPS} && cat ${LOCALGROUPS} >>${T}/$$msg + +## Get the top-level newsgroup names from the message and turn it into +## an egrep pattern. +PATS=`${SED} <${T}/$$msg \ + -e 's/[ ].*//' -e 's/\..*//' \ + -e 's/^!//' -e '/^$/d' \ + -e 's/^/^/' -e 's/$/[. 
]/' \ + | sort -u \ + | (tr '\012' '|' ; echo '' )\ + | ${SED} -e 's/|$//'` + +${EGREP} "${PATS}" ${ACTIVE} | ${EGREP} "${1:-.}" | ${SED} 's/ .*//' | sort >${T}/$$active +${EGREP} "${PATS}" ${T}/$$msg | ${EGREP} "${1:-.}" | ${SED} 's/[ ].*//' | sort >${T}/$$newsgrps + +comm -13 ${T}/$$active ${T}/$$newsgrps >${T}/$$missing +comm -23 ${T}/$$active ${T}/$$newsgrps >${T}/$$remove + +${EGREP} "${PATS}" ${ACTIVE} | ${EGREP} "${1:-.}" | ${SED} -n '/ m$/s/ .*//p' | sort >${T}/$$amod.all +${EGREP} "${PATS}" ${T}/$$msg | ${EGREP} "${1:-.}" | ${SED} 's/\r\?$//' | +${SED} -n '/(Moderated)$/s/[ ].*//p' | sort >${T}/$$ng.mod + +comm -12 ${T}/$$missing ${T}/$$ng.mod >${T}/$$add.mod +comm -23 ${T}/$$missing ${T}/$$ng.mod >${T}/$$add.unmod +cat ${T}/$$add.mod ${T}/$$add.unmod >>${T}/$$add + +comm -23 ${T}/$$amod.all ${T}/$$remove >${T}/$$amod +comm -13 ${T}/$$ng.mod ${T}/$$amod >${T}/$$ismod +comm -23 ${T}/$$ng.mod ${T}/$$amod >${T}/$$nm.all +comm -23 ${T}/$$nm.all ${T}/$$add >${T}/$$notmod + +${EGREP} "${PATS}" ${NEWSGROUPS} | ${EGREP} "${1:-.}" | ${SED} 's/[ ]\+/ /' | sort >${T}/$$localdesc +${EGREP} "${PATS}" ${T}/$$msg | ${EGREP} "${1:-.}" | ${SED} 's/\r\?$//' | +${SED} 's/[ ]\+/ /' | sort >${T}/$$newdesc + +if ! (head -1 ${T}/$$newdesc | egrep " [[:digit:]]+ [[:digit:]]+ " > /dev/null) ; then + comm -13 ${T}/$$localdesc ${T}/$$newdesc >${T}/$$missingdesc + comm -23 ${T}/$$localdesc ${T}/$$newdesc >${T}/$$removedesc +fi + +if [ -s ${T}/$$remove ] ; then + ( + echo "# The following newsgroups are non-standard." + ${SED} "s/^/# /" ${T}/$$remove + echo "# You can remove them by executing the commands:" + for i in `cat ${T}/$$remove` ; do + echo " ${PATHBIN}/ctlinnd rmgroup $i" + ${EGREP} "^$i " ${NEWSGROUPS} >>${T}/$$ngdel + done + echo '' + ) >>${T}/$$out +fi + +if [ -s ${T}/$$add ] ; then + ( + echo "# The following newsgroups were missing and should be added." + ${SED} "s/^/# /" ${T}/$$add + echo "# You can do this by executing the command(s):" + for i in `cat ${T}/$$add.unmod` ; do + echo " ${PATHBIN}/ctlinnd newgroup $i y ${FROM}" + ${EGREP} "^$i " ${T}/$$msg >>${T}/$$ngadd + done + for i in `cat ${T}/$$add.mod` ; do + echo " ${PATHBIN}/ctlinnd newgroup $i m ${FROM}" + ${EGREP} "^$i " ${T}/$$msg >>${T}/$$ngadd + done + echo '' + ) >>${T}/$$out +fi + +if [ -s ${T}/$$ismod ] ; then + ( + echo "# The following groups are incorrectly marked as moderated:" + ${SED} "s/^/# /" ${T}/$$ismod + echo "# You can correct this by executing the following:" + for i in `cat ${T}/$$ismod` ; do + echo " ${PATHBIN}/ctlinnd changegroup $i y" + ${EGREP} "^$i " ${T}/$$msg >>${T}/$$ngchng + done + echo '' + ) >>${T}/$$out +fi + +if [ -s ${T}/$$notmod ] ; then + ( + echo "# The following groups are incorrectly marked as unmoderated:" + ${SED} "s/^/# /" ${T}/$$notmod + echo "# You can correct this by executing the following:" + for i in `cat ${T}/$$notmod` ;do + echo " ${PATHBIN}/ctlinnd changegroup $i m" + ${EGREP} "^$i " ${T}/$$msg >>${T}/$$ngchng + done + echo '' + ) >>${T}/$$out +fi + +if [ -s ${T}/$$removedesc ] ; then + ( + echo "# The following newsgroups descriptions are obsolete." + ${SED} "s/^/# /" ${T}/$$removedesc + echo "# You can remove them by editing ${NEWSGROUPS}." + echo '' + ) >>${T}/$$out +fi + +if [ -s ${T}/$$missingdesc ] ; then + ( + echo "# The following newsgroups descriptions were missing and should be added." + ${SED} "s/^/# /" ${T}/$$missingdesc + echo "# You can add them by editing ${NEWSGROUPS}." 
+ echo '' + ) >>${T}/$$out +fi + + +test -s ${T}/$$out && { + cat ${T}/$$out + echo 'exit # so you can feed this message into the shell' + echo "# And remember to update ${NEWSGROUPS}." + test -s ${T}/$$ngdel && { + echo "# Remove these lines:" + ${SED} "s/^/# /" ${T}/$$ngdel + echo '' + } + test -s ${T}/$$ngadd && { + echo "# Add these lines:" + ${SED} "s/^/# /" ${T}/$$ngadd + echo '' + } + test -s ${T}/$$ngchng && { + echo "# Change these lines:" + ${SED} "s/^/# /" ${T}/$$ngchng + echo '' + } +} + +rm -f ${T}/$$* diff --git a/control/gpgverify.in b/control/gpgverify.in new file mode 100644 index 0000000..f3aecea --- /dev/null +++ b/control/gpgverify.in @@ -0,0 +1,237 @@ +#!/usr/bin/perl -w +require '/etc/news/innshellvars.pl'; + +# written April 1996, tale@isc.org (David C Lawrence) +# mostly rewritten 2001-03-21 by Marco d'Itri +# +# requirements: +# - GnuPG +# - perl 5.004_03 and working Sys::Syslog +# - syslog daemon +# +# There is no locking because gpg is supposed to not need it and controlchan +# will serialize control messages processing anyway. + +require 5.004_03; +use strict; + +# if you keep your keyring somewhere that is not the default used by gpg, +# change the location below. +my $keyring; +if ($inn::newsetc && -d "$inn::newsetc/pgp") { + $keyring = $inn::newsetc . '/pgp/pubring.gpg'; +} + +# If you have INN and the script is able to successfully include your +# innshellvars.pl file, the value of the next two variables will be +# overridden. +my $tmpdir = '/var/log/news/'; +my $syslog_facility = 'news'; + +# 1: print PGP output +my $debug = 0; +#$debug = 1 if -t 1; + +### Exit value: +### 0 good signature +### 1 no signature +### 2 unknown signature +### 3 bad signature +### 255 problem not directly related to gpg analysis of signature + +############################################################################## +################ NO USER SERVICEABLE PARTS BELOW THIS COMMENT ################ +############################################################################## +my $tmp = ($inn::pathtmp ? $inn::pathtmp : $tmpdir) . "/pgp$$"; +$syslog_facility = $inn::syslog_facility if $inn::syslog_facility; + +my $nntp_format = 0; +$0 =~ s#^.*/##; # trim /path/to/prog to prog + +die "Usage: $0 < message\n" if $#ARGV != -1; + +# Path to gpg binary +my $gpg; +if ($inn::gpgv) { + $gpg = $inn::gpgv; +} else { + foreach (split(/:/, $ENV{PATH}), qw(/usr/local/bin /opt/gnu/bin)) { + if (-x "$_/gpgv") { + $gpg = "$_/gpgv"; last; + } + } +} +fail('cannot find the gpgv binary') if not $gpg; + +# this is, by design, case-sensitive with regards to the headers it checks. +# it's also insistent about the colon-space rule. +my ($label, $value, %dup, %header); +while () { + # if a header line ends with \r\n, this article is in the encoding + # it would be in during an NNTP session. some article storage + # managers keep them this way for efficiency. + $nntp_format = /\r\n$/ if $. 
== 1;
+    s/\r?\n$//;
+
+    last if /^$/;
+    if (/^(\S+):[ \t](.+)/) {
+        ($label, $value) = ($1, $2);
+        $dup{$label} = 1 if $header{$label};
+        $header{$label} = $value;
+    } elsif (/^\s/) {
+        fail("non-header at line $.: $_") unless $label;
+        $header{$label} .= "\n$_";
+    } else {
+        fail("non-header at line $.: $_");
+    }
+}
+
+my $pgpheader = 'X-PGP-Sig';
+$_ = $header{$pgpheader};
+exit 1 if not $_; # no signature
+
+# the $sep value means the separator between the radix64 signature lines
+# can have any amount of spaces or tabs, but must have at least one space
+# or tab, if there is a newline then the space or tab has to follow the
+# newline. any number of newlines can appear as long as each is followed
+# by at least one space or tab. *phew*
+my $sep = "[ \t]*(\n?[ \t]+)+";
+# match all of the characters in a radix64 string
+my $r64 = '[a-zA-Z0-9+/]';
+fail("$pgpheader not in expected format")
+    unless /^(\S+)$sep(\S+)(($sep$r64{64})+$sep$r64+=?=?$sep=$r64{4})$/;
+
+my ($version, $signed_headers, $signature) = ($1, $3, $4);
+$signature =~ s/$sep/\n/g;
+
+my $message = "-----BEGIN PGP SIGNED MESSAGE-----\n\n"
+    . "X-Signed-Headers: $signed_headers\n";
+
+foreach $label (split(',', $signed_headers)) {
+    fail("duplicate signed $label header, can't verify") if $dup{$label};
+    $message .= "$label: ";
+    $message .= $header{$label} if $header{$label};
+    $message .= "\n";
+}
+$message .= "\n"; # end of headers
+
+while (<STDIN>) { # read body lines
+    if ($nntp_format) {
+        # check for end of article; some news servers (eg, Highwind's
+        # "Breeze") include the dot-CRLF of the NNTP protocol in the
+        # article data passed to this script
+        last if $_ eq ".\r\n";
+
+        # remove NNTP encoding
+        s/^\.\./\./;
+        s/\r\n$/\n/;
+    }
+
+    s/^-/- -/; # pgp quote ("ASCII armor") dashes
+    $message .= $_;
+}
+
+$message .=
+    "\n-----BEGIN PGP SIGNATURE-----\n" .
+    "Version: $version\n" .
+    $signature .
+    "\n-----END PGP SIGNATURE-----\n";
+
+open(TMP, ">$tmp") or fail("open $tmp: $!");
+print TMP $message;
+close TMP or errmsg("close $tmp: $!");
+
+my $opts = '--quiet --status-fd=1 --logger-fd=1';
+$opts .= " --keyring=$keyring" if $keyring;
+
+open(PGP, "$gpg $opts $tmp |") or fail("failed to execute $gpg: $!");
+
+undef $/;
+$_ = <PGP>;
+
+unlink $tmp or errmsg("unlink $tmp: $!");
+
+if (not close PGP) {
+    if ($? >> 8) {
+        my $status = $? >> 8;
+        errmsg("gpg exited status $status") if $status > 1;
+    } else {
+        errmsg('gpg died on signal ' . ($? & 255));
+    }
+}
+
+print STDERR $_ if $debug;
+
+my $ok = 255; # default exit status
+my $signer;
+if (/^\[GNUPG:\]\s+GOODSIG\s+\S+\s+(\S+)/m) {
+    $ok = 0;
+    $signer = $1;
+} elsif (/^\[GNUPG:\]\s+NODATA/m or /^\[GNUPG:\]\s+UNEXPECTED/m) {
+    $ok = 1;
+} elsif (/^\[GNUPG:\]\s+NO_PUBKEY/m) {
+    $ok = 2;
+} elsif (/^\[GNUPG:\]\s+BADSIG\s+/m) {
+    $ok = 3;
+}
+
+print "$signer\n" if $signer;
+exit $ok;
+
+sub errmsg {
+    my $msg = $_[0];
+
+    eval 'use Sys::Syslog qw(:DEFAULT setlogsock)';
+    die "$0: cannot use Sys::Syslog: $@ [$msg]\n" if $@;
+
+    die "$0: cannot set syslog method [$msg]\n"
+        if not (setlogsock('unix') or setlogsock('inet'));
+
+    $msg .= " processing $header{'Message-ID'}" if $header{'Message-ID'};
+
+    openlog($0, 'pid', $syslog_facility);
+    syslog('err', '%s', $msg);
+    closelog();
+}
+
+sub fail {
+    errmsg($_[0]);
+    unlink $tmp;
+    exit 255;
+}
+
+__END__
+
+# Copyright 2000 by Marco d'Itri 
+
+# License of the original version distributed by David C. Lawrence:
+
+# Copyright (c) 1996 UUNET Technologies, Inc.
+# All rights reserved.
+# +# Redistribution and use in source and binary forms, with or without +# modification, are permitted provided that the following conditions +# are met: +# 1. Redistributions of source code must retain the above copyright +# notice, this list of conditions and the following disclaimer. +# 2. Redistributions in binary form must reproduce the above copyright +# notice, this list of conditions and the following disclaimer in the +# documentation and/or other materials provided with the distribution. +# 3. All advertising materials mentioning features or use of this software +# must display the following acknowledgement: +# This product includes software developed by UUNET Technologies, Inc. +# 4. The name of UUNET Technologies ("UUNET") may not be used to endorse or +# promote products derived from this software without specific prior +# written permission. +# +# THIS SOFTWARE IS PROVIDED BY UUNET ``AS IS'' AND ANY EXPRESS OR +# IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED +# WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE +# ARE DISCLAIMED. IN NO EVENT SHALL UUNET BE LIABLE FOR ANY DIRECT, +# INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES +# (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR +# SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) +# HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, +# STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) +# ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED +# OF THE POSSIBILITY OF SUCH DAMAGE. diff --git a/control/modules/checkgroups.pl b/control/modules/checkgroups.pl new file mode 100644 index 0000000..56366ad --- /dev/null +++ b/control/modules/checkgroups.pl @@ -0,0 +1,89 @@ +## $Id: checkgroups.pl 7743 2008-04-06 10:04:43Z iulius $ +## +## checkgroups control message handler. +## +## Copyright 2001 by Marco d'Itri +## +## Redistribution and use in source and binary forms, with or without +## modification, are permitted provided that the following conditions +## are met: +## +## 1. Redistributions of source code must retain the above copyright +## notice, this list of conditions and the following disclaimer. +## +## 2. Redistributions in binary form must reproduce the above copyright +## notice, this list of conditions and the following disclaimer in the +## documentation and/or other materials provided with the distribution. + +use strict; + +sub control_checkgroups { + my ($par, $sender, $replyto, $site, $action, $log, $approved, + $headers, $body) = @_; + my ($newsgrouppats) = @$par; + + if ($action eq 'mail') { + my $mail = sendmail("checkgroups by $sender"); + print $mail "$sender posted the following checkgroups message:\n"; + print $mail map { s/^~/~~/; "$_\n" } @$headers; + print $mail <$tempfile.art") + or logdie("Cannot open $tempfile.art: $!"); + print TEMPART map { s/^~/~~/; "$_\n" } @$body; + close TEMPART; + + open(OLDIN, '<&STDIN') or die $!; + open(OLDOUT, '>&STDOUT') or die $!; + open(STDIN, "$tempfile.art") or die $!; + open(STDOUT, ">$tempfile") or die $!; + my $st = system("$inn::pathbin/docheckgroups", $newsgrouppats); + logdie('Cannot run docheckgroups: ' . $!) if $st == -1; + logdie('docheckgroups returned status ' . 
($st & 255)) if $st > 0; + close(STDIN); + close(STDOUT); + open(STDIN, '<&OLDIN') or die $!; + open(STDOUT, '>&OLDOUT') or die $!; + + open(TEMPFILE, $tempfile) or logdie("Cannot open $tempfile: $!"); + my @output = ; + chop @output; + # There is no need to send an empty mail. + if ($#output > 0) { + logger($log || 'mail', "checkgroups by $sender", \@output); + } else { + logmsg("checkgroups by $sender processed (no change)"); + } + close TEMPFILE; + unlink($tempfile, "$tempfile.art"); +} + +1; diff --git a/control/modules/ihave.pl b/control/modules/ihave.pl new file mode 100644 index 0000000..a64c235 --- /dev/null +++ b/control/modules/ihave.pl @@ -0,0 +1,58 @@ +## $Id: ihave.pl 4932 2001-07-19 00:32:56Z rra $ +## +## ihave control message handler. +## +## Copyright 2001 by Marco d'Itri +## +## Redistribution and use in source and binary forms, with or without +## modification, are permitted provided that the following conditions +## are met: +## +## 1. Redistributions of source code must retain the above copyright +## notice, this list of conditions and the following disclaimer. +## +## 2. Redistributions in binary form must reproduce the above copyright +## notice, this list of conditions and the following disclaimer in the +## documentation and/or other materials provided with the distribution. + +use strict; + +sub control_ihave { + my ($par, $sender, $replyto, $site, $action, $log, $approved, + $headers, $body) = @_; + + if ($action eq 'mail') { + my $mail = sendmail("ihave by $sender"); + print $mail map { s/^~/~~/; "$_\n" } @$body; + close $mail or logdie('Cannot send mail: ' . $!); + } elsif ($action eq 'log') { + if ($log) { + logger($log, "ihave $sender", $headers, $body); + } else { + logmsg("ihave $sender"); + } + } elsif ($action eq 'doit') { + my $tempfile = "$inn::tmpdir/ihave.$$"; + open(GREPHIST, "|grephistory -i > $tempfile") + or logdie('Cannot run grephistory: ' . $!); + foreach (@$body) { + print GREPHIST "$_\n"; + } + close GREPHIST; + + if (-s $tempfile) { + my $inews = open("$inn::inews -h") + or logdie('Cannot run inews: ' . $!); + print $inews "Newsgroups: to.$site\n" + . "Subject: cmsg sendme $inn::pathhost\n" + . "Control: sendme $inn::pathhost\n\n"; + open(TEMPFILE, $tempfile) or logdie("Cannot open $tempfile: $!"); + print $inews $_ while ; + close $inews or die $!; + close TEMPFILE; + } + unlink $tempfile; + } +} + +1; diff --git a/control/modules/newgroup.pl b/control/modules/newgroup.pl new file mode 100644 index 0000000..94eef22 --- /dev/null +++ b/control/modules/newgroup.pl @@ -0,0 +1,214 @@ +## $Id: newgroup.pl 7849 2008-05-25 17:11:32Z iulius $ +## +## newgroup control message handler. +## +## Copyright 2001 by Marco d'Itri +## +## Redistribution and use in source and binary forms, with or without +## modification, are permitted provided that the following conditions +## are met: +## +## 1. Redistributions of source code must retain the above copyright +## notice, this list of conditions and the following disclaimer. +## +## 2. Redistributions in binary form must reproduce the above copyright +## notice, this list of conditions and the following disclaimer in the +## documentation and/or other materials provided with the distribution. + +use strict; + +sub control_newgroup { + my ($par, $sender, $replyto, $site, $action, $log, $approved, + $headers, $body) = @_; + my ($groupname, $modflag) = @$par; + + $modflag ||= ''; + my $modcmd = $modflag eq 'moderated' ? 
'm' : 'y'; + + my $errmsg; + $errmsg= local_checkgroupname($groupname) if defined &local_checkgroupname; + if ($errmsg) { + $errmsg = checkgroupname($groupname) if $errmsg eq 'DONE'; + + if ($log) { + logger($log, "skipping newgroup ($errmsg)", $headers, $body); + } else { + logmsg("skipping newgroup ($errmsg)"); + } + return; + } + + # Scan active to see what sort of change we are making. + open(ACTIVE, $inn::active) or logdie("Cannot open $inn::active: $!"); + my @oldgroup; + while () { + next unless /^(\Q$groupname\E)\s\d+\s\d+\s(\w)/; + @oldgroup = split /\s+/; + last; + } + close ACTIVE; + + my $status; + my $ngdesc = 'No description.'; + my $olddesc = ''; + my $ngname = $groupname; + + # If there is a tag line, search whether the description has changed. + my $found = 0; + my $ngline = ''; + foreach (@$body) { + if ($found) { + # It is the line which contains the description. + $ngline = $_; + last; + } + $found = 1 if $_ =~ /^For your newsgroups file:\s*$/; + } + + if ($found) { + ($ngname, $ngdesc) = split(/\s+/, $ngline, 2); + if ($ngdesc) { + $ngdesc =~ s/\s+$//; + $ngdesc =~ s/\s+\(moderated\)\s*$//i; + $ngdesc .= ' (Moderated)' if $modflag eq 'moderated'; + } + # Scan newsgroups to see the previous description, if any. + open(NEWSGROUPS, $inn::newsgroups) + or logdie("Cannot open $inn::newsgroups: $!"); + while () { + if (/^\Q$groupname\E\s+(.*)/) { + $olddesc = $1; + last; + } + } + close NEWSGROUPS; + } + + if (@oldgroup) { + if ($oldgroup[3] eq 'm' and $modflag ne 'moderated') { + $status = 'be made unmoderated'; + } elsif ($oldgroup[3] ne 'm' and $modflag eq 'moderated') { + $status = 'be made moderated'; + } else { + if ($ngdesc eq $olddesc) { + $status = 'no change'; + } else { + $status = 'have a new description'; + } + } + } elsif (not $approved) { + $status = 'unapproved'; + } else { + $status = 'be created'; + } + + if ($action eq 'mail' and $status !~ /(no change|unapproved)/) { + my $mail = sendmail("newgroup $groupname $modcmd $sender"); + print $mail <$tempfile") or logdie("Cannot open $tempfile: $!"); + while () { + next if (/^\Q$name\E\s+(.*)/); + print TEMPFILE $_; + } + # We now write a pretty line for the description. + if (length $name < 8) { + print TEMPFILE "$name\t\t\t$desc\n"; + } elsif (length $name < 16) { + print TEMPFILE "$name\t\t$desc\n"; + } else { + print TEMPFILE "$name\t$desc\n"; + } + close TEMPFILE; + close NEWSGROUPS; + rename($tempfile, $inn::newsgroups) + or logdie("Cannot rename $tempfile: $!"); + unlink("$inn::locks/LOCK.newsgroups", $tempfile); +} + +# Check the group name. This is partially derived from C News. +# Some checks are commented out if I think they're too strict or +# language-dependent. Your mileage may vary. +sub checkgroupname { + local $_ = shift; + + # whole-name checking + return 'Empty group name' if /^$/; + return 'Whitespace in group name' if /\s/; + return 'Unsafe group name' if /[\`\/:;]/; + return 'Bad dots in group name' if /^\./ or /\.$/ or /\.\./; +# return 'Group name does not begin/end with alphanumeric' +# if (/^[a-zA-Z0-9].+[a-zA-Z0-9]$/; + return 'Group name begins in control., junk. or to.' if /^(?:control|junk|to)\./; +# return 'Group name too long' if length $_ > 128; + + my @components = split(/\./); + # prevent alt.a.b.c.d.e.f.g.w.x.y.z... 
+ return 'Too many components' if $#components > 9; + + # per-component checking + for (my $i = 0; $i <= $#components; $i++) { + local $_ = $components[$i]; + return 'all-numeric name component' if /^[0-9]+$/; +# return 'name component starts with non-alphanumeric' if /^[a-zA-Z0-9]/; +# return 'name component does not contain letter' if not /[a-zA-Z]/; + return "`all' or `ctl' used as name component" if /^(?:all|ctl)$/; +# return 'name component longer than 30 characters' if length $_ > 30; +# return 'uppercase letter(s) in name' if /[A-Z]/; + return 'illegal character(s) in name' if /[^a-z0-9+_\-.]/; + # sigh, c++ etc must be allowed + return 'repeated punctuation in name' if /--|__|\+\+./; +# return 'repeated component(s) in name' if ($i + 2 <= $#components +# and $_ eq $components[$i + 1] and $_ eq $components[$i + 2]); + } + return ''; +} + +1; diff --git a/control/modules/rmgroup.pl b/control/modules/rmgroup.pl new file mode 100644 index 0000000..d78b014 --- /dev/null +++ b/control/modules/rmgroup.pl @@ -0,0 +1,92 @@ +## $Id: rmgroup.pl 7743 2008-04-06 10:04:43Z iulius $ +## +## rmgroup control message handler. +## +## Copyright 2001 by Marco d'Itri +## +## Redistribution and use in source and binary forms, with or without +## modification, are permitted provided that the following conditions +## are met: +## +## 1. Redistributions of source code must retain the above copyright +## notice, this list of conditions and the following disclaimer. +## +## 2. Redistributions in binary form must reproduce the above copyright +## notice, this list of conditions and the following disclaimer in the +## documentation and/or other materials provided with the distribution. + +use strict; + +sub control_rmgroup { + my ($par, $sender, $replyto, $site, $action, $log, $approved, + $headers, $body) = @_; + my ($groupname) = @$par; + + # Scan active to see what sort of change we are making. + open(ACTIVE, $inn::active) or logdie("Cannot open $inn::active: $!"); + my @oldgroup; + while () { + next unless /^(\Q$groupname\E)\s\d+\s\d+\s(\w)/; + @oldgroup = split /\s+/; + last; + } + close ACTIVE; + my $status; + if (not @oldgroup) { + $status = 'no change'; + } elsif (not $approved) { + $status = 'unapproved'; + } else { + $status = 'removed'; + } + + if ($action eq 'mail' and $status !~ /(no change|unapproved)/) { + my $mail = sendmail("rmgroup $groupname $sender"); + print $mail <$tempfile") or logdie("Cannot open $tempfile: $!"); + while () { + print TEMPFILE $_ if not /^\Q$groupname\E\s/; + } + close TEMPFILE; + close NEWSGROUPS; + rename($tempfile, $inn::newsgroups) + or logdie("Cannot rename $tempfile: $!"); + unlink "$inn::locks/LOCK.newsgroups"; + unlink $tempfile; + + logger($log, "rmgroup $groupname $status $sender", $headers, $body) + if $log; + } +} + +1; diff --git a/control/modules/sendme.pl b/control/modules/sendme.pl new file mode 100644 index 0000000..d53ab5a --- /dev/null +++ b/control/modules/sendme.pl @@ -0,0 +1,55 @@ +## $Id: sendme.pl 4932 2001-07-19 00:32:56Z rra $ +## +## sendme control message handler. +## +## Copyright 2001 by Marco d'Itri +## +## Redistribution and use in source and binary forms, with or without +## modification, are permitted provided that the following conditions +## are met: +## +## 1. Redistributions of source code must retain the above copyright +## notice, this list of conditions and the following disclaimer. +## +## 2. 
Redistributions in binary form must reproduce the above copyright +## notice, this list of conditions and the following disclaimer in the +## documentation and/or other materials provided with the distribution. + +use strict; + +sub control_sendme { + my ($par, $sender, $replyto, $site, $action, $log, $approved, + $headers, $body) = @_; + + if ($action eq 'mail') { + my $mail = sendmail("sendme by $sender"); + print $mail map { s/^~/~~/; "$_\n" } @$body; + close $mail or logdie('Cannot send mail: ' . $!); + } elsif ($action eq 'log') { + if ($log) { + logger($log, "sendme $sender", $headers, $body); + } else { + logmsg("sendme from $sender"); + } + } elsif ($action eq 'doit') { + my $tempfile = "$inn::tmpdir/sendme.$$"; + open(GREPHIST, "|grephistory -s > $tempfile") + or logdie("Cannot run grephistory: $!"); + foreach (@$body) { + print GREPHIST "$_\n"; + } + close GREPHIST or logdie("Cannot run grephistory: $!"); + + if (-s $tempfile and $site =~ /^[a-zA-Z0-9.-_]+$/) { + open(TEMPFILE, $tempfile) or logdie("Cannot open $tempfile: $!"); + open(BATCH, ">>$inn::batch/$site.work") + or logdie("Cannot open $inn::batch/$site.work: $!"); + print BATCH $_ while ; + close BATCH; + close TEMPFILE; + } + unlink $tempfile; + } +} + +1; diff --git a/control/modules/sendsys.pl b/control/modules/sendsys.pl new file mode 100644 index 0000000..6f086ba --- /dev/null +++ b/control/modules/sendsys.pl @@ -0,0 +1,64 @@ +## $Id: sendsys.pl 4932 2001-07-19 00:32:56Z rra $ +## +## sendsys control message handler. +## +## Copyright 2001 by Marco d'Itri +## +## Redistribution and use in source and binary forms, with or without +## modification, are permitted provided that the following conditions +## are met: +## +## 1. Redistributions of source code must retain the above copyright +## notice, this list of conditions and the following disclaimer. +## +## 2. Redistributions in binary form must reproduce the above copyright +## notice, this list of conditions and the following disclaimer in the +## documentation and/or other materials provided with the distribution. + +use strict; + +sub control_sendsys { + my ($par, $sender, $replyto, $site, $action, $log, $approved, + $headers, $body) = @_; + my ($where) = @$par; + + if ($action eq 'mail') { + my $mail = sendmail("sendsys $sender"); + print $mail <; + print $mail "\n"; + close NEWSFEEDS; + close $mail or logdie("Cannot send mail: $!"); + + logger($log, "sendsys $sender to $replyto", $headers, $body) if $log; + } +} + +1; diff --git a/control/modules/senduuname.pl b/control/modules/senduuname.pl new file mode 100644 index 0000000..a2f71e5 --- /dev/null +++ b/control/modules/senduuname.pl @@ -0,0 +1,61 @@ +## $Id: senduuname.pl 4932 2001-07-19 00:32:56Z rra $ +## +## senduuname control message handler. +## +## Copyright 2001 by Marco d'Itri +## +## Redistribution and use in source and binary forms, with or without +## modification, are permitted provided that the following conditions +## are met: +## +## 1. Redistributions of source code must retain the above copyright +## notice, this list of conditions and the following disclaimer. +## +## 2. Redistributions in binary form must reproduce the above copyright +## notice, this list of conditions and the following disclaimer in the +## documentation and/or other materials provided with the distribution. 
+ +use strict; + +sub control_senduuname { + my ($par, $sender, $replyto, $site, $action, $log, $approved, + $headers, $body) = @_; + my ($where) = @$par; + + if ($action eq 'mail') { + my $mail = sendmail("senduuname $sender"); + print $mail <; + close UUNAME or logdie("Cannot run uuname: $!"); + close $mail or logdie("Cannot send mail: $!"); + + logger($log, "senduuname $sender to $replyto", $headers, $body) if $log; + } +} + +1; diff --git a/control/modules/version.pl b/control/modules/version.pl new file mode 100644 index 0000000..f06096f --- /dev/null +++ b/control/modules/version.pl @@ -0,0 +1,61 @@ +## $Id: version.pl 4932 2001-07-19 00:32:56Z rra $ +## +## version control message handler. +## +## Copyright 2001 by Marco d'Itri +## +## Redistribution and use in source and binary forms, with or without +## modification, are permitted provided that the following conditions +## are met: +## +## 1. Redistributions of source code must retain the above copyright +## notice, this list of conditions and the following disclaimer. +## +## 2. Redistributions in binary form must reproduce the above copyright +## notice, this list of conditions and the following disclaimer in the +## documentation and/or other materials provided with the distribution. + +use strict; + +sub control_version { + my ($par, $sender, $replyto, $site, $action, $log, $approved, + $headers, $body) = @_; + my ($where) = @$par; + + my $version = $inn::version || '(unknown version)'; + + if ($action eq 'mail') { + my $mail = sendmail("version $sender"); + print $mail < +# Copyright 2001 by Marco d'Itri +# This program is licensed under the terms of the GNU General Public License. +# +# List of changes: +# +# 2002: Patch by Steven M. Christey for untrusted printf input. +# 2007: Patch by Christoph Biedl for checking a timeout. +# Documentation improved by Jeffrey M. Vinocur (2002), Russ Allbery (2006) +# and Julien Elie (2007). +# +############################################################################## + +require 5.00403; +use strict; + +# XXX FIXME I haven't been able to load it only when installed. +# If nobody can't fix it just ship the program with this line commented. +#use Time::HiRes qw(time); + +my $keyring = $inn::pathetc . '/pgp/ncmring.gpg'; + +# XXX To be moved to a config file. +#sub local_want_cancel_id { +# my ($group, $hdrs) = @_; +# +## Hippo has too many false positives to be useful outside of pr0n groups +# if ($hdrs->{issuer} =~ /(?:Ultra|Spam)Hippo/) { +# foreach (split(/,/, $group)) { +# return 1 if /^alt\.(?:binar|sex)/; +# } +# return 0; +# } +# return 1; +#} + +# no user serviceable parts below this line ################################### + +# global variables +my ($working, $got_sighup, $got_sigterm, @ncmperm, $cancel); +my $use_syslog = 0; +my $log_open = 0; +my $nntp_open = 0; +my $last_cancel = 0; +my $socket_timeout = $inn::peertimeout - 100; + +my $logfile = $inn::pathlog . 
'/perl-nocem.log'; + +# initialization and main loop ############################################### + +eval { require Sys::Syslog; import Sys::Syslog; $use_syslog = 1; }; + +if ($use_syslog) { + eval "sub Sys::Syslog::_PATH_LOG { '/dev/log' }" if $^O eq 'dec_osf'; + Sys::Syslog::setlogsock('unix') if $^O =~ /linux|dec_osf/; + openlog('nocem', '', $inn::syslog_facility); +} + +if (not $inn::gpgv) { + logmsg('cannot find the gpgv binary', 'err'); + sleep 5; + exit 1; +} + +if ($inn::version and not $inn::version =~ /^INN 2\.[0123]\./) { + $cancel = \&cancel_nntp; +} else { + $cancel = \&cancel_ctlinnd; +} + +$SIG{HUP} = \&hup_handler; +$SIG{INT} = \&term_handler; +$SIG{TERM} = \&term_handler; +$SIG{PIPE} = \&term_handler; + +logmsg('starting up'); + +unless (read_ctlfile()) { + sleep 5; + exit 1; +} + +while () { + chop; + $working = 1; + do_nocem($_); + $working = 0; + term_handler() if $got_sigterm; + hup_handler() if $got_sighup; +} + +logmsg('exiting because of EOF', 'debug'); +exit 0; + +############################################################################## + +# Process one NoCeM notice. +sub do_nocem { + my $token = shift; + my $start = time; + + # open the article and verify the notice + my $artfh = open_article($token); + return if not defined $artfh; + my ($msgid, $nid, $issuer, $nocems) = read_nocem($artfh); + close $artfh; + return unless $nocems; + + &$cancel($nocems); + logmsg("Articles cancelled: " . join(' ', @$nocems), 'debug'); + my $diff = (time - $start) || 0.01; + my $nr = scalar @$nocems; + logmsg(sprintf("processed notice %s by %s (%d ids, %.5f s, %.1f/s)", + $nid, $issuer, $nr, $diff, $nr / $diff)); +} + +# - Check if it is a PGP signed NoCeM notice +# - See if we want it +# - Then check PGP signature +sub read_nocem { + my $artfh = shift; + + # Examine the first 200 lines to see if it is a PGP signed NoCeM. + my $ispgp = 0; + my $isncm = 0; + my $inhdr = 1; + my $i = 0; + my $body = ''; + my ($from, $msgid); + while (<$artfh>) { + last if $i++ > 200; + s/\r\n$/\n/; + if ($inhdr) { + if (/^$/) { + $inhdr = 0; + } elsif (/^From:\s+(.*)\s*$/i) { + $from = $1; + } elsif (/^Message-ID:\s+(<.*>)/i) { + $msgid = $1; + } + } else { + $body .= $_; + $ispgp = 1 if /^-----BEGIN PGP SIGNED MESSAGE-----/; + if (/^\@\@BEGIN NCM HEADERS/) { + $isncm = 1; + last; + } + } + } + + # must be a PGP signed NoCeM. + if (not $ispgp) { + logmsg("Article $msgid: not PGP signed", 'debug'); + return; + } + if (not $isncm) { + logmsg("Article $msgid: not a NoCeM", 'debug'); + return; + } + + # read the headers of this NoCeM, and check if it's supported. + my %hdrs; + while (<$artfh>) { + s/\r\n/\n/; + $body .= $_; + last if /^\@\@BEGIN NCM BODY/; + my ($key, $val) = /^([^:]+)\s*:\s*(.*)$/; + $hdrs{lc $key} = $val; + } + foreach (qw(action issuer notice-id type version)) { + next if $hdrs{$_}; + logmsg("Article $msgid: missing $_ pseudo header", 'debug'); + return; + } + return if not supported_nocem($msgid, \%hdrs); + + # decide if we want it. + if (not want_nocem(\%hdrs)) { + logmsg("Article $msgid: unwanted ($hdrs{issuer}/$hdrs{type})", 'debug'); + return; + } +# XXX want_hier() not implemented +# if ($hdrs{hierarchies} and not want_hier($hdrs{hierarchies})) { +# logmsg("Article $msgid: unwanted hierarchy ($hdrs{hierarchies})", +# 'debug'); +# return; +# } + + # We do want it, so read the entire article. Also copy it to + # a temp file so that we can check the PGP signature when done. 
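+    # For illustration (the message-IDs and groups below are invented), the
+    # body parsed by the loop that follows looks roughly like this: one
+    # message-ID per entry, its newsgroup after whitespace, and extra groups
+    # for the same article on whitespace-indented continuation lines.
+    #
+    #   @@BEGIN NCM BODY
+    #   <spam.1@example.invalid>	news.admin.net-abuse.bulletins
+    #   	alt.test
+    #   <spam.2@example.invalid>	misc.test
+    #   @@END NCM BODY
+    #
+    # Groups are collected into $lastgrp so that want_cancel_id() sees the
+    # full newsgroup list for each message-ID.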
+ my $tmpfile = "$inn::pathtmp/nocem.$$"; + if (not open(OFD, ">$tmpfile")) { + logmsg("cannot open temp file $tmpfile: $!", 'err'); + return; + } + print OFD $body; + undef $body; + + # process NoCeM body. + my $inbody = 1; + my @nocems; + my ($lastid, $lastgrp); + while (<$artfh>) { + s/\r\n$/\n/; + print OFD; + $inbody = 0 if /^\@\@END NCM BODY/; + next if not $inbody or /^#/; + + my ($id, $grp) = /^(\S*)\s+(\S+)/; + next if not $grp; + if ($id) { + push @nocems, $lastid + if $lastid and want_cancel_id($lastgrp, \%hdrs); + $lastid = $id; + $lastgrp = $grp; + } else { + $lastgrp .= ',' . $grp; + } + } + push @nocems, $lastid if $lastid and want_cancel_id($lastgrp, \%hdrs); + close OFD; + + # at this point we need to verify the PGP signature. + return if not @nocems; + my $e = pgp_check($hdrs{issuer}, $msgid, $tmpfile); + unlink $tmpfile; + return if not $e; + + return ($msgid, $hdrs{'notice-id'}, $hdrs{issuer}, \@nocems); +} + +# XXX not implemented: code to discard notices for groups we don't carry +sub want_cancel_id { + my ($group, $hdrs) = @_; + + return local_want_cancel_id(@_) if defined &local_want_cancel_id; + 1; +} + +# Do we actually want this NoCeM? +sub want_nocem { + my $hdrs = shift; + + foreach (@ncmperm) { + my ($issuer, $type) = split(/\001/); + if ($hdrs->{issuer} =~ /$issuer/i) { + return 1 if '*' eq $type or lc $hdrs->{type} eq $type; + } + } + return 0; +} + +sub supported_nocem { + my ($msgid, $hdrs) = @_; + + if ($hdrs->{version} !~ /^0\.9[0-9]?$/) { + logmsg("Article $msgid: version $hdrs->{version} not supported", + 'debug'); + return 0; + } + if ($hdrs->{action} ne 'hide') { + logmsg("Article $msgid: action $hdrs->{action} not supported", + 'debug'); + return 0; + } + return 1; +} + +# Check the PGP signature on an article. +sub pgp_check { + my ($issuer, $msgid, $art) = @_; + + # fork and spawn a child + my $pid = open(PFD, '-|'); + if (not defined $pid) { + logmsg("pgp_check: cannot fork: $!", 'err'); + return 0; + } + if ($pid == 0) { + open(STDERR, '>&STDOUT'); + exec($inn::gpgv, '--status-fd=1', + $keyring ? '--keyring=' . $keyring : '', $art); + exit 126; + } + + # Read the result and check status code. + local $_ = join('', ); + my $status = 0; + if (not close PFD) { + if ($? >> 8) { + $status = $? >> 8; + } else { + logmsg("Article $msgid: $inn::gpgv killed by signal " . ($? & 255)); + return 0; + } + } +# logmsg("Command line was: $inn::gpgv --status-fd=1" +# . ($keyring ? ' --keyring=' . $keyring : '') . " $art", 'debug'); +# logmsg("Full PGP output: >>>$_<<<", 'debug'); + + if (/^\[GNUPG:\]\s+GOODSIG\s+\S+\s+(.*)/m) { + return 1 if $1 =~ /\Q$issuer\E/; + logmsg("Article $msgid: signed by $1 instead of $issuer"); + } elsif (/^\[GNUPG:\]\s+NO_PUBKEY\s+(\S+)/m) { + logmsg("Article $msgid: $issuer (ID $1) not in keyring"); + } elsif (/^\[GNUPG:\]\s+BADSIG\s+\S+\s+(.*)/m) { + logmsg("Article $msgid: bad signature from $1"); + } elsif (/^\[GNUPG:\]\s+BADARMOR/m or /^\[GNUPG:\]\s+UNEXPECTED/m) { + logmsg("Article $msgid: malformed signature"); + } elsif (/^\[GNUPG:\]\s+ERRSIG\s+(\S+)/m) { + # safety net: we get there if we don't know about some token + logmsg("Article $msgid: unknown error (ID $1)"); + } else { + # some other error we don't know about happened. + # 126 is returned by the child if exec fails. + s/ at \S+ line \d+\.\n$//; s/\n/_/; + logmsg("Article $msgid: $inn::gpgv exited " + . (($status == 126) ? "($_)" : "with status $status"), 'err'); + } + return 0; +} + +# Read article. 
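+# A token produced by INN's storage API is an opaque string delimited by
+# "@" characters and is retrieved below by piping it through "sm -q";
+# anything else is treated as a plain file name and opened directly.
+# A minimal usage sketch (the token value is a made-up placeholder):
+#
+#   my $art = open_article('@...storage-token...@');
+#   if (defined $art) {
+#       while (<$art>) { ... }
+#       close $art;
+#   }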
+sub open_article {
+    my $token = shift;
+
+    if ($token =~ /^\@.+\@$/) {
+        my $pid = open(ART, '-|');
+        if ($pid < 0) {
+            logmsg('Cannot fork: ' . $!, 'err');
+            return undef;
+        }
+        if ($pid == 0) {
+            exec("$inn::newsbin/sm", '-q', $token) or
+                logmsg("Cannot exec sm: $!", 'err');
+            return undef;
+        }
+        return *ART;
+    } else {
+        return *ART if open(ART, $token);
+        logmsg("Cannot open article $token: $!", 'err');
+    }
+    return undef;
+}
+
+# Cancel a number of Message-IDs. We use ctlinnd to do this,
+# and we run up to 15 of them at the same time (10 usually).
+sub cancel_ctlinnd {
+    my @ids = @{$_[0]};
+
+    while (@ids > 0) {
+        my $max = @ids <= 15 ? @ids : 10;
+        for (my $i = 1; $i <= $max; $i++) {
+            my $msgid = shift @ids;
+            my $pid;
+            sleep 5 until (defined ($pid = fork));
+            if ($pid == 0) {
+                exec "$inn::pathbin/ctlinnd", '-s', '-t', '180',
+                    'cancel', $msgid;
+                exit 126;
+            }
+#            logmsg("cancelled: $msgid [$i/$max]", 'debug');
+        }
+        # Now wait for all children.
+        while ((my $pid = wait) > 0) {
+            next unless $?;
+            if ($? >> 8) {
+                logmsg("Child $pid died with status " . ($? >> 8), 'err');
+            } else {
+                logmsg("Child $pid killed by signal " . ($? & 255), 'err');
+            }
+        }
+    }
+}
+
+sub cancel_nntp {
+    my $ids = shift;
+    my $r;
+
+    if ($nntp_open and time - $socket_timeout > $last_cancel) {
+        logmsg('Close socket for timeout');
+        close (NNTP);
+        $nntp_open = 0;
+    }
+    if (not $nntp_open) {
+        use Socket;
+        if (not socket(NNTP, PF_UNIX, SOCK_STREAM, 0)) {
+            logmsg("socket: $!", 'err');
+            goto ERR;
+        }
+        if (not connect(NNTP, sockaddr_un($inn::pathrun . '/nntpin'))) {
+            logmsg("connect: $!", 'err');
+            goto ERR;
+        }
+        if (($r = <NNTP>) !~ /^200 /) {
+            $r =~ s/\r\n$//;
+            logmsg("bad reply from server: $r", 'err');
+            goto ERR;
+        }
+        select NNTP; $| = 1; select STDOUT;
+        print NNTP "MODE CANCEL\r\n";
+        if (($r = <NNTP>) !~ /^284 /) {
+            $r =~ s/\r\n$//;
+            logmsg("MODE CANCEL not supported: $r", 'err');
+            goto ERR;
+        }
+        $nntp_open = 1;
+    }
+    foreach (@$ids) {
+        print NNTP "$_\r\n";
+        if (($r = <NNTP>) !~ /^289/) {
+            $r =~ s/\r\n$//;
+            logmsg("cannot cancel $_: $r", 'err');
+            goto ERR;
+        }
+    }
+    $last_cancel = time;
+    return;
+
+ERR:
+    # discard unusable socket
+    close (NNTP);
+    logmsg('Switching to ctlinnd...', 'err');
+    cancel_ctlinnd($ids);
+    $cancel = \&cancel_ctlinnd;
+}
+
+sub read_ctlfile {
+    my $permfile = $inn::pathetc . 
'/nocem.ctl'; + + unless (open(CTLFILE, $permfile)) { + logmsg("Cannot open $permfile: $!", 'err'); + return 0; + } + while () { + chop; + s/^\s+//; s/\s+$//; + next if /^#/ or /^$/; + my ($issuer, $type) = split(/:/, lc $_); + logmsg("Cannot parse nocem.ctl line <<$_>>", 'err') + if not $issuer and $type; + $type =~ s/\s//g; + push @ncmperm, "$issuer\001$_" foreach split(/,/, $type); + } + close CTLFILE; + return 1; +} + +sub logmsg { + my ($msg, $lvl) = @_; + + if (not $use_syslog) { + if ($log_open == 0) { + open(LOG, ">>$logfile") or die "Cannot open log: $!"; + $log_open = 1; + select LOG; $| = 1; select STDOUT; + } + $lvl ||= 'notice'; + print LOG "$lvl: $msg\n"; + return; + } + syslog($lvl || 'notice', '%s', $msg); +} + +sub hup_handler { + $got_sighup = 1; + return if $working; + close LOG; + $log_open = 0; +} + +sub term_handler { + $got_sigterm = 1; + return if $working; + logmsg('exiting because of signal'); + exit 1; +} + +# lint food +print $inn::pathrun.$inn::pathlog.$inn::pathetc.$inn::newsbin.$inn::pathbin + .$inn::pathtmp.$inn::peertimeout.$inn::syslog_facility; + +__END__ + +=head1 NAME + +perl-nocem - A NoCeM-on-spool implementation for S + +=head1 SYNOPSIS + +perl-nocem + +=head1 DESCRIPTION + +NoCeM, which is pronounced I, is a protocol enabling +authenticated third-parties to issue notices which can be used +to cancel unwanted articles (like spam and articles in moderated +newsgroups which were not approved by their moderators). It can +also be used by readers as a I. It is +intended to eventually replace the protocol for third-party cancel +messages. + +B processes third-party, PGP-signed article cancellation +notices. It is possible not to honour all NoCeM notices but only those +which are sent by people whom you trust (that is to say if you trust +the PGP key they use to sign their NoCeM notices). Indeed, it is up +to you to decide whether you wish to honour their notices, depending +on the criteria they use. + +Processing NoCeM notices is easy to set up: + +=over 4 + +=item 1. + +Import the keys of the NoCeM issuers you trust in order to check +the authenticity of their notices. You can do: + + gpg --no-default-keyring --primary-keyring /pgp/ncmring.gpg --import + +where is the value of the I parameter set in F +and the file containing the key(s) to import. The keyring +must be located in I/pgp/ncmring.gpg (create the directory +before using B). For old PGP-generated keys, you may have to use +B<--allow-non-selfsigned-uid> if they are not properly self-signed, +but anyone creating a key really should self-sign the key. Current +PGP implementations do this automatically. + +The keys of NoCeM issuers can be found in the web site of I: +L. You can even +download there a unique file which contains all the keys. + +=item 2. + +Create a F config file in I indicating the NoCeM issuers +and notices you want to follow. This permission file contains lines like: + + annihilator-1:* + clewis@ferret.ocunix:mmf + stephane@asynchrone:mmf,openproxy,spam + +This will remove all articles for which the issuer (first part of the line, +before the colon C<:>) has issued NoCeM notices corresponding to the +criteria specified after the colon. + +You will also find information about that on the web site of +I. + +=item 3. + +Add to the F file an entry like this one in order to feed +B the NoCeM notices posted to alt.nocem.misc and +news.lists.filters: + + nocem!\ + :!*,alt.nocem.misc,news.lists.filters\ + :Tc,Wf,Ap:/perl-nocem + +with the correct path to B, located in . 
Then, reload +the F file (C). + +Note that you should at least carry news.lists.filters on your news +server (or other newsgroups where NoCeM notices are sent) if you wish +to process them. + +=item 4. + +Everything should now work. However, do not hesitate to manually test +B with a NoCeM notice, using: + + grephistory '' | perl-nocem + +Indeed, B expects tokens on its standard input, and +B can easily give it the token of a known article, +thanks to its Message-ID. + +=back + +When you have verified that everything works, you can eventually turn +off regular spam cancels, if you want, not processing any longer +cancels containing C in the Path: header (see the +I parameter in F). + +=head1 FILES + +=over 4 + +=item I/perl-nocem + +The Perl script itself used to process NoCeM notices. + +=item I/nocem.ctl + +The configuration file which specifies the NoCeM notices to be processed. + +=item I/pgp/ncmring.gpg + +The keyring which contains the public keys of trusted NoCeM issuers. + +=back + +=head1 BUGS + +The Subject: header is not checked for the @@NCM string and there is no +check for the presence of the References: header. + +The Newsgroups: pseudo header is not checked, but this can be done in +local_want_cancel_id(). + +The Hierarchies: header is ignored. + +=head1 HISTORY + +Copyright 2000 by Miquel van Smoorenburg . + +Copyright 2001 by Marco d'Itri . + +$Id: perl-nocem.in 7733 2008-04-06 09:16:20Z iulius $ + +=head1 SEE ALSO + +gpgv(1), grephistory(1), inn.conf(5), newsfeeds(5), pgp(1). + +=cut diff --git a/control/pgpverify.in b/control/pgpverify.in new file mode 100644 index 0000000..feee446 --- /dev/null +++ b/control/pgpverify.in @@ -0,0 +1,876 @@ +#! /usr/bin/perl -w +# do '@LIBDIR@/innshellvars.pl'; +# If running inside INN, uncomment the above and point to innshellvars.pl. +# +# Written April 1996, (David C Lawrence) +# Currently maintained by Russ Allbery +# Version 1.27, 2005-07-02 +# +# NOTICE TO INN MAINTAINERS: The version that is shipped with INN is the +# same as the version that I make available to the rest of the world +# (including non-INN sites), so please make all changes through me. +# +# This program requires Perl 5, probably at least about Perl 5.003 since +# that's when FileHandle was introduced. If you want to use this program +# and your Perl is too old, please contact me (rra@stanford.edu) and tell +# me about it; I want to know what old versions of Perl are still used in +# practice. +# +# Changes from 1.26 -> 1.27 +# -- Default to pubring.gpg when trustedkeys.gpg is not found in the +# default key location, for backward compatibility. +# +# Changes from 1.25 -> 1.26 +# -- Return the correct status code when the message isn't verified +# instead of always returning 255. +# +# Changes from 1.24 -> 1.25 +# -- Fix the -test switch to actually do something. +# -- Improve date generation when logging to standard output. +# +# Changes from 1.23 -> 1.24 +# -- Fix bug in the recognition of wire-format articles. +# +# Changes from 1.15 -> 1.23 +# -- Bump version number to match CVS revision number. +# -- Replaced all signature verification code with code that uses detached +# signatures. Signatures generated by GnuPG couldn't be verified using +# attached signatures without adding a Hash: header, and this was the +# path of least resistance plus avoids munging problems in the future. +# Code taken from PGP::Sign. +# +# Changes from 1.14 -> 1.15 +# -- Added POD documentation. +# -- Fixed the -test switch so that it works again. 
+# -- Dropped Perl 4 compatibility and reformatted. Now passes use strict. +# +# Changes from 1.13.1 -> 1.14 +# -- Native support for GnuPG without the pgpgpg wrapper, using GnuPG's +# program interface by Marco d'Itri. +# -- Always use Sys::Syslog without any setlogsock call for Perl 5.6.0 or +# later, since Sys::Syslog in those versions of Perl uses the C library +# interface and is now portable. +# -- Default to expecting the key ring in $inn'newsetc/pgp if it exists. +# -- Fix a portability problem for Perl 4 introduced in 1.12. +# +# Changes from 1.13 -> 1.13.1 +# -- Nothing functional, just moved the innshellvars.pl line to the head of +# the script, to accomodate the build process of INN. +# +# Changes from 1.12 -> 1.13 +# -- Use INN's syslog_facility if available. +# +# Changes from 1.11 -> 1.12 +# -- Support for GnuPG. +# -- Use /usr/ucb/logger, if present, instead of /usr/bin/logger (the latter +# of which, on Solaris at least, is some sort of brain damaged POSIX.2 +# command which doesn't use syslog). +# -- Made syslog work for dec_osf (version 4, at least). +# -- Fixed up priority of '.' operator vs bitwise operators. +# +# Changes from 1.10 -> 1.11 +# -- Code to log error messages to syslog. +# See $syslog and $syslog_method configurable variables. +# -- Configurably allow date stamp on stderr error messages. +# -- Added locking for multiple concurrent pgp instances. +# -- More clear error message if pgp exits abnormally. +# -- Identify PGP 5 "BAD signature" string. +# -- Minor diddling for INN (path to innshellvars.pl changed). +# +# Changes from 1.9 -> 1.10 +# -- Minor diddling for INN 2.0: use $inn'pathtmp if it exists, and +# work with the new subst method to find innshellvars.pl. +# -- Do not truncate the tmp file when opening, in case it is really +# linked to another file. +# +# Changes from 1.8 -> 1.9 +# -- Match 'Bad signature' pgp output to return exit status 3 by removing +# '^' in regexp matched on multiline string. +# +# Changes from 1.7 -> 1.8 +# -- Ignore final dot-CRLF if article is in NNTP format. +# +# Changes from 1.6 -> 1.7 +# -- Parse PGP 5.0 'good signature' lines. +# -- Allow -test switch; prints pgp input and output. +# -- Look for pgp in INN's innshellvars.pl. +# -- Changed regexp delimiters for stripping $0 to be compatible with old +# Perl. +# +# Changes from 1.5 -> 1.6 +# -- Handle articles encoded in NNTP format ('.' starting line is doubled, +# \r\n at line end) by stripping NNTP encoding. +# -- Exit 255 with pointer to $HOME or $PGPPATH if pgp can't find key +# ring. (It probably doesn't match the necessary error message with +# ViaCrypt PGP.) +# -- Failures also report Message-ID so the article can be looked up to +# retry. +# +# Changes from 1.4 -> 1.5 +# -- Force English language for 'Good signature from user' by passing +# +language=en on pgp command line, rather than setting the +# environment variable LANGUAGE to 'en'. +# +# Changes from 1.3 -> 1.4 +# -- Now handles wrapped headers that have been unfolded. +# (Though I do believe news software oughtn't be unfolding them.) +# -- Checks to ensure that the temporary file is really a file, and +# not a link or some other weirdness. + +# Path to the GnuPG gpgv binary, if you have GnuPG. If you do, this will +# be used in preference to PGP. For most current control messages, you +# need a version of GnuPG that can handle RSA signatures. If you have INN +# and the script is able to successfully include your innshellvars.pl +# file, the value of $inn::gpgv will override this. 
+# $gpgv = '/usr/local/bin/gpgv'; + +# Path to pgp binary; for PGP 5.0, set the path to the pgpv binary. If +# you have INN and the script is able to successfully include your +# innshellvars.pl file, the value of $inn::pgp will override this. +$pgp = '/usr/local/bin/pgp'; + +# If you keep your keyring somewhere that is not the default used by pgp, +# uncomment the next line and set appropriately. If you have INN and the +# script is able to successfully include your innshellvars.pl file, this +# will be set to $inn::newsetc/pgp if that directory exists unless you set +# it explicitly. GnuPG will use a file named pubring.gpg in this +# directory. +# $keyring = '/path/to/your/pgp/config'; + +# If you have INN and the script is able to successfully include your +# innshellvars.pl file, the value of $inn::pathtmp and $inn::locks will +# override these. +$tmpdir = "/tmp"; +$lockdir = $tmpdir; + +# How should syslog be accessed? +# +# As it turns out, syslogging is very hard to do portably in versions of +# Perl prior to 5.6.0. Sys::Syslog should work without difficulty in +# 5.6.0 or later and will be used automatically for those versions of Perl +# (unless $syslog_method is ''). For earlier versions of Perl, 'inet' is +# all that's available up to version 5.004_03. If your syslog does not +# accept UDP log packets, such as when syslogd runs with the -l flag, +# 'inet' will not work. A value of 'unix' will try to contact syslogd +# directly over a Unix domain socket built entirely in Perl code (no +# subprocesses). If that is not working for you, and you have the +# 'logger' program on your system, set this variable to its full path name +# to have a subprocess contact syslogd. If the method is just "logger", +# the script will search some known directories for that program. If it +# can't be found & used, everything falls back on stderr logging. +# +# You can test the script's syslogging by running "pgpverify < +# /some/text/file" on a file that is not a valid news article. The +# "non-header at line #" error should be syslogged. +# +# $syslog_method = 'unix'; # Unix doman socket, Perl 5.004_03 or higher. +# $syslog_method = 'inet'; # UDP to port 514 of localhost. +# $syslog_method = ''; # Don't ever try to do syslogging. +$syslog_method = 'logger'; # Search for the logger program. + +# The next two variables are the values to be used for syslog's facility +# and level to use, as would be found in syslog.conf. For various +# reasons, it is impossible to economically have the script figure out how +# to do syslogging correctly on the machine. If you have INN and the +# script is able to successfully include you innshellvars.pl file, then +# the value of $inn::syslog_facility will override this value of +# $syslog_facility; $syslog_level is unaffected. +$syslog_facility = 'news'; +$syslog_level = 'err'; + +# Prepend the error message with a timestamp? This is only relevant if +# not syslogging, when errors go to stderr. +# +# $log_date = 0; # Zero means don't do it. +# $log_date = 1; # Non-zero means do it. +$log_date = -t STDOUT; # Do it if STDOUT is to a terminal. + +# End of configuration section. + + +require 5; + +use strict; +use vars qw($gpgv $pgp $keyring $tmp $tmpdir $lockdir $syslog_method + $syslog_facility $syslog_level $log_date $test $messageid); + +use Fcntl qw(O_WRONLY O_CREAT O_EXCL); +use FileHandle; +use IPC::Open3 qw(open3); +use POSIX qw(strftime); + +# Turn on test mode if the first argument is '-test'. 
+if (@ARGV && $ARGV[0] eq '-test') { + shift @ARGV; + $test = 1; +} + +# Not syslogged, such an error is almost certainly from someone running +# the script manually. +die "Usage: $0 < message\n" if @ARGV != 0; + +# Grab various defaults from innshellvars.pl if running inside INN. +$pgp = $inn::pgp + if $inn::pgp && $inn::pgp ne "no-pgp-found-during-configure"; +$gpgv = $inn::gpgv if $inn::gpgv; +$tmp = ($inn::pathtmp ? $inn::pathtmp : $tmpdir) . "/pgp$$"; +$lockdir = $inn::locks if $inn::locks; +$syslog_facility = $inn::syslog_facility if $inn::syslog_facility; +if (! $keyring && $inn::newsetc) { + $keyring = $inn::newsetc . '/pgp' if -d $inn::newsetc . '/pgp'; +} + +# Trim /path/to/prog to prog for error messages. +$0 =~ s%^.*/%%; + +# Make sure that the signature verification program can be executed. +if ($gpgv) { + if (! -x $gpgv) { + &fail("$0: $gpgv: " . (-e _ ? "cannot execute" : "no such file") . "\n"); + } +} elsif (! -x $pgp) { + &fail("$0: $pgp: " . (-e _ ? "cannot execute" : "no such file") . "\n"); +} + +# Parse the article headers and generate the PGP message. +my ($nntp_format, $header, $dup) = &parse_header(); +exit 1 unless $$header{'X-PGP-Sig'}; +my ($message, $signature, $version) + = &generate_message($nntp_format, $header, $dup); +if ($test) { + print "-----MESSAGE-----\n$message\n-----END MESSAGE-----\n\n"; + print "-----SIGNATURE-----\n$signature\n-----SIGNATURE-----\n\n"; +} + +# The call to pgp needs to be locked because it tries to both read and +# write a file named randseed.bin but doesn't do its own locking as it +# should, and the consequences of a multiprocess conflict is failure to +# verify. +my $lock; +unless ($gpgv) { + $lock = "$lockdir/LOCK.$0"; + until (&shlock($lock) > 0) { + sleep(2); + } +} + +# Verify the message. +my ($ok, $signer) = pgp_verify($signature, $version, $message); +unless ($gpgv) { + unlink ($lock) or &errmsg("$0: unlink $lock: $!\n"); +} +print "$signer\n" if $signer; +unless ($ok == 0) { + &errmsg("$0: verification failed\n"); +} +exit $ok; + + +# Parse the article headers and return a flag saying whether the message +# is in NNTP format and then two references to hashes. The first hash +# contains all the header/value pairs, and the second contains entries for +# every header that's duplicated. This is, by design, case-sensitive with +# regards to the headers it checks. It's also insistent about the +# colon-space rule. +sub parse_header { + my (%header, %dup, $label, $value, $nntp_format); + while (<>) { + # If the first header line ends with \r\n, this article is in the + # encoding it would be in during an NNTP session. Some article + # storage managers keep them this way for efficiency. + $nntp_format = /\r\n$/ if $. == 1; + s/\r?\n$//; + + last if /^$/; + if (/^(\S+):[ \t](.+)/) { + ($label, $value) = ($1, $2); + $dup{$label} = 1 if $header{$label}; + $header{$label} = $value; + } elsif (/^\s/) { + &fail("$0: non-header at line $.: $_\n") unless $label; + $header{$label} .= "\n$_"; + } else { + &fail("$0: non-header at line $.: $_\n"); + } + } + $messageid = $header{'Message-ID'}; + return ($nntp_format, \%header, \%dup); +} + +# Generate the PGP message to verify. Takes a flag indicating wire +# format, the hash of headers and header duplicates returned by +# parse_header and returns a list of three elements. The first is the +# message to verify, the second is the signature, and the third is the +# version number. 
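+# As a point of reference (the values below are invented, not taken from a
+# real article), an X-PGP-Sig header has roughly this shape: a version
+# token, a comma-separated list of signed headers, and the folded radix64
+# signature data:
+#
+#   X-PGP-Sig: 1.0 Subject,Control,Message-ID,Date,From,Sender
+#       iQCVAwUBT...
+#       ...
+#       =AbCd
+#
+# The reconstructed text starts with an X-Signed-Headers: line naming those
+# headers, followed by the headers themselves, a blank line, and the body.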
+sub generate_message { + my ($nntp_format, $header, $dup) = @_; + + # The regexp below might be too strict about the structure of PGP + # signature lines. + + # The $sep value means the separator between the radix64 signature lines + # can have any amount of spaces or tabs, but must have at least one + # space or tab; if there is a newline then the space or tab has to + # follow the newline. Any number of newlines can appear as long as each + # is followed by at least one space or tab. *phew* + my $sep = "[ \t]*(\n?[ \t]+)+"; + + # Match all of the characters in a radix64 string. + my $r64 = '[a-zA-Z0-9+/]'; + + local $_ = $$header{'X-PGP-Sig'}; + &fail("$0: X-PGP-Sig not in expected format\n") + unless /^(\S+)$sep(\S+)(($sep$r64{64})+$sep$r64+=?=?$sep=$r64{4})$/; + + my ($version, $signed_headers, $signature) = ($1, $3, $4); + $signature =~ s/$sep/\n/g; + $signature =~ s/^\s+//; + + my $message = "X-Signed-Headers: $signed_headers\n"; + my $label; + foreach $label (split(",", $signed_headers)) { + &fail("$0: duplicate signed $label header, can't verify\n") + if $$dup{$label}; + $message .= "$label: "; + $message .= "$$header{$label}" if $$header{$label}; + $message .= "\n"; + } + $message .= "\n"; # end of headers + + while (<>) { # read body lines + if ($nntp_format) { + # Check for end of article; some news servers (eg, Highwind's + # "Breeze") include the dot-CRLF of the NNTP protocol in the article + # data passed to this script. + last if $_ eq ".\r\n"; + + # Remove NNTP encoding. + s/^\.\./\./; + s/\r\n$/\n/; + } + $message .= $_; + } + + # Strip off all trailing whitespaces for compatibility with the way that + # pgpverify used to work, using attached signatures. + $message =~ s/[ \t]+\n/\n/g; + + return ($message, $signature, $version); +} + +# Check a detached signature for given data. Takes a signature block (in +# the form of an ASCII-armored string with embedded newlines), a version +# number (which may be undef), and the message. We return an exit status +# and the key id if the signature is verified. 0 means good signature, 1 +# means bad data, 2 means an unknown signer, and 3 means a bad signature. +# In the event of an error, we report with errmsg. +# +# This code is taken almost verbatim from PGP::Sign except for the code to +# figure out the PGP style. +sub pgp_verify { + my ($signature, $version, $message) = @_; + chomp $signature; + + # Ignore SIGPIPE, since we're going to be talking to PGP. + local $SIG{PIPE} = 'IGNORE'; + + # Set the PGP style based on whether $gpgv is set. + my $pgpstyle = ($gpgv ? 'GPG' : 'PGP2'); + + # Because this is a detached signature, we actually need to save both + # the signature and the data to files and then run PGP on the signature + # file to make it verify the signature. Because this is a detached + # signature, though, we don't have to do any data mangling, which makes + # our lives much easier. It would be nice to do this without having to + # use temporary files, but I don't see any way to do so without running + # into mangling problems. + # + # PGP v5 *requires* there be some subheader or another. *sigh*. So we + # supply one if Version isn't given. :) + my $umask = umask 077; + my $filename = $tmpdir . '/pgp' . time . '.' . 
$$; + my $sigfile = new FileHandle "$filename.asc", O_WRONLY|O_EXCL|O_CREAT; + unless ($sigfile) { + &errmsg ("Unable to open temp file $filename.asc: $!\n"); + return (255, undef); + } + if ($pgpstyle eq 'PGP2') { + print $sigfile "-----BEGIN PGP MESSAGE-----\n"; + } else { + print $sigfile "-----BEGIN PGP SIGNATURE-----\n"; + } + if (defined $version) { + print $sigfile "Version: $version\n"; + } elsif ($pgpstyle ne 'GPG') { + print $sigfile "Comment: Use GnuPG; it's better :)\n"; + } + print $sigfile "\n", $signature; + if ($pgpstyle eq 'PGP2') { + print $sigfile "\n-----END PGP MESSAGE-----\n"; + } else { + print $sigfile "\n-----END PGP SIGNATURE-----\n"; + } + close $sigfile; + + # Signature saved. Now save the actual message. + my $datafile = new FileHandle "$filename", O_WRONLY|O_EXCL|O_CREAT; + unless ($datafile) { + &errmsg ("Unable to open temp file $filename: $!\n"); + unlink "$filename.asc"; + return (255, undef); + } + print $datafile $message; + close $datafile; + + # Figure out what command line we'll be using. + my @command; + if ($pgpstyle eq 'GPG') { + @command = ($gpgv, qw/--quiet --status-fd=1 --logger-fd=1/); + } else { + @command = ($pgp, '+batchmode', '+language=en'); + } + + # Now, call PGP to check the signature. Because we've written + # everything out to a file, this is actually fairly simple; all we need + # to do is grab stdout. PGP prints its banner information to stderr, so + # just ignore stderr. Set PGPPATH if desired. + # + # For GnuPG, use pubring.gpg if an explicit keyring was configured or + # found. Otherwise, use trustedkeys.gpg in the default keyring location + # if found and non-zero, or fall back on pubring.gpg. This is + # definitely not the logic that I would use if writing this from + # scratch, but it has the most backward compatibility. + local $ENV{PGPPATH} = $keyring if ($keyring && $pgpstyle ne 'GPG'); + if ($pgpstyle eq 'GPG') { + if ($keyring) { + push (@command, "--keyring=$keyring/pubring.gpg"); + } else { + my $home = $ENV{GNUPGHOME} || $ENV{HOME}; + $home .= '/.gnupg' if $home; + if ($home && ! -s "$home/trustedkeys.gpg" && -f "$home/pubring.gpg") { + push (@command, "--keyring=pubring.gpg"); + } + } + } + push (@command, "$filename.asc"); + push (@command, $filename); + my $input = new FileHandle; + my $output = new FileHandle; + my $pid = eval { open3 ($input, $output, $output, @command) }; + if ($@) { + &errmsg ($@); + &errmsg ("Execution of $command[0] failed.\n"); + unlink ($filename, "$filename.asc"); + return (255, undef); + } + close $input; + + # Check for the message that gives us the key status and return the + # appropriate thing to our caller. This part is a zoo due to all of the + # different formats used. GPG has finally done the right thing and + # implemented a separate status stream with parseable data. + # + # MIT PGP 2.6.2 and PGP 6.5.2: + # Good signature from user "Russ Allbery ". + # ViaCrypt PGP 4.0: + # Good signature from user: Russ Allbery + # PGP 5.0: + # Good signature made 1999-02-10 03:29 GMT by key: + # 1024 bits, Key ID 0AFC7476, Created 1999-02-10 + # "Russ Allbery " + # + # Also, PGP v2 prints out "Bad signature" while PGP v5 uses "BAD + # signature", and PGP v6 reverts back to "Bad signature". 
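# Editorial illustration, not from the original script: the GnuPG status
# stream matched below uses one-line machine-readable records; with an
# assumed key ID and user ID they look roughly like
#
#   [GNUPG:] GOODSIG 0123456789ABCDEF Some Hierarchy Admin <admin@example.org>
#   [GNUPG:] BADSIG 0123456789ABCDEF Some Hierarchy Admin <admin@example.org>
#   [GNUPG:] NO_PUBKEY 0123456789ABCDEF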
+ local $_; + local $/ = ''; + my $signer; + my $ok = 255; + while (<$output>) { + print if $test; + if ($pgpstyle eq 'GPG') { + if (/\[GNUPG:\]\s+GOODSIG\s+\S+\s+(\S+)/) { + $ok = 0; + $signer = $1; + } elsif (/\[GNUPG:\]\s+NODATA/ || /\[GNUPG:\]\s+UNEXPECTED/) { + $ok = 1; + } elsif (/\[GNUPG:\]\s+NO_PUBKEY/) { + $ok = 2; + } elsif (/\[GNUPG:\]\s+BADSIG\s+/) { + $ok = 3; + } + } else { + if (/^Good signature from user(?::\s+(.*)|\s+\"(.*)\"\.)$/m) { + $signer = $+; + $ok = 0; + last; + } elsif (/^Good signature made .* by key:\n.+\n\s+\"(.*)\"/m) { + $signer = $1; + $ok = 0; + last; + } elsif (/^\S+: Good signature from \"(.*)\"/m) { + $signer = $1; + $ok = 0; + last; + } elsif (/^(?:\S+: )?Bad signature /im) { + $ok = 3; + last; + } + } + } + close $input; + waitpid ($pid, 0); + unlink ($filename, "$filename.asc"); + umask $umask; + return ($ok, $signer || ''); +} + +# Log an error message, attempting syslog first based on $syslog_method +# and falling back on stderr. +sub errmsg { + my ($message) = @_; + $message =~ s/\n$//; + + my $date = ''; + if ($log_date) { + $date = strftime ('%Y-%m-%d %T ', localtime); + } + + if ($syslog_method && $] >= 5.006) { + eval "use Sys::Syslog"; + $syslog_method = 'internal'; + } + + if ($syslog_method eq "logger") { + my @loggers = ('/usr/ucb/logger', '/usr/bin/logger', + '/usr/local/bin/logger'); + my $try; + foreach $try (@loggers) { + if (-x $try) { + $syslog_method = $try; + last; + } + } + $syslog_method = '' if $syslog_method eq 'logger'; + } + + if ($syslog_method ne '' && $syslog_method !~ m%/logger$%) { + eval "use Sys::Syslog"; + } + + if ($@ || $syslog_method eq '') { + warn $date, "$0: trying to use Perl's syslog: $@\n" if $@; + warn $date, $message, "\n"; + warn $date, "... while processing $messageid\n" + if $messageid; + + } else { + $message .= " processing $messageid" + if $messageid; + + if ($syslog_method =~ m%/logger$%) { + unless (system($syslog_method, "-i", "-p", + "$syslog_facility.$syslog_level", $message) == 0) { + if ($? >> 8) { + warn $date, "$0: $syslog_method exited status ", $? >> 8, "\n"; + } else { + warn $date, "$0: $syslog_method died on signal ", $? & 255, "\n"; + } + $syslog_method = ''; + &errmsg($message); + } + + } else { + # setlogsock arrived in Perl 5.004_03 to enable Sys::Syslog to use a + # Unix domain socket to talk to syslogd, which is the only way to do + # it when syslog runs with the -l switch. + if ($syslog_method eq "unix") { + if ($^O eq "dec_osf" && $] >= 5) { + eval 'sub Sys::Syslog::_PATH_LOG { "/dev/log" }'; + } + if ($] <= 5.00403 || ! eval "setlogsock('unix')") { + warn $date, "$0: cannot use syslog_method 'unix' on this system\n"; + $syslog_method = ''; + &errmsg($message); + return; + } + } + + # Unfortunately, there is no way to definitively know in this + # program if the message was logged. I wish there were a way to + # send a message to stderr if and only if the syslog attempt failed. + &openlog($0, 'pid', $syslog_facility); + &syslog($syslog_level, $_[0]); + &closelog(); + } + } +} + +sub fail { + &errmsg($_[0]); + exit 255; +} + +# Get a lock in essentially the same fashion as INN's shlock. return 1 on +# success, 0 for normal failure, -1 for abnormal failure. "normal +# failure" is that a lock is apparently in use by someone else. 
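# Editorial note, not part of the original script: the caller above simply
# retries until the lock is obtained,
#
#   until (&shlock($lock) > 0) {
#       sleep(2);
#   }
#
# so an abnormal failure (-1) is retried just like a busy lock (0).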
+sub shlock {
+  my ($file) = @_;
+  my ($ltmp, $pid);
+
+  unless (defined(&ENOENT)) {
+    eval "require POSIX qw(:errno_h)";
+    if ($@) {
+      # values taken from BSD/OS 3.1
+      sub ENOENT { 2 }
+      sub ESRCH { 3 }
+      sub EEXIST { 17 }
+    }
+  }
+
+  $ltmp = ($file =~ m%(.*/)%)[0] . "shlock$$";
+
+  # This should really attempt to use another temp name.
+  -e $ltmp && (unlink($ltmp) || return -1);
+
+  open(LTMP, ">$ltmp") || return -1;
+  print LTMP "$$\n" || (unlink($ltmp), return -1);
+  close(LTMP) || (unlink($ltmp), return -1);
+
+  if (!link($ltmp, $file)) {
+    if ($! == &EEXIST) {
+      if (open(LOCK, "<$file")) {
+        $pid = <LOCK>;
+        if ($pid =~ /^\d+$/ && (kill(0, $pid) == 1 || $! != &ESRCH)) {
+          unlink($ltmp);
+          return 0;
+        }
+
+        # OK, the pid in the lockfile is not a number or no longer exists.
+        close(LOCK);            # silent failure is ok here
+
+        # Remove the stale lock; give up if the unlink fails for any reason
+        # other than the file already being gone.
+        if (unlink($file) != 1 && $! != &ENOENT) {
+          unlink($ltmp);
+          return 0;
+        }
+
+      # Check if open failed for reason other than file no longer present.
+      } elsif ($! != &ENOENT) {
+        unlink($ltmp);
+        return -1;
+      }
+
+      # Either this process unlinked the lockfile because it was bogus, or
+      # between this process's link() and open() the other process holding
+      # the lock unlinked it.  This process can now try to acquire.
+      if (! link($ltmp, $file)) {
+        unlink($ltmp);
+        return $! == &EEXIST ? 0 : -1; # Maybe another proc grabbed the lock.
+      }
+
+    } else { # First attempt to link failed.
+      unlink($ltmp);
+      return 0;
+    }
+  }
+  unlink($ltmp);
+  return 1;
+}
+
+=head1 NAME
+
+pgpverify - Cryptographically verify Usenet control messages
+
+=head1 SYNOPSIS
+
+B<pgpverify> [B<-test>] < I<message>
+
+=head1 DESCRIPTION
+
+The B<pgpverify> program reads (on standard input) a Usenet control
+message that has been cryptographically signed using the B<signcontrol>
+program (or some other program that produces a compatible format).
+B<pgpverify> then uses a PGP implementation to determine who signed the
+control message.  If the control message has a valid signature,
+B<pgpverify> prints (to stdout) the user ID of the key that signed the
+message.  Otherwise, it exits with a non-zero exit status.
+
+If B<pgpverify> is installed as part of INN, it uses INN's configuration
+to determine what signature verification program to use, how to log
+errors, what temporary directory to use, and what keyring to use.
+Otherwise, all of those parameters can be set by editing the beginning of
+this script.
+
+By default, when running as part of INN, B<pgpverify> expects the PGP key
+ring to be found in I<pathetc>/pgp (as either F<pubring.pgp> or
+F<pubring.gpg> depending on whether PGP or GnuPG is used to verify
+signatures).  If that directory doesn't exist, it will fall back on using
+the default key ring, which is in a F<.pgp> or F<.gnupg> subdirectory of
+the running user's home directory.
+
+INN, when using GnuPG, configures B<pgpverify> to use B<gpgv>, which by
+default expects keys to be in a keyring named F<trustedkeys.gpg>, since it
+doesn't implement trust checking directly.  B<gpgv> uses that file if
+present but falls back to F<pubring.gpg> if it's not found.  This bypasses
+the trust model for checking keys, but is compatible with the way that
+B<pgpverify> used to behave.  Of course, if a keyring is found in
+I<pathetc>/pgp or configured at the top of the script, that overrides all
+of this behavior.
+
+=head1 OPTIONS
+
+The B<-test> flag causes B<pgpverify> to print out the input that it is
+passing to PGP (which is a reconstructed version of the input that
+supposedly created the control message) as well as the output from PGP's
+analysis of the message.
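As a rough editorial illustration (not part of the original manual page), a
caller might run pgpverify on a saved control message and use its output and
exit status like this; the file name and error handling are assumptions:

    # Hypothetical wrapper around pgpverify; the path is an assumption.
    my $signer = `pgpverify < /tmp/control.msg`;
    my $status = $? >> 8;
    chomp $signer;
    if ($status == 0) {
        print "control message signed by $signer\n";
    } else {
        print "verification failed (exit status $status)\n";
    }

The meaning of each exit status is listed in the next section.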
+
+=head1 EXIT STATUS
+
+B<pgpverify> may exit with the following statuses:
+
+=over 4
+
+=item 0Z<>
+
+The control message had a good PGP signature.
+
+=item 1
+
+The control message had no PGP signature.
+
+=item 2
+
+The control message had an unknown PGP signature.
+
+=item 3
+
+The control message had a bad PGP signature.
+
+=item 255
+
+A problem occurred that was not directly related to the PGP analysis of
+the signature.
+
+=back
+
+=head1 ENVIRONMENT
+
+B<pgpverify> does not modify or otherwise alter the environment before
+invoking the B<pgp> or B<gpgv> program.  It is the responsibility of the
+person who installs B<pgpverify> to ensure that when B<pgp> or B<gpgv>
+runs, it has the ability to locate and read a PGP key file that contains
+the PGP public keys for the appropriate Usenet hierarchy administrators.
+B<pgpverify> can be pointed to an appropriate key ring by editing
+variables at the beginning of this script.
+
+=head1 NOTES
+
+Historically, Usenet news server administrators have configured their news
+servers to automatically honor Usenet control messages based on the
+originator of the control messages and the hierarchies for which the
+control messages applied.  For example, in the past, David Lawrence always
+issued control messages for the S<"Big 8"> hierarchies (comp, humanities,
+misc, news, rec, sci, soc, talk).  Usenet news administrators would
+configure their news server software to automatically honor newgroup and
+rmgroup control messages that originated from David Lawrence and applied
+to any of the S<"Big 8"> hierarchies.
+
+Unfortunately, Usenet news articles (including control messages) are
+notoriously easy to forge.  Soon, malicious users realized they could
+create or remove (at least temporarily) any S<"Big 8"> newsgroup they
+wanted by simply forging an appropriate control message in David
+Lawrence's name.  As Usenet became more widely used, forgeries became
+more common.
+
+The B<pgpverify> program was designed to allow Usenet news administrators
+to configure their servers to cryptographically verify control messages
+before automatically acting on them.  Under the B<pgpverify> system, a
+Usenet hierarchy maintainer creates a PGP public/private key pair and
+disseminates the public key.  Whenever the hierarchy maintainer issues a
+control message, he uses the B<signcontrol> program to sign the control
+message with the PGP private key.  Usenet news administrators configure
+their news servers to run the B<pgpverify> program on the appropriate
+control messages, and take action based on the PGP key User ID that signed
+the control message, not the name and address that appear in the control
+message's From: or Sender: headers.
+
+Thus, appropriate use of the B<signcontrol> and B<pgpverify> programs
+essentially eliminates the possibility of malicious users forging Usenet
+control messages that sites will act upon, as such users would have to
+obtain the PGP private key in order to forge a control message that would
+pass the cryptographic verification step.  If the hierarchy administrators
+properly protect their PGP private keys, the only way a malicious user
+could forge a validly-signed control message would be by breaking the
+public key encryption algorithm, which (at least at this time) is believed
+to be prohibitively difficult for PGP keys of a sufficient bit length.
+
+=head1 HISTORY
+
+B<pgpverify> was written by David C Lawrence <tale@isc.org>.  Manual page
+provided by James Ralston.  It is currently maintained by Russ Allbery.
+
+=head1 COPYRIGHT AND LICENSE
+
+David Lawrence wrote: "Our lawyer told me to include the following.  The
+upshot of it is that you can use the software for free as much as you
+like."
+ +Copyright (c) 1996 UUNET Technologies, Inc. +All rights reserved. + +Redistribution and use in source and binary forms, with or without +modification, are permitted provided that the following conditions are +met: + +=over 4 + +=item 1. + +Redistributions of source code must retain the above copyright notice, +this list of conditions and the following disclaimer. + +=item 2. + +Redistributions in binary form must reproduce the above copyright notice, +this list of conditions and the following disclaimer in the documentation +and/or other materials provided with the distribution. + +=item 3. + +All advertising materials mentioning features or use of this software must +display the following acknowledgement: + + This product includes software developed by UUNET Technologies, Inc. + +=item 4. + +The name of UUNET Technologies ("UUNET") may not be used to endorse or +promote products derived from this software without specific prior written +permission. + +=back + +THIS SOFTWARE IS PROVIDED BY UUNET "AS IS" AND ANY EXPRESS OR IMPLIED +WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF +MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN +NO EVENT SHALL UUNET BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, +SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED +TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR +PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF +LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING +NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS +SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. + +=head1 SEE ALSO + +gpgv(1), pgp(1). + +L is where the most recent versions of +B and B live, along with PGP public keys used for +hierarchy administration. + +=cut + +# Local variables: +# cperl-indent-level: 2 +# fill-column: 74 +# End: diff --git a/control/signcontrol.in b/control/signcontrol.in new file mode 100644 index 0000000..c1951e7 --- /dev/null +++ b/control/signcontrol.in @@ -0,0 +1,600 @@ +#! /usr/bin/perl -w +# written April 1996, tale@isc.org (David C Lawrence) +# Currently maintained by Russ Allbery +# Version 1.8, 2003-07-06 +# +# Changes from 1.6 -> 1.8 +# -- Added support for GnuPG. +# -- Replace signing code with code from PGP::Sign that generates detached +# signatures instead. Otherwise, GnuPG signatures with DSA keys could +# not be verified. Should still work the same as before with RSA keys. +# -- Thanks to new signing code, no longer uses a temporary file. +# -- Only lock when using PGP; GnuPG shouldn't need it. +# +# Changes from 1.5 -> 1.6 +# -- eliminated subprocess use (except pgp, of course). +# -- interlock against competing signing processes. +# -- allow optional headers; see $use_or_add. +# -- added simple comments about why particular headers are signed. +# -- made error messages a tad more helpful for situations when it is hard +# to know what message was trying to be signed (such as via an "at" +# job). +# -- set $action, $group, $moderated to "" to prevent unusued variable +# warnings in the event a Control header can't be parsed. +# -- moved assignment of $pgpend out of loop. +# +# Changes from 1.4 -> 1.5 +# -- need to require Text::Tabs to get 'expand' for tabs in checkgroups. +# +# Changes from 1.3 -> 1.4 +# -- added checkgroups checking. +# -- added group name in several error messages (for help w/batch +# processing). +# -- disabled moderator address checking. 
+# -- adjusted newsgroups line (ie, tabbing fixed) now correctly
+#    substituted into control message.
+#
+# Changes from 1.2.3 -> 1.3
+# -- skip minor pgp signature headers like "charset:" after "version:"
+#    header and until the empty line that starts the base64 signature block.
+
+# CONFIGURATION
+
+# PGP variables.
+#
+# $pgp can be set to the path to GnuPG to use GnuPG instead.  The program
+# name needs to end in gpg so that signcontrol knows GnuPG is being used.
+#
+# STORING YOUR PASS PHRASE IN A FILE IS A POTENTIAL SECURITY HOLE.
+# make sure you know what you're doing if you do it.
+# if you don't use pgppassfile, you can only use this script interactively.
+# if you DO use pgppassfile, it is possible that someone could steal
+# your passphrase either by gaining access to the file or by seeing
+# the environment of a running pgp program.
+#
+# $pgplock is used because pgp does not guard itself against concurrent
+# read/write access to its randseed.bin file.  A writable file is needed;
+# the default value is to use the .pgp/config.txt file in the home
+# directory of the user running the program.  Note that this will only
+# work to lock against other instances of signcontrol, not all pgp uses.
+# $pgplock is not used if $pgp ends in 'gpg' since GnuPG doesn't need
+# this.
+$pgpsigner = 'INSERT_YOUR_PGP_USERID';
+$pgppassfile = '';              # file with pass phrase for $pgpsigner
+$pgp = "/usr/local/bin/pgp";
+$pgpheader = "X-PGP-Sig";
+$pgplock = (getpwuid($<))[7] . '/.pgp/config.txt';
+
+# this program is strict about always wanting to be consistent about what
+# headers appear in the control messages.  the defaults for the
+# @... arrays are reasonable, but you should edit the force values.
+
+# these headers are acceptable in input, but they will be overwritten with
+# these values.  no sanity checking is done on what you put here.  also,
+# Subject: is forced to be the Control header prepended with "cmsg".  also,
+# Newsgroups: is forced to be just the group being added/removed.
+# (but is taken as-is for checkgroups)
+$force{'Path'} = 'bounce-back';
+$force{'From'} = 'YOUR_ADDRESS_AND_NAME';
+$force{'Approved'} = 'ADDRESS_FOR_Approved_HEADER';
+$force{'X-Info'}='ftp://ftp.isc.org/pub/pgpcontrol/README.html'
+  . "\n\t"
+  . 'ftp://ftp.isc.org/pub/pgpcontrol/README';
+
+# these headers are acceptable in input, or if not present then will be
+# created with the given value.  None are enabled by default, because they
+# should not be necessary.  Setting one to a null string will pass through
+# any instance of it found in the input, but not generate one if it is
+# missing.  If you set any $use_or_add{} variables, you must also put them
+# in @orderheaders below.
+#
+# Note that Distribution nearly never works correctly, so use it only if
+# you are really sure the propagation of the article will be limited as
+# you intend.  This normally means that you control all servers the
+# distribution will go to with an iron fist.
+#
+# $use_or_add{'Reply-To'} = 'YOUR_REPLY_ADDRESS';
+# $use_or_add{'Organization'} = 'YOUR_ORGANIZATION';
+# $use_or_add{'Distribution'} = 'MESSAGE_DISTRIBUTION';
+
+# host for message-id; this could be determined automatically based on
+# where it is run, but consistency is the goal here
+$id_host = 'FULL_HOST_NAME';
+
+# headers to sign.  Sender is included because non-PGP authentication uses
+# it.  The following should always be signed:
+#   Subject -- some older news systems use it to identify the control action.
+#   Control -- most news systems use this to determine what to do.
+#   Message-ID -- guards against replay attacks.
+#   Date -- guards against replay attacks.
+#   From -- used by news systems as part of authenticating the message.
+#   Sender -- used by news systems as part of authenticating the message.
+@signheaders = ('Subject', 'Control', 'Message-ID', 'Date', 'From', 'Sender');
+
+# headers to remove from real headers of final message.
+# If it is a signed header, it is signed with an empty value.
+# set to () if you do not want any headers removed.
+@ignoreheaders = ('Sender');
+
+# headers that will appear in final message, and their order of
+# appearance.  all _must_ be set, either in input or via the $force{} and
+# $use_or_add{} variables above.
+# (exceptions: Date, Lines, Message-ID are computed by this program)
+# if header is in use_or_add with a null value, it will not appear in output.
+# several are required by the news article format standard; if you remove
+# these, your article will not propagate:
+#   Path, From, Newsgroups, Subject, Message-ID, Date
+# if you take out these, your control message is not very useful:
+#   Control, Approved
+# any headers in @ignoreheaders also in @orderheaders are silently dropped.
+# any non-null header in the input but not in @orderheaders or @ignoreheaders
+# is an error.
+# null headers are silently dropped.
+@orderheaders =
+  ('Path', 'From', 'Newsgroups', 'Subject', 'Control', 'Approved',
+   'Message-ID', 'Date', 'Lines', 'X-Info', $pgpheader);
+
+# this program tries to help you out by not letting you sign erroneous
+# names, especially ones that are so erroneous they run afoul of naming
+# standards.
+#
+# set to match only hierarchies you will use it on
+# include no '|' for a single hierarchy (eg, "$hierarchies = 'uk';").
+
+$hierarchies = 'HIERARCHIES';
+
+# the draft news article format standard says:
+#    "subsequent components SHOULD begin with a letter"
+# where "SHOULD" means:
+#    means that the item is a strong recommendation: there may be
+#    valid reasons to ignore it in unusual circumstances, but
+#    this should be done only after careful study of the full
+#    implications and a firm conclusion that it is necessary,
+#    because there are serious disadvantages to doing so.
+# as opposed to "MUST" which means:
+#    means that the item is an absolute requirement of the specification
+# MUST is preferred, but might not be acceptable if you have legacy
+# newsgroups that have name components that do not begin with a letter,
+# like news.announce.newgroups does with comp.sys.3b1 and 17 other groups.
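# Editorial note, not in the original script: as an example of the
# difference, a legacy group such as "comp.sys.3b1" (the component "3b1"
# begins with a digit) is accepted when $start_component_with_letter is
# 'SHOULD' but rejected when it is 'MUST', assuming "comp" is one of the
# hierarchies listed in $hierarchies above.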
+
+$start_component_with_letter = 'MUST';
+
+## END CONFIGURATION
+
+use Fcntl qw(F_SETFD);
+use FileHandle;
+use IPC::Open3 qw(open3);
+use POSIX qw(setlocale strftime LC_TIME);
+use Text::Tabs;                 # to get 'expand' for tabs in checkgroups
+
+$0 =~ s#^.*/##;
+
+die "Usage: $0 < message\n" if @ARGV > 0;
+
+umask(0022);                    # flock needs a writable file, if we create it
+if ($pgp !~ /gpg$/) {
+  open(LOCK, ">>$pgplock") || die "$0: open $pgplock: $!, exiting\n";
+  flock(LOCK, 2);               # block until locked
+}
+
+&setgrouppat;
+
+$die = '';
+
+&readhead;
+&readbody;
+
+if ($die) {
+  if ($group) {
+    die "$0: ERROR PROCESSING ${action}group $group:\n", $die;
+  } elsif ($action eq 'check') {
+    die "$0: ERROR PROCESSING checkgroups:\n", $die;
+  } elsif ($header{'Subject'}) {
+    die "$0: ERROR PROCESSING Subject: $header{'Subject'}\n", $die;
+  } else {
+    die $die;
+  }
+}
+
+&signit;
+
+if ($pgp !~ /gpg$/) {
+  close(LOCK) || warn "$0: close $pgplock: $!\n";
+}
+exit 0;
+
+sub
+setgrouppat
+
+{
+  my ($hierarchy, $plain_component, $no_component);
+  my ($must_start_letter, $should_start_letter);
+  my ($eval);
+
+  # newsgroup name checks based on RFC 1036bis (not including encodings) rules:
+  #   "component MUST contain at least one letter"
+  #   "[component] MUST not contain uppercase letters"
+  #   "[component] MUST begin with a letter or digit"
+  #   "[component] MUST not be longer than 14 characters"
+  #   "sequences 'all' and 'ctl' MUST not be used as components"
+  #   "first component MUST begin with a letter"
+  # and enforcing "subsequent components SHOULD begin with a letter" as MUST
+  # and enforcing at least a 2nd level group (can't use to newgroup "general")
+  #
+  # DO NOT COPY THIS PATTERN BLINDLY TO OTHER APPLICATIONS!
+  # It has special construction based on the pattern it is finally used in.
+
+  $plain_component = '[a-z][-+_a-z\d]{0,13}';
+  $no_component = '(.*\.)?(all|ctl)(\.|$)';
+  $must_start_letter = '(\.' . $plain_component . ')+';
+  $should_start_letter = '(\.(?=\d*[a-z])[a-z\d]+[-+_a-z\d]{0,13})+';
+
+  $grouppat = "(?!$no_component)($hierarchies)";
+  if ($start_component_with_letter eq 'SHOULD') {
+    $grouppat .= $should_start_letter;
+  } elsif ($start_component_with_letter eq 'MUST') {
+    $grouppat .= $must_start_letter;
+  } else {
+    die "$0: unknown value configured for \$start_component_with_letter\n";
+  }
+
+  foreach $hierarchy (split(/\|/, $hierarchies)) {
+    die "$0: hierarchy name $hierarchy not standards-compliant\n"
+      if $hierarchy !~ /^$plain_component$/o;
+  }
+
+  $eval = "\$_ = 'test'; /$grouppat/;";
+  eval $eval;
+  die "$0: bad regexp for matching group names:\n  $@" if $@;
+}
+
+sub
+readhead
+
+{
+  my($head, $label, $value);
+  local($_, $/);
+
+  $/ = "";
+  $head = <STDIN>;              # get the whole news header
+  $die .= "$0: continuation lines in headers not allowed\n"
+    if $head =~ s/\n[ \t]+/ /g; # rejoin continued lines
+
+  for (split(/\n/, $head)) {
+    if (/^(\S+): (.*)/) {
+      $label = $1;
+      $value = $2;
+
+      $die .= "$0: duplicate header $label\n" if $header{$label};
+
+      $header{$label} = $value;
+      $header{$label} =~ s/^\s+//;
+      $header{$label} =~ s/\s+$//;
+    } elsif (/^$/) {
+      ;                         # the empty line separator(s)
+    } else {
+      $die .= "$0: non-header line:\n  $_\n";
+    }
+  }
+
+  $header{'Message-ID'} = '<' . time . ".$$\@$id_host>";
+
+  setlocale(LC_TIME, "C");
+  $header{'Date'} = strftime("%a, %d %h %Y %T -0000", gmtime);
+
+  for (@ignoreheaders) {
+    $die .= "ignored header $_ also has forced value set\n" if $force{$_};
+    $header{$_} = '';
+  }
+
+  for (@orderheaders) {
+    $header{$_} = $force{$_} if defined($force{$_});
+    next if /^(Lines|\Q$pgpheader\E)$/; # these are set later
+    unless ($header{$_}) {
+      if (defined($use_or_add{$_})) {
+        $header{$_} = $use_or_add{$_} if $use_or_add{$_} ne '';
+      } else {
+        $die .= "$0: missing $_ header\n";
+      }
+    }
+  }
+
+  $action = $group = $moderated = "";
+  if ($header{'Control'}) {
+    if ($header{'Control'} =~ /^(new)group (\S+)( moderated)?$/o ||
+        $header{'Control'} =~ /^(rm)group (\S+)()$/o ||
+        $header{'Control'} =~ /^(check)groups()()$/o) {
+      ($action, $group, $moderated) = ($1, $2, $3);
+      $die .= "$0: group name $group is not standards-compliant\n"
+        if $group !~ /^$grouppat$/ && $action eq 'new';
+      $die .= "$0: no group to rmgroup on Control: line\n"
+        if ! $group && $action eq 'rm';
+      $header{'Subject'} = "cmsg $header{'Control'}";
+      $header{'Newsgroups'} = $group unless $action eq 'check';
+    } else {
+      $die .= "$0: bad Control format: $header{'Control'}\n";
+    }
+  } else {
+    $die .= "$0: can't verify message content; missing Control header\n";
+  }
+}
+
+sub
+readbody
+
+{
+  local($_, $/);
+  local($status, $ngline, $fixline, $used, $desc, $mods);
+
+  undef $/;
+  $body = $_ = <STDIN>;
+  $header{'Lines'} = $body =~ tr/\n/\n/ if $body;
+
+  # the following tests are based on the structure of a
+  # news.announce.newgroups newgroup message; even if you comment out the
+  # "first line" test, please leave the newsgroups line and moderators
+  # checks
+  if ($action eq 'new') {
+    $status = $moderated ? 'a\smoderated' : 'an\sunmoderated';
+    $die .= "$0: nonstandard first line in body for $group\n"
+      if ! /^\Q$group\E\sis\s$status\snewsgroup\b/;
+
+    my $intro = "For your newsgroups file:\n";
+    $ngline =
+      (/^$intro\Q$group\E[ \t]+(.+)\n(\n|\Z(?!\n))/mi)[0];
+    if ($ngline) {
+      $_ = $group;
+      $desc = $1;
+      $fixline = $_;
+      $fixline .= "\t" x ((length) > 23 ? 1 : (4 - ((length) + 1) / 8));
+      $used = (length) < 24 ? 24 : (length) + (8 - (length) % 8);
+      $used--;
+      $desc =~ s/ \(Moderated\)//i;
+      $desc =~ s/\s+$//;
+      $desc =~ s/\w$/$&./;
+      $die .= "$0: $group description too long\n" if $used + length($desc) > 80;
+      $fixline .= $desc;
+      $fixline .= ' (Moderated)' if $moderated;
+      $body =~ s/^$intro(.+)/$intro$fixline/mi;
+    } else {
+      $die .= "$0: $group newsgroup line not formatted correctly\n";
+    }
+    # moderator checks are disabled; some sites were trying to
+    # automatically maintain aliases based on this, which is bad policy.
+ if (0 && $moderated) { + $die .= "$0: $group submission address not formatted correctly\n" + if $body !~ /\nGroup submission address: ?\S+@\S+\.\S+\n/m; + $mods = "( |\n[ \t]+)\\([^)]+\\)\n\n"; + $die .= "$0: $group contact address not formatted correctly\n" + if $body !~ /\nModerator contact address: ?\S+@\S+\.\S+$mods/m; + } + } + # rmgroups have freeform bodies + + # checkgroups have structured bodies + if ($action eq 'check') { + for (split(/\n/, $body)) { + my ($group, $description) = /^(\S+)\t+(.+)/; + $die .= "$0: no group:\n $_\n" unless $group; + $die .= "$0: no description:\n $_\n" unless $description; + $die .= "$0: bad group name \"$group\"\n" if $group !~ /^$grouppat$/; + $die .= "$0: tab in description\n" if $description =~ /\t/; + s/ \(Moderated\)$//; + $die .= "$0: $group line too long\n" if length(expand($_)) > 80; + } + } +} + +# Create a detached signature for the given data. The first argument +# should be a key id, the second argument the PGP passphrase (which may be +# null, in which case PGP will prompt for it), and the third argument +# should be the complete message to sign. +# +# In a scalar context, the signature is returned as an ASCII-armored block +# with embedded newlines. In array context, a list consisting of the +# signature and the PGP version number is returned. Returns undef in the +# event of an error, and the error text is then stored in @ERROR. +# +# This function is taken almost verbatim from PGP::Sign except the PGP +# style is determined from the name of the program used. +sub pgp_sign { + my ($keyid, $passphrase, $message) = @_; + + # Ignore SIGPIPE, since we're going to be talking to PGP. + local $SIG{PIPE} = 'IGNORE'; + + # Determine the PGP style. + my $pgpstyle = 'PGP2'; + if ($pgp =~ /pgps$/) { $pgpstyle = 'PGP5' } + elsif ($pgp =~ /gpg$/) { $pgpstyle = 'GPG' } + + # Figure out what command line we'll be using. PGP v6 and PGP v2 use + # compatible syntaxes for what we're trying to do. PGP v5 would have, + # except that the -s option isn't valid when you call pgps. *sigh* + my @command; + if ($pgpstyle eq 'PGP5') { + @command = ($pgp, qw/-baft -u/, $keyid); + } elsif ($pgpstyle eq 'GPG') { + @command = ($pgp, qw/--detach-sign --armor --textmode -u/, $keyid, + qw/--force-v3-sigs --pgp2/); + } else { + @command = ($pgp, qw/-sbaft -u/, $keyid); + } + + # We need to send the password to PGP, but we don't want to use either + # the command line or an environment variable, since both may expose us + # to snoopers on the system. So we create a pipe, stick the password in + # it, and then pass the file descriptor to PGP. PGP wants to know about + # this in an environment variable; GPG uses a command-line flag. + # 5.005_03 started setting close-on-exec on file handles > $^F, so we + # need to clear that here (but ignore errors on platforms where fcntl or + # F_SETFD doesn't exist, if any). + # + # Make sure that the file handles are created outside of the if + # statement, since otherwise they leave scope at the end of the if + # statement and are automatically closed by Perl. 
+  my $passfh = new FileHandle;
+  my $writefh = new FileHandle;
+  local $ENV{PGPPASSFD};
+  if ($passphrase) {
+    pipe ($passfh, $writefh);
+    eval { fcntl ($passfh, F_SETFD, 0) };
+    print $writefh $passphrase;
+    close $writefh;
+    if ($pgpstyle eq 'GPG') {
+      push (@command, '--batch', '--passphrase-fd', $passfh->fileno);
+    } else {
+      push (@command, '+batchmode');
+      $ENV{PGPPASSFD} = $passfh->fileno;
+    }
+  }
+
+  # Fork off a pgp process that we're going to be feeding data to, and tell
+  # it to just generate a signature using the given key id and pass phrase.
+  my $pgp = new FileHandle;
+  my $signature = new FileHandle;
+  my $errors = new FileHandle;
+  my $pid = eval { open3 ($pgp, $signature, $errors, @command) };
+  if ($@) {
+    @ERROR = ($@, "Execution of $command[0] failed.\n");
+    return undef;
+  }
+
+  # Write the message to the PGP process.  Strip all trailing whitespace
+  # for compatibility with older pgpverify and attached signature
+  # verification.
+  $message =~ s/[ \t]+\n/\n/g;
+  print $pgp $message;
+
+  # All done.  Close the pipe to PGP, clean up, and see if we succeeded.
+  # If not, save the error output and return undef.
+  close $pgp;
+  local $/ = "\n";
+  my @errors = <$errors>;
+  my @signature = <$signature>;
+  close $signature;
+  close $errors;
+  close $passfh if $passphrase;
+  waitpid ($pid, 0);
+  if ($? != 0) {
+    @ERROR = (@errors, "$command[0] returned exit status $?\n");
+    return undef;
+  }
+
+  # Now, clean up the returned signature and return it, along with the
+  # version number if desired.  PGP v2 calls this a PGP MESSAGE, whereas
+  # PGP v5 and v6 and GPG all (more correctly) call it a PGP SIGNATURE,
+  # so accept either.
+  while ((shift @signature) !~ /-----BEGIN PGP \S+-----\n/) {
+    unless (@signature) {
+      @ERROR = ("No signature from PGP (command not found?)\n");
+      return undef;
+    }
+  }
+  my $version;
+  while ($signature[0] ne "\n" && @signature) {
+    $version = $1 if ((shift @signature) =~ /^Version:\s+(.*?)\s*$/);
+  }
+  shift @signature;
+  pop @signature;
+  $signature = join ('', @signature);
+  chomp $signature;
+  undef @ERROR;
+  return wantarray ? ($signature, $version) : $signature;
+}
+
+sub
+signit
+
+{
+  my($head, $header, $signheaders, $pgpflags, $pgpbegin, $pgpend);
+
+  # Form the message to be signed.
+  $signheaders = join(",", @signheaders);
+  $head = "X-Signed-Headers: $signheaders\n";
+  foreach $header (@signheaders) {
+    $head .= "$header: $header{$header}\n";
+  }
+  my $message = "$head\n$body";
+
+  # Get the passphrase if available.
+  my $passphrase;
+  if ($pgppassfile && -f $pgppassfile) {
+    $pgppassfile =~ s%^(\s)%./$1%;
+    if (open (PGPPASS, "< $pgppassfile\0")) {
+      $passphrase = <PGPPASS>;
+      close PGPPASS;
+      chomp $passphrase;
+    }
+  }
+
+  # Sign the message, getting the signature and PGP version number.
+  my ($signature, $version) = pgp_sign ($pgpsigner, $passphrase, $message);
+  unless ($signature) {
+    die "@ERROR\n$0: could not generate signature\n";
+  }
+
+  # GnuPG has version numbers containing spaces, which breaks our header
+  # format.  Find just some portion that contains a digit.
+  ($version) = ($version =~ /(\S*\d\S*)/);
+
+  # Put the signature into the headers.
+ $signature =~ s/^/\t/mg; + $header{$pgpheader} = "$version $signheaders\n$signature"; + + for (@ignoreheaders) { + delete $header{$_} if defined $header{$_}; + } + + $head = ''; + foreach $header (@orderheaders) { + $head .= "$header: $header{$header}\n" if $header{$header}; + delete $header{$header}; + } + + foreach $header (keys %header) { + die "$0: unexpected header $header left in header array\n"; + } + + print STDOUT $head; + print STDOUT "\n"; + print STDOUT $body; +} + +# Our lawyer told me to include the following. The upshot of it is that +# you can use the software for free as much as you like. + +# Copyright (c) 1996 UUNET Technologies, Inc. +# All rights reserved. +# +# Redistribution and use in source and binary forms, with or without +# modification, are permitted provided that the following conditions +# are met: +# 1. Redistributions of source code must retain the above copyright +# notice, this list of conditions and the following disclaimer. +# 2. Redistributions in binary form must reproduce the above copyright +# notice, this list of conditions and the following disclaimer in the +# documentation and/or other materials provided with the distribution. +# 3. All advertising materials mentioning features or use of this software +# must display the following acknowledgement: +# This product includes software developed by UUNET Technologies, Inc. +# 4. The name of UUNET Technologies ("UUNET") may not be used to endorse or +# promote products derived from this software without specific prior +# written permission. +# +# THIS SOFTWARE IS PROVIDED BY UUNET ``AS IS'' AND ANY EXPRESS OR +# IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED +# WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE +# ARE DISCLAIMED. IN NO EVENT SHALL UUNET BE LIABLE FOR ANY DIRECT, +# INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES +# (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR +# SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) +# HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, +# STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) +# ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED +# OF THE POSSIBILITY OF SUCH DAMAGE. + +# Local variables: +# cperl-indent-level: 2 +# fill-column: 74 +# End: diff --git a/debian/changelog b/debian/changelog new file mode 100644 index 0000000..838c48d --- /dev/null +++ b/debian/changelog @@ -0,0 +1,339 @@ +inn2 (2.4.5-5) unstable; urgency=medium + + * Added patches u_*: bug fixes from SVN chosen by the upstream maintainer: + - misc innreport bugs + - incorrect TLS error handling + - correctly initialize the status file IP address variables + - do not send a duplicate reply when TLS negotiation fails + - correct the permissions checking for XHDR and XPAT + - do not send a duplicate reply to XOVER/XHDR/XPAT in a empty group + * Install again our own sasl.conf with the correct paths. + * Document in README.Debian that STARTTLS and MODE READER do not work + together. (Closes: #503495) + * Added patch typo_inn_conf_man fixes a typo in inn.conf(5). + (Closes: #507256) + * Updated the md5.c license in debian/copyright. + + -- Marco d'Itri Mon, 15 Dec 2008 00:50:17 +0100 + +inn2 (2.4.5-4) unstable; urgency=low + + * Backported fixes from SVN: honour the Ad newsfeeds flag and create a + valid SV for the article body which will correctly match regexps. 
+ + -- Marco d'Itri Wed, 10 Sep 2008 01:36:04 +0200 + +inn2 (2.4.5-3) unstable; urgency=medium + + * Do not FTBFS with old versions of find. (Closes: #495508) + + -- Marco d'Itri Thu, 28 Aug 2008 04:21:48 +0200 + +inn2 (2.4.5-2) unstable; urgency=medium + + * Rebuilt with libdb4.6-dev. + + -- Marco d'Itri Sun, 27 Jul 2008 19:23:55 +0200 + +inn2 (2.4.5-1) unstable; urgency=low + + * New upstream STABLE release. + + -- Marco d'Itri Tue, 01 Jul 2008 01:18:29 +0200 + +inn2 (2.4.4r-1) unstable; urgency=low + + * New upstream STABLE release. (For real, this time.) + * On 32 bit architectures, build a new inn2-lfs package with Larges + Files Support enabled. (Closes: #433751) + * Enabled support for Kerberos. (Closes: #478775) + * Rebuilt with perl 5.10. (Closes: #479244) + * Removed usage of debconf. + + -- Marco d'Itri Sun, 11 May 2008 12:31:56 +0200 + +inn2 (2.4.4-1) unstable; urgency=low + + * New upstream STABLE snapshot. + + Rotates innfeed.log. (Closes: #407752) + + Make inews not fail if MODE READER fails because the connection has + not been authenticated yet. (Closes: #475059) + * Removed S from Default-Stop in the init script. (Closes: #471081) + * Updated debconf translation: pt. (Closes: #444720) + * Fixed a typo in the name of debian/inn2.logcheck.violations.ignore. + * Stop overwriting active.times(5) with a symlink, inn now has it. + * Fixed many minor issues pointed out by Julien Élie. (Closes: #455882) + * Removed patches merged upstream: daemonize-ovdb_init, + fix-crash-on-reload, hashfeeds. + * Remove /var/lib/news/ on purge. (Closes: #455104) + + -- Marco d'Itri Mon, 14 Apr 2008 22:01:48 +0200 + +inn2 (2.4.3+20070806-1) unstable; urgency=low + + * New upstream STABLE snapshot. + * Package converted to quilt. + * Added patch fix-crash-on-reload to fix segfaults when reloading + incoming.conf (Closes: #361073, #429802, #430190, #430191) + * Added patch daemonize-ovdb_init to make ovdb_init properly close + stdin/out/err when it becomes a daemon. (Closes: #433522) + * Added patch inndstart-sockets-v6only to suppress a startup warning + about an already opened socket. + * Fixed the bzip2 path in bunbatch. (Closes: #419429) + * Removed patches merged upstream: ckpasswd_use_libdb, fix_radius.conf, + innfeed-fix-getaddrinfo, innfeed-force-ipv4, libdb-4.4. + * Use --as-needed to not link superfluous libraries. + * New debconf translations: pt, nl. (Closes: #414921, #415511) + * Added a logcheck file. (Closes: #405536) + + -- Marco d'Itri Tue, 07 Aug 2007 16:35:06 +0200 + +inn2 (2.4.3-1) unstable; urgency=low + + * New upstream release. (Closes: #381415) + + Fixes nnrpd when "localmaxartsize: 0". (Closes: #357370) + * Removed support for his64v6 and cnfs64, which do not work anyway. + ****** I am looking for a co-maintainer interested in adding ****** + ****** support to build a inn2-lfs package. ****** + * Switched to libdb4.4. + * New debconf translations: vi, cs, sv. (Closes: #314245, #315211, #339811) + * Pre-Depends on debconf-2.0 too. (Closes: #331859) + * Added to innfeed support for a "force-ipv4" configuration option. + Based on a patch contributed by Henning Makholm. (Closes: #336264) + * Added to innfeed support for hashed feeds. + * pgpverify: try harder to find the home directory. (Closes: #307765) + * Moved nnrpd-ssl to the main package. + * Added support for libdb to ckpasswd. (Closes: #380644) + * Use FHS paths in the perl-nocem documentation. (Closes: #365639) + * Create /var/run/news in the init script if it does not exist. 
+ + -- Marco d'Itri Fri, 18 Aug 2006 11:19:21 +0200 + +inn2 (2.4.2-3) unstable; urgency=high + + * Fixed upgrades on systems with a non-default pathdb. (Closes: #306765) + * Added the showtoken program. (Closes: #306837) + + -- Marco d'Itri Sat, 14 May 2005 15:03:56 +0200 + +inn2 (2.4.2-2) unstable; urgency=medium + + * New upstream snapshot (20050407). + * Stop providing the inn package. (Closes: #288659) + * Made postinst continue when makehistory or makedbz fail. (Closes: #292167) + * Switched to libdb4.3. + + -- Marco d'Itri Fri, 8 Apr 2005 14:51:22 +0200 + +inn2 (2.4.2-1) unstable; urgency=low + + * New upstream release. + + Removed patch innreport_nnrpd-ssl. + + Fixed news2mail, CNFS buffers reporting. (Closes: #282664, #276819) + + -- Marco d'Itri Fri, 24 Dec 2004 17:05:33 +0100 + +inn2 (2.4.1+20040820-2) unstable; urgency=medium + + * New upstream snapshot (upstream/patches/20040820-to-20040929.diff). + + make Norbert Tretkowski happy. (Closes: #255324) + + fix inn2-ssl segfaults on ia64. (Closes: #270875) + * Conflict with inn and cnews instead of news-transport-system. + (Closes: #269874) + + -- Marco d'Itri Wed, 29 Sep 2004 17:24:18 +0200 + +inn2 (2.4.1+20040820-1) unstable; urgency=medium + + * New upstream snapshot. + + Fixes headers folding in the overview. (Closes: #190207) + + Fixes headers for articles mailed to moderators. (Closes: #249151) + * Added a default CA file name to sasl.conf. (Closes: #250201) + * New patch innreport_nnrpd-ssl: makes innreport correctly parse the + nnrpd-ssl log entries. (Closes: #250252) + * New debconf translations: de, ja. (Closes: #263030, #251100) + + -- Marco d'Itri Fri, 20 Aug 2004 19:32:20 +0200 + +inn2 (2.4.1+20040403-1) unstable; urgency=medium + + * New upstream snapshot. (Closes: #141750) + * Switched to db4.2. (Closes: #241584) + * Added catalan debconf template. (Closes: #236668) + * Removed the patches fix_bindaddress, default-storage.diff and + fix_reiserfs26.diff because they have been merged upstream. + * Removed the patch libdb41-fix.diff because it's not needed anymore. + + -- Marco d'Itri Sat, 3 Apr 2004 21:00:31 +0200 + +inn2 (2.4.1-2) unstable; urgency=medium + + * Fix bindaddress. (Closes: #183812) + * Fix paths in inn2-ssl. (Closes: #229181) + + -- Marco d'Itri Sat, 24 Jan 2004 17:13:43 +0100 + +inn2 (2.4.1-1) unstable; urgency=high + + * New upstream release. + + Fixes buffer overflow, maybe remotely exploitable. (Closes: #226772) + * Add workaround for 2.6.x reiserfs brokeness. (Closes: #225940) + * Use pgpverify from -CURRENT to add useless DSA support. (Closes: #222634) + * Source package converted to DBS. + + -- Marco d'Itri Thu, 8 Jan 2004 20:30:49 +0100 + +inn2 (2.4.0+20031130-1) unstable; urgency=low + + * New upstream STABLE snapshot. (Closes: #213946) + * Added russian and spanish debconf messages. (Closes: #219235, #220884) + * Replaces: inn2-dev to improve upgrades from woody. (Closes: #217219) + * Added a new his64v6 history method with LFS support. Untested! + (Closes: #215877) + + -- Marco d'Itri Sun, 30 Nov 2003 22:54:02 +0100 + +inn2 (2.4.0+20030912-1) unstable; urgency=low + + * New upstream STABLE snapshot. + * Add again a default storage method to storage.conf. (Closes: #205001) + * Fix the getlist command line in actsyncd. (Closes: #206283) + * Added a new cnfs64 method for large cycbufs. The on disk format is not + compatible with 32-bit cycbufs. 
The storage tokens are not compatible + with the tokens of a standard inn package built with --enable-largefiles + (but they could be converted, let me know if you want to try this). + This is basically untested and may trash the data you feed it. Please + let me know if this works for you or not. (Closes: #206828) + + -- Marco d'Itri Fri, 12 Sep 2003 14:07:06 +0200 + +inn2 (2.4.0+20030808-1) unstable; urgency=medium + + * New upstream snapshot. + * Fix readers.conf(5) and ckpasswd(8). (Closes: #202098, #202300) + * Fix innupgrade invocation in postinst. (Closes: #202978) + * Misc debconf-related fixes courtesy of Christian Perrier + . (Closes: #200517, #200518) + * Added polish, spanish and french debconf messages. + (Closes: #202155, #201627) + + -- Marco d'Itri Fri, 8 Aug 2003 13:56:23 +0200 + +inn2 (2.4.0-3) unstable; urgency=medium + + * Add db_stop to postinst. + * Fixed inn.conf path in postinst. (Closes: #198578) + + -- Marco d'Itri Wed, 25 Jun 2003 15:15:54 +0200 + +inn2 (2.4.0-2) unstable; urgency=medium + + * Install all headers in /usr/include/inn. (Closes: #198463, #198464) + * Added debconf support, patch by . + + -- Marco d'Itri Mon, 23 Jun 2003 19:19:37 +0200 + +inn2 (2.4.0-1) unstable; urgency=medium + + * New upstream release. (Closes: #182751, #188740, #193967, #194273, #198395) + * send-uucp.pl is now send-uucp. + * Switched from db4.0 to db4.1. + * postinst should not fail if innd cannot start. (Closes: #189966) + * Depend on perlapi-5.8.0. (Closes: #187717, #192411) + * Depend on inn2-inews >= 2.3.999+20030227-1. (Closes: #196137) + * Do not scare admins with wrong postinsg messages. (Closes: #183103) + * Corrected typo in innupgrade. (Closes: #194444) + * Added fr.* to /etc/news/moderators. (Closes: #190202) + + -- Marco d'Itri Fri, 20 Jun 2003 18:39:21 +0200 + +inn2 (2.3.999+20030227-1) unstable; urgency=low + + * New upstream snapshot: + * Fix expireover segfaults. (Closes: #180462, #179898) + * Create /var/log/news/path. (Closes: #180168, #180602) + * Build-Depends on libssl-dev. (Closes: #180662) + * Fix missing feed name in the log. (Closes: #178842, #181740) + * Fix news2mail. (Closes: #181086) + * Fix minor bugs in the init script. (Closes: #180866, #180867) + + -- Marco d'Itri Thu, 27 Feb 2003 19:11:57 +0100 + +inn2 (2.3.999+20030205-2) unstable; urgency=low + + * New upstream snapshot. (Closes: #179294) + * Add a new inn2-ssl package. (Closes: #163672) + * Move wildmat(3) from inn2-dev to inn2. (Closes: #179441) + * Downgraded to extra priority. + Most people do not need a local news server, and definitely not INN 2.x. + * Create /var/{lib,run}/news in postinst. + + -- Marco d'Itri Thu, 6 Feb 2003 15:18:02 +0100 + +inn2 (2.3.999+20030125-3) unstable; urgency=low + + * Fix rnews breakage. (Closes: #178673) + * Remove hardcoded paths of egrep, awk, sed, sort, wget. (Closes: #176749) + + -- Marco d'Itri Tue, 28 Jan 2003 01:48:03 +0100 + +inn2 (2.3.999+20030125-2) unstable; urgency=low + + * Fix broken ctlinnd. (Closes: #178588) + + -- Marco d'Itri Mon, 27 Jan 2003 19:41:03 +0100 + +inn2 (2.3.999+20030125-1) unstable; urgency=low + + * BEWARE: this is a -CURRENT snapshot. If it breaks you keep both pieces! + (Closes: #172212, #174938, #176336). + * Make innreport generate valid HTML. (Closes: #166372) + * Pre-Depends on inn2-inews. (Closes: #166804) + * Update gnu.* data in control.ctl. (Closes: #167581) + * Do not ship rnews suid root! 
(Closes: #171757) + * Install /usr/share/doc/inn2/INSTALL.gz (Closes: #174493) + + -- Marco d'Itri Thu, 16 Jan 2003 01:12:53 +0100 + +inn2 (2.3.3+20020922-5) unstable; urgency=medium + + * Fixed pathtmp (Closes: #162686). + * Check if the usenet user exists before adding a mail alias + (Closes: #162731). + * Fixed a path in sendinpaths (Closes: #163022). + + -- Marco d'Itri Mon, 7 Oct 2002 20:24:16 +0200 + +inn2 (2.3.3+20020922-4) unstable; urgency=low + + * Applied OVDB fixes, courtesy of Ian Hastie @clara.net (Closes: #162643). + + -- Marco d'Itri Sat, 28 Sep 2002 16:46:57 +0200 + +inn2 (2.3.3+20020922-3) unstable; urgency=low + + * Fixed absolute path in Makefile (Closes: #162538). + + -- Marco d'Itri Fri, 27 Sep 2002 19:12:10 +0200 + +inn2 (2.3.3+20020922-1) unstable; urgency=low + + * New STABLE CVS snapshot (Closes: #128725, #137175, #157808, #159105). + * Made some changes to make INN compile with perl 5.8.0. May be broken. + * Fix inndf to convert the "infinite" inodes of reiserfs to 2^31 - 1 + (Closes: #124101). + * Suggests: gnupg instead of pgp. + * Brand new init script which uses ctlinnd. + * Removed debian changes to use mkstemp. INN uses a private temp + directory anyway. + * Conflicts+Replaces: ninpaths, added the scripts from the inpaths package. + * Do not depend anymore on libdb3-util, which is only needed by OVDB. + * Removed signcontrol. + * Changed control.* and junk groups to status n. + * Added gpgverify script (Closes: #131412). + * Added bunbatch script (Closes: #136860). + * Added /usr/share/doc/inn2/INSTALL.gz (Closes: #156685). + * Added buildinnkeyring script which downloads PGP keys from ftp.isc.org + (Closes: #86989). + + -- Marco d'Itri Sun, 22 Sep 2002 21:05:18 +0200 diff --git a/debian/changelog.old b/debian/changelog.old new file mode 100644 index 0000000..9adc435 --- /dev/null +++ b/debian/changelog.old @@ -0,0 +1,688 @@ +inn2 (2.3.999+20030114-1) unstable; urgency=low + + * BEWARE: this is a -CURRENT snapshot. If it breaks you keep both pieces! + (Closes: #172212, #174938). + * Make innreport generate valid HTML. (Closes: #166372) + * Pre-Depends on inn2-inews. (Closes: #166804) + * Update gnu.* data in control.ctl. (Closes: #167581) + * Do not ship rnews suid root! (Closes: #171757) + * Install /usr/share/doc/inn2/INSTALL.gz (Closes: #174493) + + -- Marco d'Itri Thu, 16 Jan 2003 01:12:53 +0100 + +inn2 (2.3.3+20020922-5) unstable; urgency=medium + + * Fixed pathtmp (Closes: #162686). + * Check if the usenet user exists before adding a mail alias + (Closes: #162731). + * Fixed a path in sendinpaths (Closes: #163022). + + -- Marco d'Itri Mon, 7 Oct 2002 20:24:16 +0200 + +inn2 (2.3.3+20020922-4) unstable; urgency=low + + * Applied OVDB fixes, courtesy of Ian Hastie @clara.net (Closes: #162643). + + -- Marco d'Itri Sat, 28 Sep 2002 16:46:57 +0200 + +inn2 (2.3.3+20020922-3) unstable; urgency=low + + * Fixed absolute path in Makefile (Closes: #162538). + + -- Marco d'Itri Fri, 27 Sep 2002 19:12:10 +0200 + +inn2 (2.3.3+20020922-1) unstable; urgency=low + + * New STABLE CVS snapshot (Closes: #128725, #137175, #157808, #159105). + * Made some changes to make INN compile with perl 5.8.0. May be broken. + * Fix inndf to convert the "infinite" inodes of reiserfs to 2^31 - 1 + (Closes: #124101). + * Suggests: gnupg instead of pgp. + * Brand new init script which uses ctlinnd. + * Removed debian changes to use mkstemp. INN uses a private temp + directory anyway. + * Conflicts+Replaces: ninpaths, added the scripts from the inpaths package. 
+ * Do not depend anymore on libdb3-util, which is only needed by OVDB. + * Removed signcontrol. + * Changed control.* and junk groups to status n. + * Added gpgverify script (Closes: #131412). + * Added bunbatch script (Closes: #136860). + * Added /usr/share/doc/inn2/INSTALL.gz (Closes: #156685). + * Added buildinnkeyring script which downloads PGP keys from ftp.isc.org + (Closes: #86989). + + -- Marco d'Itri Sun, 22 Sep 2002 21:05:18 +0200 + +inn2 (2.3.3-1) unstable; urgency=low + + * new upstream version + * use 'unset' not 'declare -x' GZIP to clear environment in innshellvars, + closes: #136156, #136495, #136557, #142464 + * add a warning to inn.conf comments about avoiding tabs after values, + closes: #112657, #112665 + * modify cron.d to test for presence of programs before running them, + closes: #136563 + * modify init.d to redirect rc.news output to /var/log/news/rc.news so that + inn2 daemonizes properly when run manually with the positive side effect + that the startup messages now comply with Debian policy, + closes: #140794, #116716, #134459 + * deliver more upstream doc files, closes: #141963 + * procps is priority required, but not marked essential, so we need to + depend on it so innwatch can use 'uptime', closes: #146135 + * add a clause to postinst to make sure /var/spool/news has appropriate + owner/group/perms + + -- Bdale Garbee Fri, 24 May 2002 00:49:08 -0600 + +inn2 (2.3.2-3) unstable; urgency=low + + * apply patch from rene@seindal.dk to pullnews.in to keep a missing group + from killing a run, closes: #133571 + * apply patch from falcon@wysocki.lodz.pdi.net to dbprocs.in so that ovdb + will work correctly with libdb3, and add a runtime dependency on + libdb3-util to the inn2 package, closes: #128855 + * add manpage symlinks, closes: #99543, #99578 + * ensure backoff directory exists during postinst, closes: #127050 + * clean up some of the lintian warnings + + -- Bdale Garbee Sat, 16 Feb 2002 15:06:20 -0700 + +inn2 (2.3.2-2) unstable; urgency=low + + * edit provided buffindexed.conf to reflect our path structure + * apply patch to mailpost.in provided by Paul Seelig to prevent message + posting failures by stripping Received lines, closes: #120267 + * add remaining /etc files to conffiles, closes: #110647 + * make sure /var/log/news/OLD is news.news in postinst, closes: #116715 + * slightly tighten permissions on /var/run/news, closes: #117773 + * fix missing quotes around command in init.d,closes: #120105 + * explicitly unexport GZIP in innshellvars before defining it to avoid + clashes with GZIP set in external environment, closes: #120381 + * eliminate the task-news-server binary package + + -- Bdale Garbee Wed, 26 Dec 2001 16:02:44 -0700 + +inn2 (2.3.2-1) unstable; urgency=low + + * new upstream version, closes: #98247, #101601 + * remove authprogs/pwcheck.c and modify authprogs/Makefile in orig.tar.gz + since we don't use it and license is non-DFSG-compliant! 
Closes: #103477 + * make inn2-inews conflict with cnews, closes: #97662 + * modify news.daily to use tempfile(1) for safe tempfile creation, + closes: #104517 + * improve several instances of unsafe temp file handling using relevant + patches from the Debian security team, closes: #83734 + * patch to fix expireover seg faults from Andrew Stribblehill, closes: #95096 + * make 'server' in inn.conf be 'localhost' by default, closes: #90908 + * add a note in the sample newsfeeds file indicating that nntplink is not + part of INN, closes: #88120 + * depend on awk, closes: #87618 Note, I will *not* change the compress + definition in innshellvars, see README.Debian for details. + * only execute rnews in cron if it exists, so that removing but not purging + inn2 doesn't generate excessive email, closes: #89853 + * postinst forces owner of default log files to be correct, closes: #98490 + * enable support for ovdb, closes: #96612 + * remove references to non-existent newslog(8) man page, clean up wildmat(5) + references, closes: #90993 + * apply patches from Tollef Fog Heen for gnupg use in signcontrol and use + safer temp file handling, closes: #99021, #99242 + * the inn2 package really can't do anything about the way conffiles are + handled by dpkg when moving from inn to inn2. inn and inn2 are distinct + packages to Debian, despite their similar heritage, closes: #97443 + + -- Bdale Garbee Fri, 10 Aug 2001 13:48:38 -0600 + +inn2 (2.3.1-4) unstable; urgency=low + + * make /etc/news/filter/* conffiles, closes: #85315 + * update build-depends, changing perl5 to libperl-dev + + -- Bdale Garbee Tue, 20 Feb 2001 15:30:56 -0700 + +inn2 (2.3.1-3) unstable; urgency=low + + * conflict with current and prior versions of suck, since they use innxmit + in a way that no longer works, resulting in data loss as per bug 83727. + Reassign that bug to suck for implementation of a real solution. + * update the init.d script to use rc.news as upstream intends for start and + stop operations, since it handles the current set of INN daemons better + than our previous attempt to use start-stop-daemon does, closes: #84438 + * tag /etc/news/send-uucp.cf as a conffile, closes: #83282 + + -- Bdale Garbee Mon, 5 Feb 2001 16:04:12 -0700 + +inn2 (2.3.1-2) unstable; urgency=low + + * update send-uucp.pl's idea of where innshellvars.pl is, closes: #83194 + * go back to symlinks instead of moving inews and rnews, closes: #83224 + * have preinst clean up /usr/lib/news/bin/filter dregs, closes: #83515 + * have inn2-inews conflict/replace inn2 prior to 2.3.1 to account for moving + some files, closes: #83622 + + -- Bdale Garbee Tue, 30 Jan 2001 17:32:17 -0700 + +inn2 (2.3.1-1) unstable; urgency=low + + * new upstream release. thank you Marco d'Itri for help with + this update + * revert send-uucp to the upstream version, deliver Perl version as + send-uucp.pl, closes: #81074 + * add manual page for send-uucp.pl written by Mark Brown, closes: #81073 + * in 2.3.0-1, we accidentally shipped active, active.times, and newsgroups + as real files in /var/lib/news. Hack the preinst and postinst to protect + the files in place, and fix the mess. While we're at it, fix a few other + details in the postinst. Closes: #81274 + * update preinst warning and README.Debian to add an explicit pointer to the + NEWS file, which documents the 2.2 to 2.3 changes. Fix a few out of date + items in README.Debian. 
Closes: #81069 + * fix path of required file in send-uucp.pl, closes: #81075 + * move rnews and dependencies to the inn2-inews package, closes: #81268 + * freshen PGPKEYS file from ftp.isc.org, fixes fr.* key and adds a new one, + closes: #81272 + * pgpverify: use /etc/news/pgp as pgp/gnupg config dir and the syslog + socket instead of logger. (Md) + * innd/cc.c: added perl filter status patch (used by cleanfeed). (Md) + * debian/cron.d: added entry to reload incoming.conf and sample entries + for send-nntp and send-uucp.pl. (Md) closes: #81269 + * debian/init.d: first try to gracefully shut down innd with ctlinnd. (Md) + * Changed /usr/lib/news/bin/filter to /etc/news/filter. (Md) closes: #81273 + * Moved back the whole /etc/news/scripts to the standard location in + /usr/lib/news, none of these files is a conffile. (Md) This improves the + postinst questioning considerably, closes: #81072 + * Fixed permissions of many binaries and config files. (Md) closes: #82002 + * inews is not installed suid (suggested by upstream maintainer). (Md) + * Renamed send-uucp.pl.1 to send-uucp.pl.8. (Md) + * lose the "Recommends: trn | news-reader" on the inn2 package, it's not + particularly useful, and gets in the way for dedicated servers + * drop the "Suggests: inn2-dev" from inn2, the few who need it will find it, + and it confuses new users + * minor patch for actsyncd, closes: #80973 + * inews now supports a -p option to set the port, closes: #22242, #68875 + * touch the /var/lib/news/.news.daily file in the postinst to squelch the + email about it being missing before nightly cron runs begin. closes: #76195 + * suidregister is obsolete. newer dpkg's include 'dpkg-statoverride', which + is a superior solution requiring nothing from the package. closes: #81310 + * configure storage.conf for tradspool configuration by default + * modify configure/configure.in slightly so build hostname isn't embedded + in inn.conf, et al + * don't force alt.test to exist, not all servers want it + * add code to the preinst to clean up the scripts tagged as conffiles that + will still be around from 2.3.0-1. Sigh. + + -- Bdale Garbee Mon, 22 Jan 2001 17:23:45 -0700 + +inn2 (2.3.0-1) unstable; urgency=low + + * new upstream version, closes: #69623 + * sendbatch appears to be fixed now, closes: #69561 + * innreport now appears to use png if gif isn't available, closes: #76169 + * thanks to John Goerzen for help cleaning up this release + * hack around need to have pgp installed at build time, closes: #69745 + * add sanity checks for syslog files in the postinst, closes: #74707 + * move all the scripts in /usr/lib/news that must be conffiles into /etc, + backfilling symlinks. Closes: #57150 + * built against perl-5.6, closes: #80703 + * can't duplicate removal problem, closes: #77419 + * update pgpverify's default notion of where to find pgp, closes: #78989 + * ship the Perl send-uucp from Miquel van Smoorenburg, closes: #77836 + * give inews more reasonable owner/group/perms, closes: #70856 + * add another warning to the preinst since some file format changes defy + reasonable automation across the upgrade from pre-2.3.0 to 2.3.0, and some + manual actions will likely be required. + * as of 2.3.0, innshellvars now codes 'compress' as the path for the compress + program instead of an ugly token reporting that compress wasn't found if + there is no compress available at build time. This will work if the + non-free 'ncompress' package is installed. 
Since some news sites still + don't use gzip for uucp batches, this is probably the right default. Note + added to the README.Debian file. + Closes: #77030 + + -- Bdale Garbee Thu, 28 Dec 2000 16:17:47 -0700 + +inn2 (2.2.3-3) unstable; urgency=low + + * leave the real inews executable in /usr/lib/news/bin, and symlink to it + from /usr/bin instead of moving it, to reduce breakage. Closes: #68999 + * do the same thing with rnews, for good measure + + -- Bdale Garbee Mon, 14 Aug 2000 02:58:39 -0600 + +inn2 (2.2.3-2) unstable; urgency=medium + + * patch from upstream that fixes remote denial of service, closes: #66638 + * provide /usr/sbin/ctlinnd as a symlink so ctlinnd is in root's path, + closes: #67730 + * update the README.Debian file to explain the situation with 'compress' + and indicate willingness to receive patch suggestions for making inn2 + work better with uucp article transport, closes: #64284, #67629 + + -- Bdale Garbee Sun, 13 Aug 2000 22:39:21 -0600 + +inn2 (2.2.3-1) unstable; urgency=low + + * new upstream release, closes: #67635, #65492, #59345, #64405 + * ensure control.cancel exists since we like usecontrolchan=true, + closes: #57555 + * add some verbage to the README.Debian about anacron, closes: #59664 + * some of the code proposed by Raphael Bossek for the init.d is only + relevant for a 1.X to 2.X upgrade, and the rest could take quite a + while during boot. I therefore don't think this belongs in init.d. + I'm adding the interesting checks to the postinst, closes: #62045 + * provide PGPKEYS and some text about it in the README.Debian file, + closes: #66756 + + -- Bdale Garbee Tue, 25 Jul 2000 01:23:14 -0600 + +inn2 (2.2.2.2000.01.31-4) frozen unstable; urgency=low + + * add code to the postinst that calls 'hostname --fqdn' to make sure we can + determine the FQDN before we try to start the daemon. Not doing this + caused installs to fail on poorly-configured systems. Closes: #64681 + * target frozen since this was tagged important, and could indeed cause an + install or upgrade to fail in some (relatively rare?) cases. + + -- Bdale Garbee Fri, 26 May 2000 21:32:01 -0600 + +inn2 (2.2.2.2000.01.31-3) frozen unstable; urgency=low + + * target frozen since these are release critical + * fix a variety of permission problems including /var/lib/news, + closes: #61077 + * permit world execute of /usr/bin/rnews, closes: #61409 + + -- Bdale Garbee Thu, 6 Apr 2000 23:17:56 -0600 + +inn2 (2.2.2.2000.01.31-2) frozen unstable; urgency=low + + * target frozen since one of these is release critical + * fix owner, group, and permissions of /var/run/news on fresh installs, + closes: #61030 + * minor tweak to default inn.conf so build host isn't the value of pathhost + on new installs, closes: #60779 + * fix owner, group, and permissions of /usr/bin/rnews so that it actually + works, closes: #58964 + + -- Bdale Garbee Fri, 24 Mar 2000 01:01:57 -0700 + +inn2 (2.2.2.2000.01.31-1) frozen unstable; urgency=low + + * target frozen since some of the bug fixes here qualify as release critical + * roll to current stable CVS snapshot to acquire bug fixes (some significant) + since 2.2.2 release, closes: #55581 + * tag many scripts in /usr/lib/news/ as conffiles, so changes aren't lost on + upgrades. This makes particularly good sense given the apparent upstream + attitude that whacking scripts to configure a system is reasonable. Add + lintian overrides since it calls conffiles under /usr errors. 
+ Closes: #55723, #56385 + * have inn2 "provide inn" so that other packages that depend on inn don't + get frustrated with us, closes: #56040 + * add -L to innflags and turn controlchan on in default inn.conf (to match + what we're shipping in default newsfeeds file), closes: #56383, #56384 + * don't remove /usr/lib/news explicitly in postrm, since other packages need + it, closes: #55467 + + -- Bdale Garbee Mon, 31 Jan 2000 23:48:44 -0700 + +inn2 (2.2.2-4) frozen unstable; urgency=low + + * change package names from inn to inn2 as part of Debian INN peace project, + which will reinstate 1.7.2 as 'inn'. Target frozen so we ship both 1.7.2 + and 2.2.2 with potato! + * add suitable conflicts with inn 1.X packages, leaving check for old + versions in preinst along since it does no harm + * add task-news-server to help new installs target inn2 by default + + -- Bdale Garbee Wed, 19 Jan 2000 11:07:16 -0700 + +inn (2.2.2-3) frozen unstable; urgency=low + + * target frozen since this fixes multiple release-critical bugs + * inewsinn needs to provide inews, closes: #55349 + * fix rnews path in all innshellvars flavors, closes: #55307 + * since rnews uses nnrpdpostport, inews should also, closes: #54975 + * allow inews to work when talking to servers that require authentication + for the "mode reader" command, closes: #31145 + * add some more information to README.Debian, and include an explicit + pointer to it from the upgrade check in the preinst + * remove needless leftover example maintainer scripts in debian/Examples + + -- Bdale Garbee Sun, 16 Jan 2000 14:43:12 -0700 + +inn (2.2.2-2) unstable; urgency=low + + * move more config files and related man pages from package inn to inewsinn + so that inews works correctly, closes: #55159 + * flag /etc/cron.d/inn as a conffile + * reviewing / closing bugs in inn reported against prior versions that are + fixed or no longer relevant in 2.2.2 ... + * innwatch startup is cleaner than it used to be, closes: #21586, #32416 + * logging id different than it used to be, closes: #24504 + * expire.ctl doesn't have sequence problem any more, closes: #37737 + * Old hosts.nntp and hosts.nntp.nolimit are merged, closes: #48739 + * nntpsend.ctl no longer specifies the path, closes: #49673 + * startup works fine now, closes: #51944 + * control.ctl template is new, and correct, closes: #54526 + * crosspost and overview directories are correct, closes: #55062 + * history corruption problem should be long since fixed, closes: #11614 + * client timeout is set to 10 minutes in /etc/news/inn.conf file by default, + which seems pretty reasonable, and is easy to change. Closes: #12358 + * the ancient problem with ctlinnd rmgroup appears fixed, closes: #12559 + * the current GetFQDN code appears to be coded to work in more cases than + it once was, closes: #29695 + * we use a cron.d script now, so send-uucp, et al, can be scheduled on any + desired interval. Closes: #43016 + + -- Bdale Garbee Sat, 15 Jan 2000 01:18:17 -0700 + +inn (2.2.2-1) unstable; urgency=low + + * New upstream release. Enough has changed since the 1.7.2 release that + this is repackaged entirely from scratch. + Closes: #25936, #26255, #43546, #52672, #43896, #54609, #54759 + * patch lib/parsedate.y to include "y2k fix" relating to acceptance of + articles with year of 1900. 
Closes: #53813 + * postinst no longer prompts on upgrades, closes: #26659, #44918, #37888 + * much newer innfeed, now integrated with inn sources, closes: #14326 + * install docs revised, formatted version provided, closes: #43898 + * large warning in preinst about upgrades from prior revisions of Debian + INN package requiring manual intervention. The degree of assistance + will improve in future uploads, but may never be fully automatic. + + -- Bdale Garbee Wed, 22 Dec 1999 02:22:33 -0700 + +inn (1.7.2-12) unstable; urgency=low + + * update to reflect current policy + * inndstart *is* provided setuid root, closes: #51944 + * fix path in nntpsend.ctl.5, closes: #49673 + * if we're upgrading, don't stop to ask user, just use existing config + information, closes: #44918 + * deliver Install.txt instead of Install.ms into the doc directory, + closes: #43898 + + -- Bdale Garbee Sun, 5 Dec 1999 20:46:07 -0700 + +inn (1.7.2-11) unstable; urgency=high + + * patch to inews.c to fix buffer overrun problem from Martin Schulze + + -- Bdale Garbee Mon, 6 Sep 1999 13:35:19 -0600 + +inn (1.7.2-10) unstable; urgency=low + + * rebuild to depend on perl 5.005, closes 41469, 41925, 41943. + * update postinst text to eliminate version bogosity, closes 41585. + * fix sample sendbatch, closes 41596 + * fix source-archive clause in sample newsfeeds file, closes 37862. + * document nntpport, closes 28588. + * fix type of inet_addr to reflect current libc. + + -- Bdale Garbee Mon, 2 Aug 1999 01:22:23 -0600 + +inn (1.7.2-9) unstable; urgency=low + + * fold in Roman Hodek's changes from his 6.1 NMU, closing 38621. This + fixes an ugly i386 dependency in the way inn calls Perl. + * update perl dependency managment to try and cope with new perl policy + + -- Bdale Garbee Sat, 17 Jul 1999 17:13:05 -0600 + +inn (1.7.2-6) unstable; urgency=low + + * new maintainer + * clean up a few lintian complaints + * folding in changes from Christian Kurz that he called -5. We'll call this + -6 even though his changes were not widely distributed, just to avoid any + confusion: + + Removed X-Server-Date-Patch as it's not needed. + default moderation address add to /etc/news/moderators (closes: #24549) + Inn now depends on perl (closes: #27754, #32313) + Added gunbatch for gzipped batches (closes: #29899) + Changed debian/rules so inncheck runs without errors. + Added Perl-Support to Inn (closes: #26254) + Changed the examples + + -- Bdale Garbee Wed, 26 May 1999 15:18:53 -0600 + +inn (1.7.2-4) frozen unstable; urgency=medium + + * Fixes: + #21583: inn: inn must replace inewsinn + #20763: inn sends me `not running' and `now running' each night + #21342: inn: install probs + #21582: inn: incorrect prerm fail-upgrade action + #21584: inn: postinst doesn't know abort-upgrade + #20048: inn: poison and REMEMBER_TRASH patch + #21644: inn: a way to not receive certain groups + * Wrt bug #20763: the ctlinnd timeout in innwatch has been increased + to 300 seconds (5 minutes). Hopefully that is enough.. There is no + good alternative, the fact that INN is slow while renumbering is + a basic design flaw. (Though the abp-scheduler patch might help) + + -- Miquel van Smoorenburg Fri, 22 May 1998 19:52:55 +0200 + +inn (1.7.2-3) frozen unstable; urgency=medium + + * Move moderators from inewsinn to inn. The server should keep the + moderators data, not inews. + * Fix lib/clientactive.c (still yucky, but should work..) 
+ * Include latest pgpverify script, 1.9, and manpage + * Fix security hole (/tmp/pgp$$) in pgpverify script + * Fixes: + #18579: I can't uninstall INN package + #19776: inn.prerm buggy typos bah! + #18724: inn: /etc/init.d/inn contains sed that never terminates + #19206: inn: Crontab modifications not preserved + #20423: inn: error in removing + #20653: inn: Bug in send-uucp.pl, patch included + #20792: INN: Wrong sfnet maintainer + #20436: inn: on line 16 of the prerm script there is "fi" instead of "esac" + + -- Miquel van Smoorenburg Wed, 15 Apr 1998 17:34:23 +0200 + +inn (1.7.2-2) unstable; urgency=low + + * Change over to new crontab -l method + * Fix (pre|post)(inst|rm) scripts in several ways + * Fix inewsinn inn.conf installation + * Set NNRP_DBZINCORE_DELAY to -1 + * Fix lintian warnings + Fixes: + #18120: inn: Inn's crontab file should be a conffile + + -- Miquel van Smoorenburg Thu, 19 Feb 1998 22:46:25 +0100 + +inn (1.7.2-1) unstable; urgency=low + + * New upstream version + * Fix crontab -l | tail +4 + * Fixes bugs: + #15889: /etc/news/inn.conf missing + #16128: manpage uncompressed + #15103: egrep incorrectly searched in /bin by innshellvars* + #14404: /usr/doc/$(PACKAGE)/copyright should not be compressed + + -- Miquel van Smoorenburg Thu, 5 Feb 1998 12:52:14 +0100 + +inn (1.7-1) unstable; urgency=low + + * New upstream version + * Fixed bugs: + #9264: Unresolved dependency report for inn + #9315: inn: /etc/news/innshellvars* add /usr/ucb to the PATH + #9832: INN 1.5.1-1 throttled rmgroup really shreds active file ? + #10196: inn: inews complains about missnig subject header when there is one + #10505: Moderated postings fail + #11042: error in /usr/doc/inn/inn-README + #11057: inn: Confusing/dangerous instructions + #11453: inn: max signature length + #11691: libc6 + #11851: inn: Documentation for send-uucp.pl + #11852: inn: nntpsend looks for wrong config file + #11900: INN creates `local.*' by default + #11948: inn: nntpsend does not works + #12513: inewsinn should insert a linebreak + #13161: inn-makehistory - Bus error + #13425: inn: egrep moved to /bin + #13488: inewsinn: directs user to docs in a package it doesn't require + #13616: /etc/init.d/inn, /etc/news/hosts.nntp.nolimit are not conffiles + #13781: Can't feed by send-uucp.pl with ihave/sendme. + #13831: inn: scanlogs depends on hard coded path for egrep + #13899: inn: inn uses /usr/bin/egrep, grep doesn't provide that any longer + * Added BUFFSET fix + + -- Miquel van Smoorenburg Wed, 22 Oct 1997 14:08:37 +0200 + +inn (1.5.1-5) unstable; urgency=high + + * Fixed sendbatch script (comment in between backtics is illegal) + * libc6 version + + -- Miquel van Smoorenburg Wed, 10 Sep 1997 16:31:37 +0200 + +inn (1.5.1-4) stable unstable; urgency=high + + * Add new security patch (with fixed STRCPY): inn-1.5.1-bufferoverflow.patch4 + * Applied null-pointer.patch from Michael Shields + * Upped SIG_MAXLINES in configdata.h to 8 + * Fix inn-README (perl example). 
Fixes bug #11042 + * Update inn-README and postinst to fix bug #11057 + * Make ctlinnd addhist work in paused mode, and fail in throttled mode + * Change ID string + + -- Miquel van Smoorenburg Thu, 21 Aug 1997 12:37:48 +0200 + +inn (1.5.1-3) stable unstable; urgency=high + + * Add changelogs to docdir + * innshellvars*: change /usr/ucb -> /usr/sbin (Bug#9315) + * Changed Recommends: pgp to Suggests: (Bug#9264) + * Fix inews to fallback on local moderators file (Bug#10505) + * Fix buffer overruns all over the place + + -- Miquel van Smoorenburg Thu, 24 Jul 1997 18:29:33 +0200 + +inn (1.5.1-2) frozen unstable; urgency=high + + * Added security-patch.05 (mailx tilde exploit) + * inewsinn no longer conflicts: inn so installation should no + longer remove your original inn-1.4 package (and configuration). + Expect some dpkg trouble when upgrading from 1.4unoff4 to 1.5.1-2 though. + * Always create .incoming/.outgoing symlinks for backwards compat. + * Do not change ownerships/modes of existing directories + * Fix ownerships/modes of rnews, innd, inndstart, in.nnrpd + * Fix /etc/init.d/inn to comply with console messages standard + * Fix /usr/lib/news/bin/sendbatch + * Fix scanlogs not to nuke active file if log/news/OLD isn't there + * Console messages are a bit more standard now + * Use start-stop-daemon to kill innwatch in /etc/init.d/inn + * Fixed up inncheck - it almost doesn't complain anymore + + -- Miquel van Smoorenburg Mon, 28 Apr 1997 13:58:16 +0200 + +inn (1.5.1-1) unstable; urgency=low + + * Upgraded to 1.5.1 + * Fixed Bug#6387: expire-with-symlinks problem + * Fixed Bug#6246: inewsinn reconfigures on update + * Moved /var/spool/news,/var/lib/news back into package + * Saves removed conffiles in preinst, restores in postinst + * Set LIKE_PULLERS to DO + * Remove manpage stubs that are now real manpages + * Fix options to sendmail in _PATH_SENDMAIL + * Removed subdirectories from debian/ + * create /var/log/news/OLD in postinst + * Fixed most if not all other outstanding bugs + + -- Miquel van Smoorenburg Wed, 5 Feb 1997 10:58:16 +0100 + +inn (1.5-1) unstable; urgency=low + + * Upgraded to 1.5 + * Undid most patches to 1.4unoff4 because they are in 1.5 proper. + * Added security patch + * Added X-Server-Date: patch + * inn now depends on inewsinn + * Fixed all other outstanding bugs (well, almost). + + -- Miquel van Smoorenburg Tue, 17 Dec 1996 16:56:37 +0100 + +inn (1.4unoff4-2) unstable; urgency=low + + * Added inn-dev package for libinn.a and manpages. + * Increased hash table size in expire.c to 2048 (was 128) + * Moved ctlinnd to /usr/sbin + * Moved to new source packaging scheme + + -- Miquel van Smoorenburg Wed, 06 Oct 1996 15:38:30 +0200 + +INN (1.4unoff4-1) - Miquel van Smoorenburg + + * Took out the Linux 1.2 patches I put in unoff3. + * added the 64 bits patch (for Linux/Alpha) + * There are some other minor patches for Linux/Alpha + * Added "xmode" as alias for "mode" + * Using MMAP and setsockopt() now - NEEDS 1.3 kernel ! + +INN (1.4unoff3-1) - Miquel van Smoorenburg + + * Took inn1.4sec-8 and 1.4unoff3, folded in some Linux and + other patches. + * Changed all makefiles to support a "prefix" variable for install + * Removed the hacks in debian.rules for installation + * Locks are in /var/run/innd + * Rewrote post install script. + +inn (1.4sec-8); priority=MEDIUM + + * postinst configuration completely redone. It now sets up a minimal + local installation for you. + * prerm now exists and shuts the server down. + * init scripts changed to System V scheme. 
+ * Descriptions in control files expanded. + * Package now contains /var/lock/news, and uses /var/log (not /var/adm). + * inewsinn postinst looks at and can write /etc/mailname. + +INN 1.4sec Debian 7 - iwj + +* libinn.a, , inn-sys2nf and inn-feedone installed + (in /usr/lib, /usr/include and /usr/bin). + +INN 1.4sec Debian 6 - iwj + +* innwatch now started by /etc/rc.misc/news. +* inewsinn postinst minor typos fixed. +* Leftover file `t' removed from source and diff distributions. + +INN 1.4sec Debian 5 - iwj + +* Added documentation about making active and history files. +* Added monthly makehistory -ru crontab run. +* Made postinst always do crontab -u news /etc/news/crontab . +* Removed HAVE_UNIX_DOMAIN - AF_UNIX+SOCK_DGRAM still broken in Linux. +* Fixed /usr/lib/news/bin/inncheck to conform to our permissions scheme. +* Added manpage links for makeactive(8), makehistory(8), newsrequeue(8). +* /var/adm/news now part of package. + +INN 1.4sec Debian 4 - iwj + +* Added $|=1 to inewsinn postinst script; a few cosmetic fixes. + +INN 1.4sec Debian 3 - iwj + +* Removed `inet' groups from distrib.pats. +* Put more version number information in ../*.{deb,gz} filenames. +* Added Package_Revision field to `control' file. +* Rationalised debian.rules somewhat, and added `build' stamp file. +* Permissions rationalised. +* Changed /etc/rc.d/rc.news to /etc/rc.misc/news. +* postinst calls Perl as /usr/bin/perl. +* Added this Changelog. + +INN 1.4sec Debian 2 - iwj +* inews moved to /usr/bin; rnews moved to /usr/sbin. +* fixed nntpsend not to use PPID variable (it's a bash builtin). + +INN 1.4sec Debian 1 - iwj +Initial release, completely untested. diff --git a/debian/compat b/debian/compat new file mode 100644 index 0000000..b8626c4 --- /dev/null +++ b/debian/compat @@ -0,0 +1 @@ +4 diff --git a/debian/control b/debian/control new file mode 100644 index 0000000..a3fa12f --- /dev/null +++ b/debian/control @@ -0,0 +1,90 @@ +Source: inn2 +Section: news +Priority: extra +Maintainer: Marco d'Itri +Build-Depends: bison, debhelper (>> 4.1.16), quilt (>= 0.40), groff-base, libperl-dev (>= 5.8.0), libdb4.6-dev, libpam0g-dev, libssl-dev (>= 0.9.7), libkrb5-dev +Standards-Version: 3.8.0 + +Package: inn2 +Architecture: any +Depends: ${shlibs:Depends}, ${misc:Depends}, cron, exim4 | mail-transport-agent, time, procps, perl, ${PERLAPI} +Pre-Depends: inn2-inews (>= 2.3.999+20030227-1) +Suggests: gnupg, wget +Replaces: inn, inewsinn, innfeed, ninpaths, inn2-dev +Provides: news-transport-system +Conflicts: inn2-lfs, cnews, inn, inewsinn, innfeed, ninpaths, suck (<= 4.2.5-2) +Description: 'InterNetNews' news server + This package provides INN 2.x, which is a very complex news server + daemon useful for big sites. The 'inn' package still exists for smaller + sites which do not need the complexity of INN 2.x. + . + The news transport is the part of the system that stores the articles + and the lists of which groups are available and so on, and provides + those articles on request to users. It receives news (either posted + locally or from a newsfeed site), files it, and passes it on to any + downstream sites. Each article is kept for a period of time and then + deleted (this is known as 'expiry'). + . + By default Debian's INN will install in a fairly simple 'local-only' + configuration. + . + In order to make use of the services provided by INN you'll have to + use a user-level newsreader program such as trn. 
The newsreader is + the program that fetches articles from the server and shows them to + the user, remembering which the user has seen so that they don't get + shown again. It also provides the posting interface for the user. +Homepage: http://www.isc.org/products/INN/ + +Package: inn2-lfs +Architecture: any +Depends: ${shlibs:Depends}, ${misc:Depends}, cron, exim4 | mail-transport-agent, time, procps, perl, ${PERLAPI} +Pre-Depends: inn2-inews (>= 2.3.999+20030227-1) +Suggests: gnupg, wget +Replaces: inn, inewsinn, innfeed, ninpaths, inn2-dev +Provides: news-transport-system, inn2 +Conflicts: inn2, cnews, inn, inewsinn, innfeed, ninpaths, suck (<= 4.2.5-2) +Description: 'InterNetNews' news server (LFS version) + This package provides INN 2.x, which is a very complex news server + daemon useful for big sites. The 'inn' package still exists for smaller + sites which do not need the complexity of INN 2.x. + . + This version of the package is compiled with Large Files Support. + . + The news transport is the part of the system that stores the articles + and the lists of which groups are available and so on, and provides + those articles on request to users. It receives news (either posted + locally or from a newsfeed site), files it, and passes it on to any + downstream sites. Each article is kept for a period of time and then + deleted (this is known as 'expiry'). + . + By default Debian's INN will install in a fairly simple 'local-only' + configuration. + . + In order to make use of the services provided by INN you'll have to + use a user-level newsreader program such as trn. The newsreader is + the program that fetches articles from the server and shows them to + the user, remembering which the user has seen so that they don't get + shown again. It also provides the posting interface for the user. +Homepage: http://www.isc.org/products/INN/ + +Package: inn2-inews +Architecture: any +Depends: ${shlibs:Depends}, ${misc:Depends} +Provides: inews +Conflicts: inewsinn, inn2 (<< 2.3.1), cnews +Replaces: inewsinn, inn2 (<< 2.3.1) +Description: NNTP client news injector, from InterNetNews (INN) + 'inews' is the program that newsreaders call when the user wishes to + post an article; it does a few elementary checks and passes the article + on to the news server for posting. + . + This version is the one from Rich Salz's InterNetNews news transport + system (which is also available as a Debian package). + +Package: inn2-dev +Section: devel +Architecture: any +Conflicts: inn, inn-dev +Description: The libinn.a library, headers and man pages + You will only need this if you are going to compile programs that + require the functions in libinn.a. diff --git a/debian/copyright b/debian/copyright new file mode 100644 index 0000000..762e6a3 --- /dev/null +++ b/debian/copyright @@ -0,0 +1,87 @@ +This package was debianized by Bdale Garbee on +Wed, 8 Dec 1999 16:30:09 -0700 and since 23 Sept 2002 has been +maintained by Marco d'Itri . + +It was downloaded from ftp://ftp.isc.org/isc/inn/ . + + +INN as a whole and all code contained in it not otherwise marked with +different licenses and/or copyrights is covered by the following copyright +and license: + + Copyright (c) 2004, 2005, 2006, 2007, 2008 + by Internet Systems Consortium, Inc. ("ISC") + Copyright (c) 1991, 1994, 1995, 1996, 1997, 1998, 1999, 2000, 2001, + 2002, 2003 by The Internet Software Consortium and Rich Salz + + This code is derived from software contributed to the Internet Software + Consortium by Rich Salz. 
+
+ Permission to use, copy, modify, and distribute this software for any
+ purpose with or without fee is hereby granted, provided that the above
+ copyright notice and this permission notice appear in all copies.
+
+ THE SOFTWARE IS PROVIDED "AS IS" AND ISC DISCLAIMS ALL WARRANTIES WITH
+ REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
+ MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL ISC BE LIABLE FOR ANY
+ SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
+ WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
+ ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF
+ OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
+
+Some specific portions of INN are covered by different licenses. Those
+licenses, if present, will be noted prominently at the top of those source
+files. Specifically (but possibly not comprehensively):
+
+ authprogs/smbval/*, backends/send-uucp.in, and control/perl-nocem.in
+ are under the GNU General Public License. See doc/GPL for a copy of
+ this license.
+
+ backends/shrinkfile.c, frontends/scanspool.in, lib/concat.c,
+ lib/hstrerror.c, lib/inet_aton.c, lib/inet_ntoa.c, lib/memcmp.c,
+ lib/parsedate.y, lib/pread.c, lib/pwrite.c, lib/setenv.c, lib/seteuid.c,
+ lib/strerror.c, lib/strlcat.c and lib/strlcpy.c are in the public
+ domain.
+
+ lib/snprintf.c may be used for any purpose as long as the author's
+ notice remains intact in all source code distributions.
+
+ control/gpgverify.in, control/pgpverify.in and control/signcontrol.in
+ are under a BSD-style license (with the advertising clause) with UUNET
+ Technologies, Inc. as the copyright holder. See the end of those files
+ for details.
+
+ control/controlchan.in and control/modules/*.pl are covered by a
+ two-clause BSD-style license (no advertising clause). See the
+ beginning of those files for details.
+
+ lib/strcasecmp.c, lib/strspn.c, and lib/strtok.c are taken from BSD
+ sources and are covered by the standard BSD license. See those files
+ for more details.
+
+ lib/md5.c is covered under the standard free MD5 license from RSA Data
+ Security. See the file for more details. A clarification is also
+ provided here: .
+
+ "Implementations of these message-digest algorithms, including
+ implementations derived from the reference C code in RFC-1319,
+ RFC-1320, and RFC-1321, may be made, used, and sold without
+ license from RSA for any purpose."
+
+ history/his.c and history/hisv6/hisv6.c are under a license very
+ similar to the new BSD license (no advertising clause) but with Thus
+ plc as the copyright holder. See those files for details.
+
+ lib/tst.c, include/inn/tst.h and doc/pod/tst.pod are derived from
+ and are under the new BSD
+ license (no advertising clause), but with Peter A. Friend as the
+ copyright holder.
+
+ tests/runtests.c is covered under a license very similar to the MIT/X
+ Consortium license (less restrictive than INN's license). See the
+ beginning of the file for details.
+
+
+On Debian GNU/Linux systems, the complete text of the GNU General
+Public License can be found in `/usr/share/common-licenses/GPL'.
+ diff --git a/debian/inn2-dev.files b/debian/inn2-dev.files new file mode 100644 index 0000000..229c7f7 --- /dev/null +++ b/debian/inn2-dev.files @@ -0,0 +1,5 @@ +usr/include/inn/ +usr/share/man/man3/ +usr/lib/news/libinn.a +usr/lib/news/libstorage.a +usr/lib/news/libinnhist.a diff --git a/debian/inn2-dev.links b/debian/inn2-dev.links new file mode 100644 index 0000000..00ac8c2 --- /dev/null +++ b/debian/inn2-dev.links @@ -0,0 +1,4 @@ +usr/share/man/man3/dbz.3 usr/share/man/man3/dbzclose.3 +usr/share/man/man3/dbz.3 usr/share/man/man3/dbzinit.3 +usr/share/man/man3/dbz.3 usr/share/man/man3/dbzfetch.3 +usr/share/man/man3/dbz.3 usr/share/man/man3/dbzstore.3 diff --git a/debian/inn2-inews.files b/debian/inn2-inews.files new file mode 100644 index 0000000..c9833a1 --- /dev/null +++ b/debian/inn2-inews.files @@ -0,0 +1,13 @@ +etc/news/distrib.pats +etc/news/inn.conf +etc/news/moderators +etc/news/passwd.nntp +usr/lib/news/bin/inews +usr/lib/news/bin/rnews +usr/lib/news/bin/rnews.libexec +usr/share/man/man1/inews.1 +usr/share/man/man1/rnews.1 +usr/share/man/man5/distrib.pats.5 +usr/share/man/man5/inn.conf.5 +usr/share/man/man5/moderators.5 +usr/share/man/man5/passwd.nntp.5 diff --git a/debian/inn2-inews.links b/debian/inn2-inews.links new file mode 100644 index 0000000..feaeeb3 --- /dev/null +++ b/debian/inn2-inews.links @@ -0,0 +1,2 @@ +usr/lib/news/bin/inews usr/bin/inews +usr/lib/news/bin/rnews usr/bin/rnews diff --git a/debian/inn2.README.Debian b/debian/inn2.README.Debian new file mode 100644 index 0000000..61adc45 --- /dev/null +++ b/debian/inn2.README.Debian @@ -0,0 +1,102 @@ +Some random notes about the Debian INN 2.X package. + +If you are upgrading from a previous version, please review the information +near the top of the NEWS file to learn what has changed, and what you may +need to do to update your system. + +If you plan to use INN at home you should really consider running INN 1.x, +which you can find in the inn package. + +INN 2.X is substantially different in terms of configuration file contents +and filesystem layout than previous versions. The Debian INN package installs +a minimal but functional local-only server configuration. Configuring feeds +to/from other servers, and many other details, is up to you. + +You will want to review the information in /usr/share/doc/inn2 to get started +on configuring the installation for your needs. All of the configuration files +in /etc/news are flagged as 'conffiles' in the packaging system, so your work +should not be overwritten without your permission if/when you upgrade the inn +package in the future. In particular, make sure to update /etc/news/inn.conf +to put in your organization name and related information before you establish +any network connections if you don't want to be embarrassed. + +Also, if you are moving over from INN 1.X, please note that the directory +structure under /var/spool/news has changed. At a minimum, you will need to +move the article database subdirectories from /var/spool/news to +/var/spool/news/articles. The set of directories that belong in +/var/spool/news for 2.2.2 and later are: + + archive articles incoming innfeed outgoing overview + +Anything else is left over from a previous version, and probably should be +moved or removed. + +It has been pointed out that inn2's use of /etc/cron.d/inn2 instead of +separate files in /etc/cron.daily and so forth poses a problem for users of +anacron on boxes that are not run continuously. 
Since the primary target
+for an INN installation is a fully-connected system that might easily need
+a variety of cron entries with different intervals, I don't intend to change
+this default. However, if you're bothered by this, feel free to change the
+cron configuration to suit your needs.
+
+If you want to use pgpverify (and you do if you're getting a real feed!),
+you can use the /usr/lib/news/bin/buildinnkeyring program to download the
+keys for some hierarchies from ftp.isc.org and add them to the gnupg
+keyring used by pgpverify.
+This package does not support the non-free PGP program anymore.
+
+The program 'compress' is not a part of Debian GNU/Linux due to patent issues
+with the algorithm. By default, the innshellvars* files will try to call
+'compress' if you try to transport compressed batches over UUCP. This will
+work if you install the non-free 'ncompress' package. Since it's non-free,
+this might be as unacceptable to you as it is to me! If you know that all of
+your neighbors can handle gzip, a better solution might be to edit the
+innshellvars* files to use '/bin/gzip -9' for the COMPRESS variable. I do not
+intend to change this default to differ from the upstream source.
+
+Log files in /var/log/news need to be owned by user 'news' for the news
+scanlogs tool to be able to rotate them properly.
+
+If you want to use the ckpasswd program you need to install the libgdbm3
+package.
+
+
+SSL
+~~~
+To enable SSL you need to start /usr/lib/news/bin/nnrpd-ssl with the -S
+flag from inetd or the command line.
+See nnrpd(8) and sasl.conf(5) for details.
+
+You need a certificate authority (CA) certificate in
+/etc/news/nnrpd-ca-cert.pem. You will also need a certificate/key pair,
+named /etc/news/nnrpd-cert.pem and /etc/news/nnrpd-key.pem respectively.
+
+If you do not already have a PKI in place, you can create them with a
+command like:
+
+openssl req -new -x509 -nodes -days 1825 \
+ -keyout /etc/news/nnrpd-key.pem -out /etc/news/nnrpd-cert.pem
+
+The private key must have the correct permissions:
+
+chown root:news /etc/news/nnrpd-key.pem
+chmod 640 /etc/news/nnrpd-key.pem
+
+
+STARTTLS
+~~~~~~~~
+STARTTLS support will not work when nnrpd is started by innd using
+"MODE READER" unless the nnrpd binary is replaced by nnrpd-ssl (e.g.
+by using dpkg-divert(8)).
+The upstream maintainer recommends running nnrpd as a standalone process.
+
+
+Large Files Support
+~~~~~~~~~~~~~~~~~~~
+On 32 bit architectures, the inn2-lfs package is built.
+There is no transition procedure, so if you want to convert an existing
+installation (this may or may not be possible depending on your choice
+of storage and overview formats) then you are on your own.
+When attempting such a conversion, do not forget that the package will
+delete /var/{spool,lib,log}/news/ when removed, so they should be renamed.
+
diff --git a/debian/inn2.cron.d b/debian/inn2.cron.d
new file mode 100644
index 0000000..1691131
--- /dev/null
+++ b/debian/inn2.cron.d
@@ -0,0 +1,37 @@
+SHELL=/bin/sh
+PATH=/usr/lib/news/bin:/sbin:/bin:/usr/sbin:/usr/bin
+
+# Expire old news and overview entries nightly, generate reports.
+
+15 4 * * * news test -x /usr/lib/news/bin/news.daily && news.daily expireover lowmark delayrm
+
+# Refresh the cached IP addresses every day.
+
+2 3 * * * news [ -x /usr/sbin/ctlinnd ] && ctlinnd -t 300 -s reload incoming.conf "flush cache"
+
+# Every hour, run an rnews -U.
This is not only for UUCP sites, but +# also to process queud up articles put there by in.nnrpd in case +# innd wasn't accepting any articles. + +10 * * * * news [ -x /usr/bin/rnews ] && rnews -U + +# Enable this entry to send posted news back to your upstream provider. +# Also edit /etc/news/nntpsend.ctl ! +# Not if you use innfeed, of course. + +#*/15 * * * * news nntpsend + + +# Enable this if you want to send news by uucp to your provider. +# Also edit /etc/news/send-uucp.cf ! + +#22 * * * * news send-uucp.pl + +# NINPATHS ################################################################### +# To enable ninpaths please add this line to /etc/news/newsfeeds: +# inpaths!:*:Tc,WP:/usr/lib/news/bin/ginpaths2 +# +#6 6 * * * news ctlinnd -s -t 60 flush inpaths! +#8 6 1 * * news sendinpaths +# NINPATHS ################################################################### + diff --git a/debian/inn2.docs b/debian/inn2.docs new file mode 100644 index 0000000..8307807 --- /dev/null +++ b/debian/inn2.docs @@ -0,0 +1,10 @@ +CONTRIBUTORS +INSTALL +NEWS +README +doc/checklist +doc/external-auth +doc/history +doc/hook-perl +doc/IPv6-info +doc/compliance-nntp diff --git a/debian/inn2.examples b/debian/inn2.examples new file mode 100644 index 0000000..58759cf --- /dev/null +++ b/debian/inn2.examples @@ -0,0 +1,2 @@ +extra/active +extra/newsgroups diff --git a/debian/inn2.init b/debian/inn2.init new file mode 100644 index 0000000..ae5b687 --- /dev/null +++ b/debian/inn2.init @@ -0,0 +1,63 @@ +#!/bin/sh -e +### BEGIN INIT INFO +# Provides: inn2 +# Required-Start: $local_fs $remote_fs $syslog +# Required-Stop: $local_fs $remote_fs $syslog +# Default-Start: 2 3 4 5 +# Default-Stop: 0 1 6 +# Short-Description: INN news server +# Description: The InterNetNews news server. +### END INIT INFO +# +# Start/stop the news server. +# + +test -f /usr/lib/news/bin/rc.news || exit 0 + +start () { + if [ ! -d /var/run/news ]; then + mkdir -p /var/run/news + chown news:news /var/run/news + chmod 775 /var/run/news + fi + su news -c /usr/lib/news/bin/rc.news > /var/log/news/rc.news 2>&1 + # su news -c '/usr/lib/news/bin/nnrpd -D -c /etc/news/readers-ssl.conf -p 563 -S' +} + +stop () { + su news -c '/usr/lib/news/bin/rc.news stop' >> /var/log/news/rc.news 2>&1 + # start-stop-daemon --stop --name nnrpd --quiet --oknodo +} + +case "$1" in + start) + echo -n "Starting news server: " + start + echo "done." + ;; + stop) + echo -n "Stopping news server: " + stop + echo "done." + ;; + reload|force-reload) + echo -n "Reloading most INN configuration files: " + ctlinnd -t 20 reload '' /etc/init.d/inn2 + ;; + restart) + echo -n "Restarting innd: " + if [ -f /var/run/news/innd.pid ]; then + ctlinnd -t 20 throttle "init script" > /dev/null || true + ctlinnd -t 20 xexec inndstart > /dev/null || start + else + start + fi + echo "done." 
+ ;; + *) + echo "Usage: /etc/init.d/inn start|stop|restart|reload">&2 + exit 1 + ;; +esac + +exit 0 diff --git a/debian/inn2.links b/debian/inn2.links new file mode 100644 index 0000000..e55863e --- /dev/null +++ b/debian/inn2.links @@ -0,0 +1,6 @@ +usr/lib/news/bin/ctlinnd usr/sbin/ctlinnd +usr/lib/news/bin/innstat usr/sbin/innstat +usr/lib/news/bin/send-uucp usr/lib/news/bin/send-uucp.pl +usr/share/man/man3/uwildmat.3 usr/share/man/man3/wildmat.3 +usr/share/man/man8/send-uucp.8 usr/share/man/man8/send-ihave.8 +usr/share/man/man8/send-uucp.8 usr/share/man/man8/send-nntp.8 diff --git a/debian/inn2.logcheck.ignore.server b/debian/inn2.logcheck.ignore.server new file mode 100644 index 0000000..25d80ea --- /dev/null +++ b/debian/inn2.logcheck.ignore.server @@ -0,0 +1,57 @@ +\w{3} [ :0-9]{11} [._[:alnum:]-]+ (rnews|innd|batcher): Reading config from /etc/news/inn\.conf$ +^\w{3} [ :0-9]{11} [._[:alnum:]-]+ (expire|expireover|ctlinnd|nnrpd)\[[0-9]+\]: Reading config from /etc/news/inn\.conf$ +^\w{3} [ :0-9]{11} [._[:alnum:]-]+ rnews: offered <[^[:space:]]+> [._[:alnum:]-]+$ +^\w{3} [ :0-9]{11} [._[:alnum:]-]+ innd: localhost connected [0-9]+$ +^\w{3} [ :0-9]{11} [._[:alnum:]-]+ innd: [[:alpha:]]$ +^\w{3} [ :0-9]{11} [._[:alnum:]-]+ innd: [[:alpha:]]:$ +^\w{3} [ :0-9]{11} [._[:alnum:]-]+ innd: [[:alpha:]]:[-[:alnum:]]+$ +^\w{3} [ :0-9]{11} [._[:alnum:]-]+ innd: [[:alpha:]]:Expiring process [0-9]+$ +^\w{3} [ :0-9]{11} [._[:alnum:]-]+ innd: [[:alpha:]]:Flushing log and syslog files$ +^\w{3} [ :0-9]{11} [._[:alnum:]-]+ innd: [[:alpha:]]:/var/log/news/expire\.lowmark$ +^\w{3} [ :0-9]{11} [._[:alnum:]-]+ innd: [._[:alnum:]-]+ flush$ +^\w{3} [ :0-9]{11} [._[:alnum:]-]+ innd: [._[:alnum:]-]+ opened [^[:space:]]+$ +^\w{3} [ :0-9]{11} [._[:alnum:]-]+ innd: [._[:alnum:]-]+ closed$ +^\w{3} [ :0-9]{11} [._[:alnum:]-]+ innd: [._[:alnum:]-]+:[0-9]+ readclose$ +^\w{3} [ :0-9]{11} [._[:alnum:]-]+ innd: [._[:alnum:]-]+:[0-9]+ inactive [0-9]+$ +^\w{3} [ :0-9]{11} [._[:alnum:]-]+ innd: [._[:alnum:]-]+:[0-9]+ NCmode \"mode stream\" received$ +^\w{3} [ :0-9]{11} [._[:alnum:]-]+ innd: [._[:alnum:]-]+ connected [0-9]+ streaming allowed$ +^\w{3} [ :0-9]{11} [._[:alnum:]-]+ innd: ME HISstats [0-9]+ hitpos [0-9]+ hitneg [0-9]+ missed [0-9]+ dne$ +^\w{3} [ :0-9]{11} [._[:alnum:]-]+ innd: ME time [0-9]+ hishave [0-9]+\([0-9]+\) hiswrite [0-9]+\([0-9]+\) hissync [0-9]+\([0-9]+\) idle [0-9]+\([0-9]+\) artclean [0-9]+\([0-9]+\) artwrite [0-9]+\([0-9]+\) artcncl [0-9]+\([0-9]+\) hishave/artcncl [0-9]+\([0-9]+\) his(grep|write)/artcncl [0-9]+\([0-9]+\) artlog/artcncl [0-9]+\([0-9]+\) his(write|grep)/artcncl [0-9]+\([0-9]+\) sitesend [0-9]+\([0-9]+\) overv [0-9]+\([0-9]+\) perl [0-9]+\([0-9]+\) nntpread [0-9]+\([0-9]+\) artparse [0-9]+\([0-9]+\)( artlog/artparse [0-9]+\([0-9]+\))? 
artlog [0-9]+\([0-9]+\) datamove [0-9]+\([0-9]+\)$ +^\w{3} [ :0-9]{11} [._[:alnum:]-]+ innd: SERVER (servermode|flushlogs) (running|paused)$ +^\w{3} [ :0-9]{11} [._[:alnum:]-]+ innd: SERVER paused Flushing log and syslog files$ +^\w{3} [ :0-9]{11} [._[:alnum:]-]+ innd: SERVER running$ +^\w{3} [ :0-9]{11} [._[:alnum:]-]+ innd: SERVER paused Expiring process [0-9]+$ +^\w{3} [ :0-9]{11} [._[:alnum:]-]+ batcher\[[0-9]+\]: batcher [[:alnum:]]+ times user [.0-9]+ system [.0-9]+ elapsed [.0-9]+$ +^\w{3} [ :0-9]{11} [._[:alnum:]-]+ batcher\[[0-9]+\]: batcher [[:alnum:]]+ stats batches [0-9]+ articles [0-9]+ bytes [0-9]+$ +^\w{3} [ :0-9]{11} [._[:alnum:]-]+ nnrpd\[[0-9]+\]: Reading access from /etc/news/readers\.conf$ +^\w{3} [ :0-9]{11} [._[:alnum:]-]+ nnrpd\[[0-9]+\]: SERVER perl filtering enabled$ +^\w{3} [ :0-9]{11} [._[:alnum:]-]+ nnrpd\[[0-9]+\]: [._[:alnum:]-]+ \([.0-9]+\) connect$ +^\w{3} [ :0-9]{11} [._[:alnum:]-]+ nnrpd\[[0-9]+\]: [._[:alnum:]-]+ timeout$ +^\w{3} [ :0-9]{11} [._[:alnum:]-]+ nnrpd\[[0-9]+\]: [._[:alnum:]-]+ group [.[:alnum:]+-]+ [0-9]+$ +^\w{3} [ :0-9]{11} [._[:alnum:]-]+ nnrpd\[[0-9]+\]: Auth strategy '[[:alnum:]]+' does not match client\. Removing\.$ +^\w{3} [ :0-9]{11} [._[:alnum:]-]+ nnrpd\[[0-9]+\]: [._[:alnum:]-]+ (no_)?match_user [<>_[:alnum:]-]+(@[._[:alnum:]-]+)? [<>,_,\*,\![:alnum:][:punct:]-]+$ +^\w{3} [ :0-9]{11} [._[:alnum:]-]+ nnrpd\[[0-9]+\]: [._[:alnum:]-]+ res <[_[:alnum:]-]+>(@[._[:alnum:]-]+)?$ +^\w{3} [ :0-9]{11} [._[:alnum:]-]+ nnrpd\[[0-9]+\]: [._[:alnum:]-]+ time [0-9]+ (hisgrep [0-9]+\([0-9]+\) )?idle [0-9]+\([0-9]+\) (readart [0-9]+\([0-9]+\) )?nntpwrite [0-9]+\([0-9]+\)$ +^\w{3} [ :0-9]{11} [._[:alnum:]-]+ nnrpd\[[0-9]+\]: [._[:alnum:]-]+ times user [.0-9]+ system [.0-9]+ idle [.0-9]+ elapsed [.0-9]+$ +^\w{3} [ :0-9]{11} [._[:alnum:]-]+ nnrpd\[[0-9]+\]: [._[:alnum:]-]+ exit articles [0-9]+ groups [0-9]+$ +^\w{3} [ :0-9]{11} [._[:alnum:]-]+ nnrpd\[[0-9]+\]: [._[:alnum:]-]+ artstats get [0-9]+ time [0-9]+ size [0-9]+$ +^\w{3} [ :0-9]{11} [._[:alnum:]-]+ nnrpd\[[0-9]+\]: [._[:alnum:]-]+ post ok <[[:graph:]]+@[._[:alnum:]-]+>$ +^\w{3} [ :0-9]{11} [._[:alnum:]-]+ nnrpd\[[0-9]+\]: [._[:alnum:]-]+ \(unknown\) posttrack ok [[:graph:]]+<[[:graph:]]+@[._[:alnum:]-]+>$ +^\w{3} [ :0-9]{11} [._[:alnum:]-]+ nnrpd\[[0-9]+\]: [._[:alnum:]-]+ user [[:alnum:][:punct:]]+$ +^\w{3} [ :0-9]{11} [._[:alnum:]-]+ nnrpd\[[0-9]+\]: [._[:alnum:]-]+ Tracking Disabled \(unknown\)$ +^\w{3} [ :0-9]{11} [._[:alnum:]-]+ nnrpd\[[0-9]+\]: [._[:alnum:]-]+ auth authenticator successful, user [[:alnum:][:punct:]]+$ +^\w{3} [ :0-9]{11} [._[:alnum:]-]+ nnrpd\[[0-9]+\]: [._[:alnum:]-]+ auth starting authenticator [[:alnum:][:space:][:punct:]]+$ +^\w{3} [ :0-9]{11} [._[:alnum:]-]+ nnrpd\[[0-9]+\]: [._[:alnum:]-]+ no_access_realm$ +^\w{3} [ :0-9]{11} [._[:alnum:]-]+ cnfsstat\[[0-9]+\]: Class [[:alnum:]]+ for groups matching \"[^[:space:]]+\" Buffer [[:alnum:]]+, len: [0-9]+ Mbytes, used: [0-9]+\.[0-9]+ Mbytes \([0-9 ]+\.[0-9]%\) [ 0-9]+ cycles$ +^\w{3} [ :0-9]{11} [._[:alnum:]-]+ send-uucp\[[0-9]+\]: checking site [^[:space:]]+$ +^\w{3} [ :0-9]{11} [._[:alnum:]-]+ send-uucp\[[0-9]+\]: no articles for [^[:space:]]+$ +^\w{3} [ :0-9]{11} [._[:alnum:]-]+ send-uucp\[[0-9]+\]: Flushing [^[:space:]]+ for site [^[:space:]]+$ +^\w{3} [ :0-9]{11} [._[:alnum:]-]+ send-uucp\[[0-9]+\]: batched articles for [^[:space:]]+$ +^\w{3} [ :0-9]{11} [._[:alnum:]-]+ innfeed\[[0-9]+\]: ME time [0-9]+ idle [0-9]+\([0-9]+\) blstats [0-9]+\([0-9]+\) stsfile [0-9]+\([0-9]+\) newart [0-9]+\([0-9]+\) readart 
[0-9]+\([0-9]+\) prepart [0-9]+\([0-9]+\) read [0-9]+\([0-9]+\) write [0-9]+\([0-9]+\) cb [0-9]+\([0-9]+\)$ +^\w{3} [ :0-9]{11} [._[:alnum:]-]+ innfeed\[[0-9]+\]: [._[:alnum:]-]+ spooling no active connections$ +^\w{3} [ :0-9]{11} [._[:alnum:]-]+ innfeed\[[0-9]+\]: [._[:alnum:]-]+:[0-9]+ connected$ +^\w{3} [ :0-9]{11} [._[:alnum:]-]+ innfeed\[[0-9]+\]: [._[:alnum:]-]+ remote MODE STREAM$ +^\w{3} [ :0-9]{11} [._[:alnum:]-]+ innfeed\[[0-9]+\]: [._[:alnum:]-]+ (final|checkpoint) seconds [0-9]+ spooled [0-9]+ on_close [0-9]+ sleeping [0-9]+$ +^\w{3} [ :0-9]{11} [._[:alnum:]-]+ innfeed\[[0-9]+\]: [._[:alnum:]-]+ hostChkCxns - maxConnections was [0-9]+ now [0-9]+$ +^\w{3} [ :0-9]{11} [._[:alnum:]-]+ innfeed\[[0-9]+\]: ME articles (active|total) [0-9]+ bytes [0-9]+$ +^\w{3} [ :0-9]{11} [._[:alnum:]-]+ innfeed\[[0-9]+\]: [._[:alnum:]-]+:[0-9]+ cxnsleep connect: Connection refused$ diff --git a/debian/inn2.logcheck.violations.ignore b/debian/inn2.logcheck.violations.ignore new file mode 100644 index 0000000..4dc7ef7 --- /dev/null +++ b/debian/inn2.logcheck.violations.ignore @@ -0,0 +1,10 @@ +^\w{3} [ :0-9]{11} [._[:alnum:]-]+ innd: [-[:alnum:].]+:[0-9]+ (closed|checkpoint) seconds [0-9]+ accepted [0-9]+ refused [0-9]+ rejected [0-9]+ duplicate [0-9]+ accepted size [0-9]+ duplicate size [0-9]+$ +^\w{3} [ :0-9]{11} [._[:alnum:]-]+ innd: rejecting\[perl\] <[[:alnum:][:punct:]]+@[.[:alnum:]-]+> [0-9]+ [[:alnum:] ]+( \([._[:alnum:]-]+\))?$ +^\w{3} [ :0-9]{11} [._[:alnum:]-]+ rnews: rejected [0-9]+ Unwanted (newsgroup|distribution) "[._,[:alnum:]-]+"$ +^\w{3} [ :0-9]{11} [._[:alnum:]-]+ rnews: rejected [0-9]+ Too old -- "\w{3}, [0-9 ]+ \w{3} [0-9]{4} [0-9:]{8} (\+|-)[0-9]{4}"$ +^\w{3} [ :0-9]{11} [._[:alnum:]-]+ rnews: rejected [0-9]+ Too old -- "[0-9]+ \w{3} [0-9]{4} [0-9:]{8} ([[:upper:]]+|(\+|-)[0-9]{4})"$ +^\w{3} [ :0-9]{11} [._[:alnum:]-]+ rnews: rejected [0-9]+ No colon-space in "("|x-no-archive:yes)" header$ +^\w{3} [ :0-9]{11} [._[:alnum:]-]+ rnews: offered <[^[:space:]]+> [._[:alnum:]-]+$ +^\w{3} [ :0-9]{11} [._[:alnum:]-]+ nnrpd\[[0-9]+\]: [^[:space:]]+ posts received [0-9]+ rejected [0-9]+$ +^\w{3} [ :0-9]{11} [._[:alnum:]-]+ nnrpd\[[0-9]+\]: \? reverse lookup for [0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3} failed: Unknown host -- using IP address for access$ +^\w{3} [ :0-9]{11} [._[:alnum:]-]+ innfeed\[[0-9]+\]: [._[:alnum:]-]+(:[0-9]+)? (final|global|checkpoint) seconds [0-9]+ offered [0-9]+ accepted [0-9]+ refused [0-9]+ rejected [0-9]+ (missing [0-9]+ )?accsize [0-9]+ rejsize [0-9]+( spooled [0-9]+ (on_close [0-9]+ )?unspooled [0-9]+)?( deferred [0-9]+/[0-9.]+ requeued [0-9]+ queue [0-9.]+/[0-9\:\,]+)?$ diff --git a/debian/inn2.postinst b/debian/inn2.postinst new file mode 100644 index 0000000..40214d0 --- /dev/null +++ b/debian/inn2.postinst @@ -0,0 +1,181 @@ +#!/bin/sh -e + +init_inn_files() { + PATHDB=$(/usr/lib/news/bin/innconfval pathdb) + if [ -z "$PATHDB" ]; then + echo "Cannot determine the database path, aborting." + exit 1 + fi + cd $PATHDB + + local package='inn2' + if [ -e /usr/share/doc/inn2-lfs/ ]; then + package='inn2-lfs' + fi + + for file in active newsgroups; do + if [ ! -f $file ]; then + echo "Installing initial content for $PATHDB/$file" + install -m 644 -o news -g news \ + /usr/share/doc/$package/examples/$file . + fi + done + + if [ ! -f history.dir ]; then + echo -n "Building history database in $PATHDB... " + if ! /usr/lib/news/bin/makehistory; then + echo "failed!" + return + fi + if ! /usr/lib/news/bin/makedbz -i -o -s 300000; then + echo "failed!" 
+ return + fi + chown news:news history* + chmod 664 history* + echo "done." + fi + + if [ ! -f active.times ]; then + touch active.times + chown news:news active.times + chmod 644 active.times + fi + + # squelch initial noise in email if this isn't present + if [ ! -f .news.daily ]; then + touch .news.daily + chown news:news .news.daily + fi + + # make sure typical log files exist, and can be rotated + if [ ! -d /var/log/news ]; then + install -d -m 775 -o news -g news /var/log/news + fi + cd /var/log/news + [ -f news.notice ] || touch news.crit news.err news.notice + chown news:news . OLD path news.crit news.err news.notice + + if [ -x /etc/init.d/inn2 ]; then + update-rc.d inn2 defaults > /dev/null + fi +} + +check_usenet_alias() { + # must have an alias for user usenet, point it to root by default + if [ -f /etc/aliases ] && ! grep -q '^usenet:' /etc/aliases \ + && ! getent passwd usenet; then + echo "Adding alias for pseudo-user usenet to /etc/aliases." + echo "usenet: root" >> /etc/aliases + [ -x /usr/bin/newaliases ] && /usr/bin/newaliases + fi +} + +upgrade_inn_conf() { + cd /etc/news + if [ "$2" ] && dpkg --compare-versions "$2" lt "2.3.999+20030125-1"; then + /usr/lib/news/bin/innupgrade -f inn.conf + fi +} + +rebuild_history_index() { + [ -f /var/lib/news/must-rebuild-history-index ] || return 0 + + cd /var/lib/news + HLINES=$(tail -1 history.dir | awk '{ print $1 }') + [ "$HLINES" ] || HLINES=1000000 + echo "Rebuilding the history index for $HLINES lines, please wait..." + rm history.hash history.index history.dir + su news -c "/usr/lib/news/bin/makedbz -s $HLINES -f history" + + rm /var/lib/news/must-rebuild-history-index +} + +rebuild_overview() { + [ -f /var/lib/news/must-rebuild-overview ] || return 0 + + OVENABLED=$(/usr/lib/news/bin/innconfval enableoverview) + if [ -z "$OVENABLED" ]; then + echo "Cannot determine the overview method used, stopping." + exit 1 + fi + if [ $OVENABLED = no -o $OVENABLED = false ]; then + return 0 + fi + + OVMETHOD=$(/usr/lib/news/bin/innconfval ovmethod) + if [ -z "$OVMETHOD" ]; then + echo "Cannot determine the overview method used, stopping." + exit 1 + elif [ $OVMETHOD = tradindexed -o $OVMETHOD = ovdb ]; then + OVPATH=$(/usr/lib/news/bin/innconfval pathoverview) + if [ -z "$OVPATH" ]; then + echo "Cannot determine the overview path, aborting." + exit 1 + fi + echo "Deleting the old overview database, please wait..." + find $OVPATH -type f -not -name DB_CONFIG -print0 | xargs -0 -r rm -f + elif [ $OVMETHOD = buffindexed ]; then + echo "Deleting the old overview database, please wait..." + awk -F : '/^[0-9]/ { print $2 }' < /etc/news/buffindexed.conf | \ + while read name size; do + dd if=/dev/zero of="$name" bs=1024 count="$size" + done + else + echo "Unknown overview method '$OVMETHOD', aborting." + exit 1 + fi + + echo "Rebuilding the overview database, please wait..." + su news -c "/usr/lib/news/bin/makehistory -F -O -x" + + rm /var/lib/news/must-rebuild-overview +} + +start_innd() { +# make sure we can determine the FQDN, since innd won't launch if we can't +if hostname --fqdn > /dev/null 2>&1; then + invoke-rc.d inn2 start || echo "Could not start INN!" +else +cat < /dev/null || true + ctlinnd -t 20 xexec inndstart > /dev/null \ + || echo "Could not restart INN!" 
+ fi + ;; + + abort-upgrade|abort-remove|abort-deconfigure) + ;; + + *) + echo "postinst called with unknown argument '$1'" >&2 + ;; +esac + +#DEBHELPER# + +exit 0 + diff --git a/debian/inn2.postrm b/debian/inn2.postrm new file mode 100644 index 0000000..53101ec --- /dev/null +++ b/debian/inn2.postrm @@ -0,0 +1,14 @@ +#!/bin/sh -e + +if [ "$1" = "purge" ]; then + update-rc.d inn2 remove >/dev/null + if [ -e /var/lib/news/ ]; then + rm -f /var/lib/news/.news.daily /var/lib/news/active* \ + /var/lib/news/newsgroups /var/lib/news/history* + rmdir --ignore-fail-on-non-empty /var/lib/news/ + fi +fi + +#DEBHELPER# + +exit 0 diff --git a/debian/inn2.preinst b/debian/inn2.preinst new file mode 100644 index 0000000..70a1001 --- /dev/null +++ b/debian/inn2.preinst @@ -0,0 +1,28 @@ +#!/bin/sh -e + +if [ "$2" ] && dpkg --compare-versions $2 gt 2.0.0 \ + && dpkg --compare-versions $2 lt 2.3.0; then + echo "Some configuration files have changed in INN 2.4 and will need to" + echo "be adjusted, most notably nnrp.access has mutated into readers.conf." + echo "Also, note that you may need to rebuild the history database." + echo "For more information, read the /usr/share/doc/inn2/NEWS.gz file." +fi + +if [ "$2" ] && dpkg --compare-versions $2 eq 2.3.0-1; then + echo 'Upgrade from 2.3.0-1 to >= 2.3.999+20030125-4 is not supported.' + echo 'Aborting inn upgrade.' + exit 1 +fi + +if [ "$2" ] && dpkg --compare-versions $2 lt 2.3.1-2; then + # remove any remaining symlinks under /usr/lib/news/bin/filter, then remove + # the directory if it's empty + if [ -d /usr/lib/news/bin/filter ]; then + find /usr/lib/news/bin/filter -type l -exec rm {} \; + rmdir /usr/lib/news/bin/filter 2> /dev/null || true + fi +fi + +#DEBHELPER# + +exit 0 diff --git a/debian/inn2.prerm b/debian/inn2.prerm new file mode 100644 index 0000000..b111c76 --- /dev/null +++ b/debian/inn2.prerm @@ -0,0 +1,25 @@ +#!/bin/sh -e + +kill_innd() { + if [ -x /etc/init.d/inn2 ]; then + invoke-rc.d inn2 stop + fi +} + +case "$1" in + remove|deconfigure|failed-upgrade) + kill_innd + ;; + + upgrade) + ;; + + *) + echo "$0 called with unknown argument '$1'" >&2 + exit 1 + ;; +esac + +#DEBHELPER# + +exit 0 diff --git a/debian/patches/configure-hostname b/debian/patches/configure-hostname new file mode 100644 index 0000000..874915a --- /dev/null +++ b/debian/patches/configure-hostname @@ -0,0 +1,11 @@ +--- a/configure ++++ b/configure +@@ -5839,7 +5839,7 @@ else + fi + + +-HOSTNAME=`hostname 2> /dev/null || uname -n` ++HOSTNAME=server.example.net + + + if test $ac_cv_prog_gcc = yes; then diff --git a/debian/patches/debian-paths b/debian/patches/debian-paths new file mode 100644 index 0000000..38327a5 --- /dev/null +++ b/debian/patches/debian-paths @@ -0,0 +1,10 @@ +--- a/samples/buffindexed.conf ++++ b/samples/buffindexed.conf +@@ -7,5 +7,5 @@ + # index(0-65535) : path to buffer file : + # length of buffer in kilobytes in decimal (1KB = 1024 bytes) + +-0:/var/news/spool/overview/OV1:1536000 +-1:/var/news/spool/overview/OV2:1536000 ++0:/var/spool/news/overview/OV1:1536000 ++1:/var/spool/news/overview/OV2:1536000 diff --git a/debian/patches/fix_ad_flag b/debian/patches/fix_ad_flag new file mode 100644 index 0000000..a2e4862 --- /dev/null +++ b/debian/patches/fix_ad_flag @@ -0,0 +1,15 @@ +honour the Ad flag in newsfeeds + +http://inn.eyrie.org/viewcvs/branches/2.4/innd/art.c?r1=7748&r2=7936&pathrev=7936&view=patch + +--- 2.4/innd/art.c 2008/04/06 13:49:56 7748 ++++ 2.4/innd/art.c 2008/07/20 10:20:41 7936 +@@ -1725,7 +1725,7 @@ + !DISTwantany(sp->Distributions, 
list)) + /* Not in the site's desired list of distributions. */ + continue; +- if (sp->DistRequired && list == NULL) ++ if (sp->DistRequired && (list == NULL || *list == NULL)) + /* Site requires Distribution header and there isn't one. */ + continue; + diff --git a/debian/patches/fix_body_regexps b/debian/patches/fix_body_regexps new file mode 100644 index 0000000..9a0aedd --- /dev/null +++ b/debian/patches/fix_body_regexps @@ -0,0 +1,82 @@ +Fix the correct handling of bodies (Perl regexps were sometimes +not properly working on SV * bodies). We now use a shared string. +For Perl < 5.7.1, fall back to a copy of such bodies. At least, +that method is reliable, even though it were 17% slower. + +http://inn.eyrie.org/viewcvs/branches/2.4/include/ppport.h?r1=7237&r2=7951&pathrev=7951&view=patch +http://inn.eyrie.org/viewcvs/branches/2.4/innd/perl.c?r1=7815&r2=7951&pathrev=7951&view=patch + +--- 2.4/innd/perl.c 2008/05/05 08:43:58 7815 ++++ 2.4/innd/perl.c 2008/08/05 19:41:17 7951 +@@ -69,7 +69,6 @@ + CV * filter; + int i, rc; + char * p; +- static SV * body = NULL; + static char buf[256]; + + if (!PerlFilterActive) return NULL; +@@ -87,23 +86,19 @@ + } + + /* Store the article body. We don't want to make another copy of it, +- since it could potentially be quite large. Instead, stash the +- pointer in the static SV * body. We set LEN to 0 and inc the +- refcount to tell Perl not to free it (either one should be enough). +- Requires 5.004. In testing, this produced a 17% speed improvement +- over making a copy of the article body for a fairly heavy filter. */ ++ * since it could potentially be quite large. In testing, this produced ++ * a 17% speed improvement over making a copy of the article body ++ * for a fairly heavy filter. ++ * Available since Perl 5.7.1, newSVpvn_share allows to avoid such ++ * a copy (getting round its use for older versions of Perl leads ++ * to unreliable SV * bodies as for regexps). And for Perl not to ++ * compute a hash for artBody, we give it "42". 
*/ + if (artBody) { +- if (!body) { +- body = newSV(0); +- (void) SvUPGRADE(body, SVt_PV); +- } +- SvPVX(body) = artBody; +- SvCUR_set(body, artLen); +- SvLEN_set(body, 0); +- SvPOK_on(body); +- (void) SvREADONLY_on(body); +- (void) SvREFCNT_inc(body); +- hv_store(hdr, "__BODY__", 8, body, 0); ++#if (PERL_REVISION == 5) && ((PERL_VERSION < 7) || ((PERL_VERSION == 7) && (PERL_SUBVERSION < 1))) ++ hv_store(hdr, "__BODY__", 8, newSVpv(artBody, artLen), 0); ++#else ++ hv_store(hdr, "__BODY__", 8, newSVpvn_share(artBody, artLen, 42), 0); ++#endif /* Perl < 5.7.1 */ + } + + hv_store(hdr, "__LINES__", 9, newSViv(lines), 0); +--- 2.4/include/ppport.h 2005/06/05 21:57:50 7237 ++++ 2.4/include/ppport.h 2008/08/05 19:41:17 7951 +@@ -150,6 +150,7 @@ + # endif + #endif + #ifndef PERL_VERSION ++# define PERL_REVISION (5) + # ifdef PERL_PATCHLEVEL + # define PERL_VERSION PERL_PATCHLEVEL + # else +@@ -162,7 +163,7 @@ + # define ERRSV perl_get_sv("@",false) + #endif + +-#if (PERL_VERSION < 4) || ((PERL_VERSION == 4) && (PERL_SUBVERSION <= 4)) ++#if (PERL_REVISION == 5) && ((PERL_VERSION < 4) || ((PERL_VERSION == 4) && (PERL_SUBVERSION <= 4))) + # define PL_sv_undef sv_undef + # define PL_sv_yes sv_yes + # define PL_sv_no sv_no +@@ -174,7 +175,7 @@ + # define PL_copline copline + #endif + +-#if (PERL_VERSION < 5) ++#if (PERL_REVISION == 5) && (PERL_VERSION < 5) + # undef dTHR + # ifdef WIN32 + # define dTHR extern int Perl___notused diff --git a/debian/patches/no-makedbz-on-install b/debian/patches/no-makedbz-on-install new file mode 100644 index 0000000..63b3f65 --- /dev/null +++ b/debian/patches/no-makedbz-on-install @@ -0,0 +1,11 @@ +--- a/site/Makefile ++++ b/site/Makefile +@@ -116,7 +116,7 @@ config: $(ALL) + ## Don't use parallel rules -- we want this to be viewed carefully. + install: all $(PAUSE) install-config $(RELOAD_AND_GO) + reload-install: all pause install-config reload go +-install-config: update $(REST_INSTALLED) $(SPECIAL) ++install-config: update $(REST_INSTALLED) #$(SPECIAL) + + ## Install scripts, not per-host config files. + update: all $(MOST_INSTALLED) diff --git a/debian/patches/nocem-gpg-import b/debian/patches/nocem-gpg-import new file mode 100644 index 0000000..0c15727 --- /dev/null +++ b/debian/patches/nocem-gpg-import @@ -0,0 +1,28 @@ +--- a/control/perl-nocem.in ++++ b/control/perl-nocem.in +@@ -521,7 +521,9 @@ Processing NoCeM notices is easy to set + Import the keys of the NoCeM issuers you trust in order to check + the authenticity of their notices. You can do: + +- gpg --no-default-keyring --primary-keyring /pgp/ncmring.gpg --import ++ gpg --no-default-keyring --primary-keyring /pgp/ncmring.gpg \ ++ --no-options --allow-non-selfsigned-uid --no-permission-warning \ ++ --batch --import + + where is the value of the I parameter set in F + and the file containing the key(s) to import. The keyring +--- a/doc/man/perl-nocem.8 ++++ b/doc/man/perl-nocem.8 +@@ -157,8 +157,10 @@ Processing NoCeM notices is easy to set + Import the keys of the NoCeM issuers you trust in order to check + the authenticity of their notices. 
You can do: + .Sp +-.Vb 1 +-\& gpg \-\-no\-default\-keyring \-\-primary\-keyring /pgp/ncmring.gpg \-\-import ++.Vb 3 ++\& gpg \-\-no\-default\-keyring \-\-primary\-keyring=/etc/news/pgp/ncmring.gpg \e ++\& \-\-no\-options \-\-allow\-non\-selfsigned\-uid \-\-no\-permission\-warning \e ++\& \-\-batch \-\-import + .Ve + .Sp + where is the value of the \fIpathetc\fR parameter set in \fIinn.conf\fR diff --git a/debian/patches/series b/debian/patches/series new file mode 100644 index 0000000..9263031 --- /dev/null +++ b/debian/patches/series @@ -0,0 +1,20 @@ +# backported fixes +fix_ad_flag +fix_body_regexps + +# waiting to be merged upstream + +# debian-specific +nocem-gpg-import +debian-paths + +# packaging-related +configure-hostname +no-makedbz-on-install +u_innreport_misc +u_right_length +u_status_init_ip +u_tls_duplicate_reply +u_xhdr_permissions +u_xover_duplicate_reply +typo_inn_conf_man diff --git a/debian/patches/typo_inn_conf_man b/debian/patches/typo_inn_conf_man new file mode 100644 index 0000000..39c2a91 --- /dev/null +++ b/debian/patches/typo_inn_conf_man @@ -0,0 +1,11 @@ +--- a/doc/man/inn.conf.5 ++++ b/doc/man/inn.conf.5 +@@ -480,7 +480,7 @@ this parameter must be set if \fIenableo + .el .IP "\f(CWbuffindexed\fR" 4 + .IX Item "buffindexed" + Stores overview data and index information into buffers, which are +-preconfigured files defined in \fIbuffinedexed.conf\fR. \f(CW\*(C`buffindexed\*(C'\fR never ++preconfigured files defined in \fIbuffindexed.conf\fR. \f(CW\*(C`buffindexed\*(C'\fR never + consumes additional disk space beyond that allocated to these buffers. + .ie n .IP """tradindexed""" 4 + .el .IP "\f(CWtradindexed\fR" 4 diff --git a/debian/patches/u_innreport_misc b/debian/patches/u_innreport_misc new file mode 100644 index 0000000..0b4e7d2 --- /dev/null +++ b/debian/patches/u_innreport_misc @@ -0,0 +1,136 @@ +Bug-fixes for innreport: + - Test for the existence of 'img_dir' instead of 'html_dir' in innreport; + - Trailing comma after %innfeed_spooled with "Outgoing feeds (innfeed) + by Articles"; + - Column "Total" of "Outgoing feeds (innfeed) by Volume" tries to add + two hashes which evaluates to a constant 0; + - Gracefully handle undefined hash elements in "NNRP readership statistics + (by domain)"; + - Also added two error messages generated by perl-nocem. + +http://inn.eyrie.org/viewcvs/branches/2.4/scripts/innreport.in?r1=8142&r2=8141&pathrev=8142&view=patch +http://inn.eyrie.org/viewcvs/branches/2.4/samples/innreport.conf.in?r1=7945&r2=7944&pathrev=7945&view=patch +http://inn.eyrie.org/viewcvs/branches/2.4/scripts/innreport_inn.pm?r1=7945&r2=7944&pathrev=7945&view=patch + +--- 2.4/scripts/innreport.in 2008/10/05 23:47:25 8141 ++++ 2.4/scripts/innreport.in 2008/10/07 17:08:32 8142 +@@ -212,7 +212,7 @@ + $IMG_pth = $ref{'webpath'} if defined $ref{'webpath'}; + + $IMG_dir = $HTML_dir . "/" . 
$IMG_pth +- if (defined $output{'default'}{'html_dir'} || ++ if (defined $output{'default'}{'img_dir'} || + defined $ref{'w'} || defined $ref{'webpath'}) + && + (defined $output{'default'}{'html_dir'} || +--- 2.4/samples/innreport.conf.in 2008/08/03 07:30:03 7944 ++++ 2.4/samples/innreport.conf.in 2008/08/03 07:47:10 7945 +@@ -1267,7 +1267,7 @@ + data { + name "Spooled"; + color "#AF00FF"; +- value "%innfeed_spooled,"; ++ value "%innfeed_spooled"; + }; + }; + }; +@@ -1347,12 +1347,6 @@ + color "#FFAF00"; + value "%innfeed_rejected_size"; + }; +- data { +- name "Total"; +- color "#00FF00"; +- value "%innfeed_accepted_size + +- %innfeed_rejected_size"; +- }; + }; + }; + +@@ -2116,8 +2110,8 @@ + name "Rej"; + format_name "%4s"; + format "%4d"; +- value "$nnrpd_post_rej{$key} + +- $nnrpd_post_error{$key}"; ++ value "($nnrpd_post_rej{$key}||0) + ++ ($nnrpd_post_error{$key}||0)"; + total "total(%nnrpd_post_rej) + + total(%nnrpd_post_error)"; + }; +@@ -2179,8 +2173,8 @@ + name "Rej"; + format_name "%4s"; + format "%4d"; +- value "$nnrpd_dom_post_rej{$key} + +- $nnrpd_dom_post_error{$key}"; ++ value "($nnrpd_dom_post_rej{$key}||0) + ++ ($nnrpd_dom_post_error{$key}||0)"; + total "total(%nnrpd_dom_post_rej) + + total(%nnrpd_dom_post_error)"; + }; +--- 2.4/scripts/innreport_inn.pm 2008/08/03 07:30:03 7944 ++++ 2.4/scripts/innreport_inn.pm 2008/08/03 07:47:10 7945 +@@ -440,8 +440,8 @@ + # The exact timers change from various versions of INN, so try to deal + # with this in a general fashion. + if ($left =~ m/^\S+\s+ # ME +- time\ (\d+)\s+ # time +- ((?:\S+\ \d+\(\d+\)\s*)+) # timer values ++ time\s(\d+)\s+ # time ++ ((?:\S+\s\d+\(\d+\)\s*)+) # timer values + $/ox) { + $innd_time_times += $1; + my $timers = $2; +@@ -719,8 +719,8 @@ + # ME time X nnnn X(X) [...] + return 1 if $left =~ m/backlogstats/; + if ($left =~ m/^\S+\s+ # ME +- time\ (\d+)\s+ # time +- ((?:\S+\ \d+\(\d+\)\s*)+) # timer values ++ time\s(\d+)\s+ # time ++ ((?:\S+\s\d+\(\d+\)\s*)+) # timer values + $/ox) { + $innfeed_time_times += $1; + my $timers = $2; +@@ -1459,8 +1459,8 @@ + # The exact timers change from various versions of INN, so try to deal + # with this in a general fashion. + if ($left =~ m/^\S+\s+ # ME +- time\ (\d+)\s+ # time +- ((?:\S+\ \d+\(\d+\)\s*)+) # timer values ++ time\s(\d+)\s+ # time ++ ((?:\S+\s\d+\(\d+\)\s*)+) # timer values + $/ox) { + $nnrpd_time_times += $1; + my $timers = $2; +@@ -1683,13 +1683,28 @@ + $nocem_totalids{$nocem_lastid} += $2; + return 1; + } +- if ($left =~ /bad signature from (.*)/o) { ++ if ($left =~ /Article <[^>]*>: (.*) \(ID [[:xdigit:]]*\) not in keyring/o) { ++ $nocem_badsigs{$1}++; ++ $nocem_goodsigs{$1} = 0 unless ($nocem_goodsigs{$1}); ++ $nocem_totalbad++; ++ $nocem_lastid = $1; ++ return 1; ++ } ++ if ($left =~ /Article <[^>]*>: bad signature from (.*)/o) { + $nocem_badsigs{$1}++; + $nocem_goodsigs{$1} = 0 unless ($nocem_goodsigs{$1}); + $nocem_totalbad++; + $nocem_lastid = $1; + return 1; + } ++ if ($left =~ /Article <[^>]*>: malformed signature/o) { ++ $nocem_badsigs{'N/A'}++; ++ $nocem_goodsigs{'N/A'} = 0 unless ($nocem_goodsigs{'N/A'}); ++ $nocem_totalbad++; ++ $nocem_lastid = 'N/A'; ++ return 1; ++ } ++ + return 1; + } + diff --git a/debian/patches/u_right_length b/debian/patches/u_right_length new file mode 100644 index 0000000..b8cc72e --- /dev/null +++ b/debian/patches/u_right_length @@ -0,0 +1,15 @@ +Bug-fix for TLS: return 1 when length is right. 
+ +http://inn.eyrie.org/viewcvs/branches/2.4/nnrpd/tls.c?r1=8058&r2=8057&pathrev=8058&view=patch + +--- 2.4/nnrpd/tls.c 2008/09/26 23:11:47 8057 ++++ 2.4/nnrpd/tls.c 2008/09/26 23:12:49 8058 +@@ -257,7 +257,7 @@ + X509_verify_cert_error_string(err)); + + if (verify_depth >= depth) { +- ok = 0; ++ ok = 1; + verify_error = X509_V_OK; + } else { + ok = 0; diff --git a/debian/patches/u_status_init_ip b/debian/patches/u_status_init_ip new file mode 100644 index 0000000..e91b1e3 --- /dev/null +++ b/debian/patches/u_status_init_ip @@ -0,0 +1,26 @@ +Fix a bug in the IP address displayed for localhost in innd's status file. +It was not correctly initialized (it is a local connection which does not +use any IP address). + +http://inn.eyrie.org/viewcvs/branches/2.4/innd/status.c?r1=7947&r2=7946&pathrev=7947&view=patch + +--- 2.4/innd/status.c 2008/08/03 07:50:03 7946 ++++ 2.4/innd/status.c 2008/08/03 07:55:20 7947 +@@ -153,9 +153,14 @@ + status = xmalloc(sizeof(STATUS)); + peers++; /* a new peer */ + strlcpy(status->name, TempString, sizeof(status->name)); +- strlcpy(status->ip_addr, +- sprint_sockaddr((struct sockaddr *)&cp->Address), +- sizeof(status->ip_addr)); ++ if (cp->Address.ss_family == 0) { ++ /* Connections from lc.c do not have an IP address. */ ++ memset(&status->ip_addr, 0, sizeof(status->ip_addr)); ++ } else { ++ strlcpy(status->ip_addr, ++ sprint_sockaddr((struct sockaddr *)&cp->Address), ++ sizeof(status->ip_addr)); ++ } + status->can_stream = cp->Streaming; + status->seconds = status->Size = status->DuplicateSize = 0; + status->Ihave = status->Ihave_Duplicate = diff --git a/debian/patches/u_tls_duplicate_reply b/debian/patches/u_tls_duplicate_reply new file mode 100644 index 0000000..1b6c01e --- /dev/null +++ b/debian/patches/u_tls_duplicate_reply @@ -0,0 +1,15 @@ +Do not send 580 when negotiation fails (382 has already been sent). + +http://inn.eyrie.org/viewcvs/branches/2.4/nnrpd/misc.c?r1=8057&r2=8056&pathrev=8057&view=patch + +--- 2.4/nnrpd/misc.c 2008/09/26 23:02:08 8056 ++++ 2.4/nnrpd/misc.c 2008/09/26 23:11:47 8057 +@@ -544,7 +544,7 @@ + result=tls_start_servertls(0, /* read */ + 1); /* write */ + if (result==-1) { +- Reply("%d Starttls failed\r\n", NNTP_STARTTLS_BAD_VAL); ++ /* No reply because we have already sent NNTP_STARTTLS_NEXT_VAL. */ + return; + } + nnrpd_starttls_done = 1; diff --git a/debian/patches/u_xhdr_permissions b/debian/patches/u_xhdr_permissions new file mode 100644 index 0000000..8e876af --- /dev/null +++ b/debian/patches/u_xhdr_permissions @@ -0,0 +1,49 @@ +XHDR and XPAT were not checking the permissions the user has to read +articles when using a message-ID. Now fixed, as well as calls to ARTclose(). 
+ +http://inn.eyrie.org/viewcvs/branches/2.4/nnrpd/article.c?r1=8004&r2=8003&pathrev=8004&view=patch + +--- 2.4/nnrpd/article.c 2008/09/05 19:13:28 8003 ++++ 2.4/nnrpd/article.c 2008/09/06 08:49:55 8004 +@@ -688,6 +688,7 @@ + if (ac > 1) + ARTnumber = tart; + if ((msgid = GetHeader("Message-ID")) == NULL) { ++ ARTclose(); + Reply("%s\r\n", ARTnoartingroup); + return; + } +@@ -745,9 +746,9 @@ + if (!ARTopen(ARTnumber)) + continue; + msgid = GetHeader("Message-ID"); ++ ARTclose(); + } while (msgid == NULL); + +- ARTclose(); + Reply("%d %d %s Article retrieved; request text separately.\r\n", + NNTP_NOTHING_FOLLOWS_VAL, ARTnumber, msgid); + } +@@ -1008,6 +1009,12 @@ + Printf("%d No such article.\r\n", NNTP_DONTHAVEIT_VAL); + break; + } ++ if (!PERMartok()) { ++ ARTclose(); ++ Printf("%s\r\n", NOACCESS); ++ break; ++ } ++ + Printf("%d %s matches follow (ID)\r\n", NNTP_HEAD_FOLLOWS_VAL, + header); + if ((text = GetHeader(header)) != NULL +@@ -1047,8 +1054,8 @@ + SendIOb(buff, strlen(buff)); + SendIOb(p, strlen(p)); + SendIOb("\r\n", 2); +- ARTclose(); + } ++ ARTclose(); + } + SendIOb(".\r\n", 3); + PushIOb(); diff --git a/debian/patches/u_xover_duplicate_reply b/debian/patches/u_xover_duplicate_reply new file mode 100644 index 0000000..88a53b5 --- /dev/null +++ b/debian/patches/u_xover_duplicate_reply @@ -0,0 +1,30 @@ +Fix a bug in the replies of XOVER/XHDR/XPAT when the group is empty. +Two initial replies were sent. + +http://inn.eyrie.org/viewcvs/branches/2.4/nnrpd/article.c?r1=8000&r2=7999&pathrev=8000&view=patch + +--- 2.4/nnrpd/article.c 2008/09/03 05:41:27 7999 ++++ 2.4/nnrpd/article.c 2008/09/04 17:06:51 8000 +@@ -854,9 +854,7 @@ + + /* Parse range. */ + if (!CMDgetrange(ac, av, &range, &DidReply)) { +- if (!DidReply) { +- Reply("%d data follows\r\n", NNTP_OVERVIEW_FOLLOWS_VAL); +- Printf(".\r\n"); ++ if (DidReply) { + return; + } + } +@@ -1028,10 +1026,7 @@ + + /* Range specified. */ + if (!CMDgetrange(ac - 1, av + 1, &range, &DidReply)) { +- if (!DidReply) { +- Reply("%d %s no matches follow (range)\r\n", +- NNTP_HEAD_FOLLOWS_VAL, header ? header : "\"\""); +- Printf(".\r\n"); ++ if (DidReply) { + break; + } + } diff --git a/debian/rules b/debian/rules new file mode 100755 index 0000000..91d2e12 --- /dev/null +++ b/debian/rules @@ -0,0 +1,198 @@ +#!/usr/bin/make -f +SHELL+= -e + +QUILT_STAMPFN := .stamp-patched +include /usr/share/quilt/quilt.make + +D-std := $(CURDIR)/debian/inn2 +D-lfs := $(CURDIR)/debian/inn2-lfs +D = $(D-$*) +B = $(CURDIR)/build-$* + +############################################################################## +# this code deals with building a second inn2-lfs package from the same +# source, but only on 32 bit architectures +# Ideally new future 32 bit architectures should not bother with inn2-lfs +# and just enable LFS by default. 
+ +DEB_HOST_ARCH ?= $(shell dpkg-architecture -qDEB_HOST_ARCH) +ifeq ($(DEB_HOST_ARCH),$(filter $(DEB_HOST_ARCH),amd64 ia64 ppc64 s390x)) +# 64 bit std package +FLAVORS := std +else ifeq ($(DEB_HOST_ARCH),$(filter $(DEB_HOST_ARCH),armel)) +# 32 bit LFS std package +FLAVORS := std +std_configure_flags = --enable-largefiles +else +# 32 bit std package and 32 bit LFS lfs package +FLAVORS := std lfs +lfs_configure_flags = --enable-largefiles +endif + +std_dh_clean_opts = -pinn2 -pinn2-inews -p inn2-dev +lfs_dh_clean_opts = -pinn2-lfs +std_dh_movefiles_opts = -pinn2 -pinn2-inews -p inn2-dev +lfs_dh_movefiles_opts = -pinn2-lfs -pinn2-lfs-inews -p inn2-lfs-dev + +ifeq ($(FLAVORS),std) +no_package := --no-package=inn2-lfs +endif + +# the upstream source needs to be copied in the flavor-specific build dirs +src_files := $(shell find . -maxdepth 1 \ + -not -name . -and -not -name debian -and -not -name .pc \ + -and -not -name 'build-*' -and -not -name '.stamp-*') + +############################################################################## +DEB_HOST_GNU_TYPE ?= $(shell dpkg-architecture -qDEB_HOST_GNU_TYPE) +DEB_BUILD_GNU_TYPE ?= $(shell dpkg-architecture -qDEB_BUILD_GNU_TYPE) +ifeq ($(DEB_BUILD_GNU_TYPE),$(DEB_HOST_GNU_TYPE)) + configure_flags += --build $(DEB_HOST_GNU_TYPE) +else + configure_flags += --build $(DEB_BUILD_GNU_TYPE) --host $(DEB_HOST_GNU_TYPE) +endif + +clean: unpatch + rm -rf .stamp-* build-* + [ ! -f Makefile.global ] || $(MAKE) distclean + # delete packages which are not in control but are built anyway + rm -rf debian/inn2-lfs-dev/ debian/inn2-lfs-inews/ + # delete the cloned debhelper configuration and logs + find debian -maxdepth 1 -name 'inn2-lfs*' -not -type d -print0 \ + | xargs --no-run-if-empty -0 rm + dh_clean + +configure: $(addprefix .stamp-configure-, $(FLAVORS)) +.stamp-configure-%: $(QUILT_STAMPFN) + dh_testdir + mkdir -p $B + for dir in $(src_files); do cp -ldpR $$dir $B; done + cd $B && \ + _PATH_PERL=/usr/bin/perl \ + ac_cv_path__PATH_AWK=awk \ + ac_cv_path__PATH_EGREP=egrep \ + ac_cv_path__PATH_SED=sed \ + ac_cv_path__PATH_SORT=sort \ + ac_cv_path__PATH_UUX=uux \ + ac_cv_path_PATH_GPGV=/usr/bin/gpgv \ + ac_cv_path_GETFTP=wget \ + ac_cv_search_dbm_open=-ldb \ + LDFLAGS="-Wl,--as-needed $(LDFLAGS)" \ + ./configure \ + --with-perl \ + --enable-ipv6 \ + --prefix=/usr/lib/news \ + --mandir=/usr/share/man \ + --includedir=/usr/include/inn \ + --with-db-dir=/var/lib/news \ + --with-etc-dir=/etc/news \ + --with-filter-dir=/etc/news/filter \ + --with-lib-dir=/usr/lib/news \ + --with-log-dir=/var/log/news \ + --with-run-dir=/var/run/news \ + --with-spool-dir=/var/spool/news \ + --with-tmp-dir=/var/spool/news/incoming/tmp \ + --with-berkeleydb=/usr \ + --with-kerberos=/usr \ + --with-sendmail=/usr/sbin/sendmail \ + $($*_configure_flags) $(configure_flags) + cd $B && \ + mkdir ssl/ ssl/nnrpd/ && \ + cd ssl/ && \ + ln -s ../Makefile.global ../include ../storage ../history . && \ + cd nnrpd/ && ln -s ../../nnrpd/* . 
+ touch $@ + +build: $(addprefix .stamp-build-, $(FLAVORS)) +.stamp-build-%: .stamp-configure-% + dh_testdir + cd $B && $(MAKE) + cd $B/ssl/nnrpd/ && $(MAKE) \ + SSLLIB='-L/usr/lib -lssl -lcrypto -ldl' SSLINC='-DHAVE_SSL=1' + touch $@ + +install1-%: .stamp-build-% + dh_testdir + dh_testroot + dh_clean -k $($*_dh_clean_opts) + + cd $B && $(MAKE) install DESTDIR=$D + sh -e extra/dh_cloneconf inn2 inn2-lfs + + dh_movefiles $($*_dh_movefiles_opts) --sourcedir=$(subst $(CURDIR)/,,$D) + +# move back this one + mv $D-dev/usr/share/man/man3/uwildmat.3 $D/usr/share/man/man3/ + +# remove assorted crap and +# make sure we don't ship active, active.times, newsgroups in place! + cd $D/etc/news/filter && rm -f *.py *.tcl + rm -rf $D/usr/lib/news/bin/simpleftp $D/usr/share/man/man1/simpleftp.1\ + $D/usr/lib/news/doc/ $D/var/lib/news/* \ + $D/usr/include/ + + mv $D/usr/share/man/man1/startinnfeed.1 \ + $D/usr/share/man/man8/startinnfeed.8 + + cp $B/ssl/nnrpd/nnrpd $D/usr/lib/news/bin/nnrpd-ssl + install -m 755 extra/buildinnkeyring extra/ginpaths2 \ + $D/usr/lib/news/bin/ + install -m 755 contrib/showtoken.in $D/usr/lib/news/bin/showtoken + install -m 755 extra/bunbatch $D-inews/usr/lib/news/bin/rnews.libexec/ + + install -m 644 extra/send-uucp.cf extra/sasl.conf $D/etc/news/ + + mkdir $D/var/log/news/path + +install2: $(addprefix install1-, $(FLAVORS)) + dh_link + dh_installchangelogs NEWS + dh_installdocs + dh_installexamples + dh_installinit --noscripts --init-script=inn2 + dh_installcron + dh_installlogcheck + dh_compress + dh_fixperms \ + -Xusr/lib/news/bin/inndstart -Xusr/lib/news/bin/startinnfeed + # some files are not writeable when installed by make install + dh_strip + +install3-%: install2 + chown root:news $D-inews/etc/news/passwd.nntp + chmod 640 $D-inews/etc/news/passwd.nntp + + chmod -x $D/usr/lib/news/bin/control/*.pl + chmod +rw \ + $D/usr/lib/news/bin/inndstart \ + $D/usr/lib/news/bin/startinnfeed + + chown news:uucp $D-inews/usr/lib/news/bin/rnews + chmod 4755 $D-inews/usr/lib/news/bin/rnews + + chown -R news:news $D/var/spool/news/ $D/var/lib/news/ \ + $D/var/run/news/ $D/var/log/news/ + chmod -R g+w $D/var/spool/news/ $D/var/lib/news/ \ + $D/var/run/news/ $D/var/log/news/ + +install4-std: install3-std + +# lfs-specific: rename some files installed by debhelper +install4-lfs: install3-lfs + for file in /etc/logcheck/ignore.d.server/inn2 /etc/logcheck/violations.ignore.d/inn2 /etc/cron.d/inn2; do \ + mv $(D-lfs)$$file-lfs $(D-lfs)$$file; \ + done + +install5: $(addprefix install4-, $(FLAVORS)) + dh_installdeb + dh_md5sums + dh_shlibdeps + dh_gencontrol $(no_package) -- \ + -VPERLAPI=$$(perl -MConfig -e 'print "perlapi-$$Config{version}"') + dh_builddeb $(no_package) + +binary-arch: install5 + +binary: binary-arch + +.PHONY: clean configure build binary-arch binary install% diff --git a/debian/watch b/debian/watch new file mode 100644 index 0000000..39ffdf8 --- /dev/null +++ b/debian/watch @@ -0,0 +1,3 @@ +version=3 +opts=dversionmangle=s/r$// \ +ftp://ftp.isc.org/isc/inn/inn-([\d\.]+)\.tar\.gz diff --git a/doc/GPL b/doc/GPL new file mode 100644 index 0000000..264509e --- /dev/null +++ b/doc/GPL @@ -0,0 +1,347 @@ +[ Please note that the only portions of INN covered by this license are + those files explicitly noted as being under the GPL in LICENSE. It is + a requirement of the GPL, however, that a copy of it be distributed + with software licensed under it, and some stand-alone programs that + are distributed with INN are covered under the GPL. 
] + + + GNU GENERAL PUBLIC LICENSE + Version 2, June 1991 + + Copyright (C) 1989, 1991 Free Software Foundation, Inc. + 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA + Everyone is permitted to copy and distribute verbatim copies + of this license document, but changing it is not allowed. + + Preamble + + The licenses for most software are designed to take away your +freedom to share and change it. By contrast, the GNU General Public +License is intended to guarantee your freedom to share and change free +software--to make sure the software is free for all its users. This +General Public License applies to most of the Free Software +Foundation's software and to any other program whose authors commit to +using it. (Some other Free Software Foundation software is covered by +the GNU Library General Public License instead.) You can apply it to +your programs, too. + + When we speak of free software, we are referring to freedom, not +price. Our General Public Licenses are designed to make sure that you +have the freedom to distribute copies of free software (and charge for +this service if you wish), that you receive source code or can get it +if you want it, that you can change the software or use pieces of it +in new free programs; and that you know you can do these things. + + To protect your rights, we need to make restrictions that forbid +anyone to deny you these rights or to ask you to surrender the rights. +These restrictions translate to certain responsibilities for you if you +distribute copies of the software, or if you modify it. + + For example, if you distribute copies of such a program, whether +gratis or for a fee, you must give the recipients all the rights that +you have. You must make sure that they, too, receive or can get the +source code. And you must show them these terms so they know their +rights. + + We protect your rights with two steps: (1) copyright the software, and +(2) offer you this license which gives you legal permission to copy, +distribute and/or modify the software. + + Also, for each author's protection and ours, we want to make certain +that everyone understands that there is no warranty for this free +software. If the software is modified by someone else and passed on, we +want its recipients to know that what they have is not the original, so +that any problems introduced by others will not reflect on the original +authors' reputations. + + Finally, any free program is threatened constantly by software +patents. We wish to avoid the danger that redistributors of a free +program will individually obtain patent licenses, in effect making the +program proprietary. To prevent this, we have made it clear that any +patent must be licensed for everyone's free use or not licensed at all. + + The precise terms and conditions for copying, distribution and +modification follow. + + GNU GENERAL PUBLIC LICENSE + TERMS AND CONDITIONS FOR COPYING, DISTRIBUTION AND MODIFICATION + + 0. This License applies to any program or other work which contains +a notice placed by the copyright holder saying it may be distributed +under the terms of this General Public License. The "Program", below, +refers to any such program or work, and a "work based on the Program" +means either the Program or any derivative work under copyright law: +that is to say, a work containing the Program or a portion of it, +either verbatim or with modifications and/or translated into another +language. (Hereinafter, translation is included without limitation in +the term "modification".) 
Each licensee is addressed as "you". + +Activities other than copying, distribution and modification are not +covered by this License; they are outside its scope. The act of +running the Program is not restricted, and the output from the Program +is covered only if its contents constitute a work based on the +Program (independent of having been made by running the Program). +Whether that is true depends on what the Program does. + + 1. You may copy and distribute verbatim copies of the Program's +source code as you receive it, in any medium, provided that you +conspicuously and appropriately publish on each copy an appropriate +copyright notice and disclaimer of warranty; keep intact all the +notices that refer to this License and to the absence of any warranty; +and give any other recipients of the Program a copy of this License +along with the Program. + +You may charge a fee for the physical act of transferring a copy, and +you may at your option offer warranty protection in exchange for a fee. + + 2. You may modify your copy or copies of the Program or any portion +of it, thus forming a work based on the Program, and copy and +distribute such modifications or work under the terms of Section 1 +above, provided that you also meet all of these conditions: + + a) You must cause the modified files to carry prominent notices + stating that you changed the files and the date of any change. + + b) You must cause any work that you distribute or publish, that in + whole or in part contains or is derived from the Program or any + part thereof, to be licensed as a whole at no charge to all third + parties under the terms of this License. + + c) If the modified program normally reads commands interactively + when run, you must cause it, when started running for such + interactive use in the most ordinary way, to print or display an + announcement including an appropriate copyright notice and a + notice that there is no warranty (or else, saying that you provide + a warranty) and that users may redistribute the program under + these conditions, and telling the user how to view a copy of this + License. (Exception: if the Program itself is interactive but + does not normally print such an announcement, your work based on + the Program is not required to print an announcement.) + +These requirements apply to the modified work as a whole. If +identifiable sections of that work are not derived from the Program, +and can be reasonably considered independent and separate works in +themselves, then this License, and its terms, do not apply to those +sections when you distribute them as separate works. But when you +distribute the same sections as part of a whole which is a work based +on the Program, the distribution of the whole must be on the terms of +this License, whose permissions for other licensees extend to the +entire whole, and thus to each and every part regardless of who wrote it. + +Thus, it is not the intent of this section to claim rights or contest +your rights to work written entirely by you; rather, the intent is to +exercise the right to control the distribution of derivative or +collective works based on the Program. + +In addition, mere aggregation of another work not based on the Program +with the Program (or with a work based on the Program) on a volume of +a storage or distribution medium does not bring the other work under +the scope of this License. + + 3. 
You may copy and distribute the Program (or a work based on it, +under Section 2) in object code or executable form under the terms of +Sections 1 and 2 above provided that you also do one of the following: + + a) Accompany it with the complete corresponding machine-readable + source code, which must be distributed under the terms of Sections + 1 and 2 above on a medium customarily used for software interchange; or, + + b) Accompany it with a written offer, valid for at least three + years, to give any third party, for a charge no more than your + cost of physically performing source distribution, a complete + machine-readable copy of the corresponding source code, to be + distributed under the terms of Sections 1 and 2 above on a medium + customarily used for software interchange; or, + + c) Accompany it with the information you received as to the offer + to distribute corresponding source code. (This alternative is + allowed only for noncommercial distribution and only if you + received the program in object code or executable form with such + an offer, in accord with Subsection b above.) + +The source code for a work means the preferred form of the work for +making modifications to it. For an executable work, complete source +code means all the source code for all modules it contains, plus any +associated interface definition files, plus the scripts used to +control compilation and installation of the executable. However, as a +special exception, the source code distributed need not include +anything that is normally distributed (in either source or binary +form) with the major components (compiler, kernel, and so on) of the +operating system on which the executable runs, unless that component +itself accompanies the executable. + +If distribution of executable or object code is made by offering +access to copy from a designated place, then offering equivalent +access to copy the source code from the same place counts as +distribution of the source code, even though third parties are not +compelled to copy the source along with the object code. + + 4. You may not copy, modify, sublicense, or distribute the Program +except as expressly provided under this License. Any attempt +otherwise to copy, modify, sublicense or distribute the Program is +void, and will automatically terminate your rights under this License. +However, parties who have received copies, or rights, from you under +this License will not have their licenses terminated so long as such +parties remain in full compliance. + + 5. You are not required to accept this License, since you have not +signed it. However, nothing else grants you permission to modify or +distribute the Program or its derivative works. These actions are +prohibited by law if you do not accept this License. Therefore, by +modifying or distributing the Program (or any work based on the +Program), you indicate your acceptance of this License to do so, and +all its terms and conditions for copying, distributing or modifying +the Program or works based on it. + + 6. Each time you redistribute the Program (or any work based on the +Program), the recipient automatically receives a license from the +original licensor to copy, distribute or modify the Program subject to +these terms and conditions. You may not impose any further +restrictions on the recipients' exercise of the rights granted herein. +You are not responsible for enforcing compliance by third parties to +this License. + + 7. 
If, as a consequence of a court judgment or allegation of patent +infringement or for any other reason (not limited to patent issues), +conditions are imposed on you (whether by court order, agreement or +otherwise) that contradict the conditions of this License, they do not +excuse you from the conditions of this License. If you cannot +distribute so as to satisfy simultaneously your obligations under this +License and any other pertinent obligations, then as a consequence you +may not distribute the Program at all. For example, if a patent +license would not permit royalty-free redistribution of the Program by +all those who receive copies directly or indirectly through you, then +the only way you could satisfy both it and this License would be to +refrain entirely from distribution of the Program. + +If any portion of this section is held invalid or unenforceable under +any particular circumstance, the balance of the section is intended to +apply and the section as a whole is intended to apply in other +circumstances. + +It is not the purpose of this section to induce you to infringe any +patents or other property right claims or to contest validity of any +such claims; this section has the sole purpose of protecting the +integrity of the free software distribution system, which is +implemented by public license practices. Many people have made +generous contributions to the wide range of software distributed +through that system in reliance on consistent application of that +system; it is up to the author/donor to decide if he or she is willing +to distribute software through any other system and a licensee cannot +impose that choice. + +This section is intended to make thoroughly clear what is believed to +be a consequence of the rest of this License. + + 8. If the distribution and/or use of the Program is restricted in +certain countries either by patents or by copyrighted interfaces, the +original copyright holder who places the Program under this License +may add an explicit geographical distribution limitation excluding +those countries, so that distribution is permitted only in or among +countries not thus excluded. In such case, this License incorporates +the limitation as if written in the body of this License. + + 9. The Free Software Foundation may publish revised and/or new versions +of the General Public License from time to time. Such new versions will +be similar in spirit to the present version, but may differ in detail to +address new problems or concerns. + +Each version is given a distinguishing version number. If the Program +specifies a version number of this License which applies to it and "any +later version", you have the option of following the terms and conditions +either of that version or of any later version published by the Free +Software Foundation. If the Program does not specify a version number of +this License, you may choose any version ever published by the Free Software +Foundation. + + 10. If you wish to incorporate parts of the Program into other free +programs whose distribution conditions are different, write to the author +to ask for permission. For software which is copyrighted by the Free +Software Foundation, write to the Free Software Foundation; we sometimes +make exceptions for this. Our decision will be guided by the two goals +of preserving the free status of all derivatives of our free software and +of promoting the sharing and reuse of software generally. + + NO WARRANTY + + 11. 
BECAUSE THE PROGRAM IS LICENSED FREE OF CHARGE, THERE IS NO WARRANTY +FOR THE PROGRAM, TO THE EXTENT PERMITTED BY APPLICABLE LAW. EXCEPT WHEN +OTHERWISE STATED IN WRITING THE COPYRIGHT HOLDERS AND/OR OTHER PARTIES +PROVIDE THE PROGRAM "AS IS" WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESSED +OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF +MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE. THE ENTIRE RISK AS +TO THE QUALITY AND PERFORMANCE OF THE PROGRAM IS WITH YOU. SHOULD THE +PROGRAM PROVE DEFECTIVE, YOU ASSUME THE COST OF ALL NECESSARY SERVICING, +REPAIR OR CORRECTION. + + 12. IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN WRITING +WILL ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MAY MODIFY AND/OR +REDISTRIBUTE THE PROGRAM AS PERMITTED ABOVE, BE LIABLE TO YOU FOR DAMAGES, +INCLUDING ANY GENERAL, SPECIAL, INCIDENTAL OR CONSEQUENTIAL DAMAGES ARISING +OUT OF THE USE OR INABILITY TO USE THE PROGRAM (INCLUDING BUT NOT LIMITED +TO LOSS OF DATA OR DATA BEING RENDERED INACCURATE OR LOSSES SUSTAINED BY +YOU OR THIRD PARTIES OR A FAILURE OF THE PROGRAM TO OPERATE WITH ANY OTHER +PROGRAMS), EVEN IF SUCH HOLDER OR OTHER PARTY HAS BEEN ADVISED OF THE +POSSIBILITY OF SUCH DAMAGES. + + END OF TERMS AND CONDITIONS + + How to Apply These Terms to Your New Programs + + If you develop a new program, and you want it to be of the greatest +possible use to the public, the best way to achieve this is to make it +free software which everyone can redistribute and change under these terms. + + To do so, attach the following notices to the program. It is safest +to attach them to the start of each source file to most effectively +convey the exclusion of warranty; and each file should have at least +the "copyright" line and a pointer to where the full notice is found. + + + Copyright (C) 19yy + + This program is free software; you can redistribute it and/or modify + it under the terms of the GNU General Public License as published by + the Free Software Foundation; either version 2 of the License, or + (at your option) any later version. + + This program is distributed in the hope that it will be useful, + but WITHOUT ANY WARRANTY; without even the implied warranty of + MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the + GNU General Public License for more details. + + You should have received a copy of the GNU General Public License + along with this program; if not, write to the Free Software + Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA + + +Also add information on how to contact you by electronic and paper mail. + +If the program is interactive, make it output a short notice like this +when it starts in an interactive mode: + + Gnomovision version 69, Copyright (C) 19yy name of author + Gnomovision comes with ABSOLUTELY NO WARRANTY; for details type `show w'. + This is free software, and you are welcome to redistribute it + under certain conditions; type `show c' for details. + +The hypothetical commands `show w' and `show c' should show the appropriate +parts of the General Public License. Of course, the commands you use may +be called something other than `show w' and `show c'; they could even be +mouse-clicks or menu items--whatever suits your program. + +You should also get your employer (if you work as a programmer) or your +school, if any, to sign a "copyright disclaimer" for the program, if +necessary. 
Here is a sample; alter the names: + + Yoyodyne, Inc., hereby disclaims all copyright interest in the program + `Gnomovision' (which makes passes at compilers) written by James Hacker. + + , 1 April 1989 + Ty Coon, President of Vice + +This General Public License does not permit incorporating your program into +proprietary programs. If your program is a subroutine library, you may +consider it more useful to permit linking proprietary applications with the +library. If this is what you want to do, use the GNU Library General +Public License instead of this License. diff --git a/doc/IPv6-info b/doc/IPv6-info new file mode 100644 index 0000000..4d5c02a --- /dev/null +++ b/doc/IPv6-info @@ -0,0 +1,47 @@ +Notes about IPv6 support in INN: + + This is $Revision: 5416 $, dated $Date: 2002-04-14 07:05:36 -0700 (Sun, 14 Apr 2002) $. + + This document contains some notes about the status of IPv6 support in + INN (see also the parts of the code marked FIXME): + + +Things that will break if you compile with --enable-ipv6: + + * innd can only be started via inndstart + * IP_OPTIONS are not cleared for any incoming connections to innd even + over IPv4 + + + +Some comments as of the completion of the original patch: + + Date: Wed, 13 Feb 2002 00:10:59 -0500 (EST) + From: Nathan Lutchansky + To: Jeffrey M. Vinocur + Subject: IPv6 patch notes + + The IPv6 patch is based directly on Marco d'Itri's IPv6 patch of + 2001-03-01 that was posted last year to the inn-workers list. The + patch applied fairly cleanly to a working copy from 2002-02-04, and + the resulting tree was used as the basis for my work. + + Modifications by Marco and myself were made so that if IPv6 support is + not explicitly enabled with the --enable-ipv6 flag to the configure + script, the old networking code will be used. Hopefully, nobody will + notice any problems with the default configuration, although some + changes have been made to data structures even when IPv6 is disabled. + + The original patch added IPv6 support to innd and inndstart, and the + auth_pass program. I have added support to nnrpd, innfeed, and the + ident auth program. There is no IPv6 support for imapfeed and other + auxiliary programs like the radius auth backend. + + Marco's patch made use of several preprocessor defines for + configuration but the defines were hand-coded, so I added the + corresponding tests the the configuration script. I make no + guarantees that the configure script will catch all possible + non-portable behavior; the IPv6 API standardization process has left + quite a wake of incompatible API implementations over the years. + -Nathan + diff --git a/doc/Makefile b/doc/Makefile new file mode 100644 index 0000000..faff9bf --- /dev/null +++ b/doc/Makefile @@ -0,0 +1,38 @@ +## $Id: Makefile 6017 2002-12-16 12:08:38Z alexk $ +## +## The only target that this Makefile need support is install. Everything +## else is a null target (and the top level Makefile shouldn't even attempt +## them in this directory). + +include ../Makefile.global + +top = .. + +TOPDOCS = CONTRIBUTORS HACKING INSTALL LICENSE NEWS README TODO + +DOCS = GPL compliance-nntp config-design config-semantics config-syntax \ + external-auth history hook-perl hook-python hook-tcl sample-control + +DIRS = man + +all: +clobber clean distclean: +tags ctags: +profiled: +depend: + +install: install-doc + @for D in $(DIRS) ; do \ + cd $$D && $(MAKE) install || exit 1 ; cd .. 
; \ + done + +install-doc: + for F in $(TOPDOCS) ; do \ + $(CP_RPUB) $(top)/$$F $D$(PATHDOC)/$$F ; \ + done + for F in $(DOCS) ; do \ + $(CP_RPUB) $$F $D$(PATHDOC)/$$F ; \ + done + if [ -r $(top)/README.snapshot ] ; then \ + $(CP_RPUB) $(top)/README.snapshot $D$(PATHDOC)/README.snapshot ; \ + fi diff --git a/doc/checklist b/doc/checklist new file mode 100644 index 0000000..144c0ee --- /dev/null +++ b/doc/checklist @@ -0,0 +1,210 @@ +Introduction + + $Id: checklist 5912 2002-12-03 05:31:11Z vinocur $ + + This is an installation checklist written by Rebecca Ore, intended to be + the beginning of a different presentation of the information in INSTALL, + since getting started with installing INN can be complex. Further + clarifications, updates, and expansion are welcome. + +Setup + + * Make sure there is a "news" user (and a "news" group) + + * Create a home directory for news (perhaps /usr/local/news/) and make + sure it (and subdirectories) are owned by "news", group "news". + + You want to be careful that things in that directory stay owned by + "news" -- but you can't just "chown -R news.news" after the install, + because you may have binaries that are SUID root. You can do the + build as any user, because "make install" will set the permissions + correctly. After that point, though, you may want to "su news" to + avoid creating any files as root. (For routine maintenance once INN + is working, you can generally be root.) + + * If necessary, add ~news/bin to the news user's path and ~news/man to + the news user's manpath in your shell config files. (You may want + to do this, especially the second part, on your regular account; the + manpages are very useful.) + + You can do this now or later, but you will certainly want the + manpages to help with configuring INN. + + For bash, try: + + PATH=~news/bin:$PATH + export PATH + MANPATH=~news/man:$MANPATH + export MANPATH + + or csh: + + setenv PATH ~news/bin:$PATH + setenv MANPATH ~news/man:$MANPATH + + although if you don't already have MANPATH set, the above may give + an error or override your defaults (making it so you can only read + the news manpages); if "echo $MANPATH" does not give some reasonable + path, you'll need to look up what the default is for your system + (such as /usr/man or /usr/share/man). + +Compile + + * Download the INN tarball and unpack. + + * Work out configure options ("./configure --help" for a list). If + you aren't working out of /usr/local/news, or want to put some files + on a different partition, you can set the directories now (or later + in inn.conf if you change your mind). + + You probably want "--with-perl". If you're not using NetBSD with + cycbuffs or OpenBSD, perhaps "--with-tagged-hash". You might want + to compile in SSL and Berkeley DB, if your system supports them. + + ./configure --with-perl ... + make + + su + make install + + (If you do the last step as root, all of the ownerships and + permissions will be correct.) + +Configure + + * Find INSTALL and open a separate window for it. A printout is + probably a good idea -- it's long but very helpful. Any time the + instructions below ask you to make a decision, you can probably find + help in INSTALL. + + * Now it's time to work on the files in ~news/etc/. Start with + inn.conf; you must fill in the default moderators address, your + fully qualified domain names and path. Fill in all the blanks. + Change the file descriptor limits to something like 500. 
+
+ * If using cycbuffs (the CNFS storage method), open cycbuff.conf in
+ one window and a shell in another to create the cycbuff as described
+ in INSTALL. As you create them, record in cycbuff.conf the paths
+ and sizes. Save paths and sizes in a separate text file on another
+ machine in case you ever blow away the wrong file.
+
+ Name the metacycbuff, then configure storage.conf.
+
+ * In storage.conf, be sure that all sizes of articles can be
+ accommodated. If you want to throw away large articles, do it
+ explicitly by using the "trash" storage method.
+
+ * The default options in expire.ctl work fine if you have cycbuffs; if
+ not, configure to suit.
+
+ * Check over moderators and control.ctl.
+
+ * Run ~news/bin/inncheck and fix anything noted.
+
+ Inncheck gives a rough check on the appropriateness of the
+ configuration files as you go. (It's the equivalent of "perl -cw
+ yourfile.pl" for perl scripts.)
+
+ Note that inncheck is very conservative about permissions; there's
+ no reason most of the config files can't be world-readable if you
+ prefer that.
+
+ * Import an active file (~news/db/active) and run inncheck again.
+ Change where noted (there's a gotcha in the ISC's active list: the fields
+ 000000 000000, whatever the number of zeros, should be 0000000 00000001).
+
+ * Create empty initial db files. Be sure these end up owned by news.
+
+ cd ~news/db
+
+ touch newsgroups
+ touch active.times
+
+ touch history
+ ~news/bin/makedbz -i
+ mv history.n.hash history.hash
+ mv history.n.index history.index
+ mv history.n.dir history.dir
+
+ chmod 644 *
+
+ * Create the cron jobs and make the changes to your system's
+ syslog.conf as noted in INSTALL. Also create the cron job for
+ nntpsend if you've chosen that over innfeed.
+
+ Create the log files.
+
+ * For the time being, we can see if everything initially works without
+ worrying about feeds or reader access.
+
+Run
+
+ * Start inn by running ~news/bin/rc.news *as the news user*.
+
+ Check ~news/log/news.notice to see if everything went well; also use
+ "ps" to see if innd is running.
+
+ "telnet localhost 119" and you should see either a welcome banner or
+ a "no permission to talk" message. If not, investigate.
+
+ * "man ctlinnd" now; you'll use "ctlinnd reload" as you complete your
+ configuration.
+
+Feeds
+
+ All of this can be done while INN is running.
+
+ * To get your incoming feeds working, edit incoming.conf. When done,
+ "ctlinnd reload incoming.conf reason" (where "reason" is some text
+ that will show up in the logs, anything will do).
+
+ * To get your outgoing feeds working, decide whether to use innfeed or
+ nntpsend. Edit newsfeeds and either innfeed.conf or nntpsend.ctl.
+
+ In newsfeeds, if using innfeed, use the option which doesn't require
+ you to do a separate innfeed configuration unless you know more than
+ I do.
+
+ Then "ctlinnd reload newsfeeds reason".
+
+ * In readers.conf, remember that auth and access can be separated.
+
+ Begin with auth. Your auth for password users could look like this:
+
+ auth "foreignokay" {
+ auth: "ckpasswd -d ~news/db/newsusers"
+ default: ""
+ }
+
+ There is a perl script in the ckpasswd man page if you want to do
+ authentications by password and have the appropriate libraries.
+ Copy it to ~news/bin, name the file something like makepasswd.pl and
+ change the internal paths to whatever you're using and wherever
+ you're putting the newsusers database. The standard Apache
+ "htpasswd" tool also works just fine to create INN password files.
+
+ Follow with the access stanzas. 
Something for people with + passwords: + + access "generalpeople" { + users: "*" + newsgroups: "*,!junk,!control,!control.*" + } + + And then something like one of the following two, depending on + whether unauthenticated users get any access: + + access "restrictive" { + users: "" + newsgroups: "!*" + } + + access "readonly" { + users: "" + read: "local.*" + post: "!*" + } + + You don't need to reload anything after modifying readers.conf; + every time an nnrpd launches it reads its configuration from disk. + diff --git a/doc/compliance-nntp b/doc/compliance-nntp new file mode 100644 index 0000000..403e104 --- /dev/null +++ b/doc/compliance-nntp @@ -0,0 +1,320 @@ +$Id: compliance-nntp 6817 2004-05-18 09:25:55Z rra $ + +The following are outstanding issues regarding INN's compliance with the +NNTP standard. The reference documents used in this analysis are the +current NNTP IETF Working Group draft (draft-ietf-nntpext-base-15.txt at +the time of the last check of this audit) or RFC 2980, not RFC 977 (which +is woefully out of date). + +This file documents only compliance issues with the latest version of the +standard NNTP protocol. It does not cover INN's private extensions or +INN's implementation of widely available extensions not documented in the +NNTP standard. Specifically, it does not cover the extensions listed in +RFC 2980. + +------------------------------ + + Summary: innd doesn't require whitespace between commands and arguments + Standard: draft-ietf-nntpext-base-15.txt, section 4 + Version: 1.0 to CURRENT 2002-12-26 +Reference: innd/nc.c NCproc() and command handlers + Severity: Accepts invalid input + +The standard states: + + Keywords and arguments MUST be each separated by one or more US-ASCII + SPACE or US-ASCII TAB characters. + +This is not checked in NCproc or in the individual command handlers in +innd. Commands followed immediately by their argument will be accepted by +innd. For example: + + stat<9k6vjk.hg0@example.com> + 223 0 @0301543531000000000000079AAE0000006A@ + +Impact: Should one command be a prefix of another, innd could dispatch +the handling of the command to the wrong handler, treating the remainder +of the command verb as an argument. This laxness also encourages sloppy +client code. Internally, the lack of argument parsing in NCproc also +results in code duplication in all of the command handlers. + +Suggested fix: Lift the argument parsing code into a function called from +NCproc, breaking the command line into a vector of command and arguments. +This will work for all commands implemented by innd and will simplify the +implementation of command handlers, as well as fixing this problem. This +is what nnrpd already does. + +Impact of fix: It's possible that some serving code is relying on this +behavior and not sending spaces after commands. Fixing this problem would +break interoperability with that code. + +------------------------------ + + Summary: INN doesn't check argument length + Standard: draft-ietf-nntpext-base-15.txt, section 4 + Version: 1.0 to CURRENT 2002-12-26 +Reference: innd/nc.c and nnrpd/nnrpd.c + Severity: Accepts invalid input + +The standard says: + + Arguments MUST NOT exceed 497 octets. + +This is not checked by either innd or nnrpd, although both do check that +the command itself does not exceed 512 octets. + +Impact: Small. May accept invalid commands in extremely rare edge cases. 
+ +Suggested fix: Probably not worth fixing separately, although if standard +command parsing code is written to handle both innd and nnrpd, it wouldn't +hurt to check this along with everything else. + +------------------------------ + + Summary: Reply codes other than x9x used for private extensions + Standard: draft-ietf-nntpext-base-15.txt, section 4.1 + Version: 1.0 to CURRENT 2002-12-26 +Reference: include/nntp.h + Severity: Violates SHOULD + +The standard says: + + Response codes not specified in this standard MAY be used for any + installation-specific additional commands also not specified. These + SHOULD be chosen to fit the pattern of x9x specified above. + +INN uses quite a few response codes that do not fit this pattern for +various extensions. Some of these will likely later be standardized with +the response codes that INN uses (the streaming commands, the +authentication reply codes, and possibly the STARTTLS reply codes), but +the rest (XGTITLE, MODE CANCEL, and XBATCH) should have used response +codes in the x9x range. + +Impact: Additional ambiguity over the meaning of reply codes, as those +reply codes could later be standardized as the reply codes for other +commands. + +Suggested fix: For XGTITLE and probably XBATCH, there is no way to fix +this now. Changing the reply codes would break all existing +implementations. It may still be possible to change the reply codes for +MODE CANCEL (which should probably be MODE XCANCEL), but it may not be +worth the effort. + +------------------------------ + + Summary: innd may return 480 instead of 500 for unrecognized commands + Standard: draft-ietf-nntpext-base-15.txt, section 4.1.1 + Version: 1.0 to CURRENT 2002-12-26 +Reference: innd/nc.c NCauthinfo() + Severity: Violates MUST + +The standard says: + + If the command is not recognized, or it is an optional command or + extension that is not implemented by the server, the response code 500 + MUST be returned. + +In innd, if the connection is determined to need authentication, all +incoming commands other than MODE are handed off to NCauthinfo() rather +than their normal command handlers. NCauthinfo() responds with a 480 +reply code to anything other than AUTHINFO USER, AUTHINFO PASS, or QUIT. + +Impact: Unlikely to cause problems in practice, but may confuse clients +that don't understand the rarely used innd-level authentication +mechanisms. + +Suggested fix: Restructure the command table so that each command also +has a flag indicating whether it requires authentication for peers that +are required to authenticate. (Some commands, like HELP and MODE READER, +should be allowed without authentication.) Then eliminate the special +casing of the state CSgetauth (it may be better to store whether the peer +has authenticated in the channel rather than in the channel state) and the +special handling in NCauthinfo. This should also simplify the code. + +------------------------------ + + Summary: innd always sends 200 for an accepted connection + Standard: draft-ietf-nntpext-base-15.txt, section 7.1 + Version: 1.0 to CURRENT 2002-12-26 +Reference: innd/nc.c NCsetup() and rc.c RCreader() + Severity: Violates MUST + +The standard says: + + If the server will accept further commands from the client including + POST, the server MUST present a 200 greeting code. If the server will + accept further commands from the client, but it is not authorized to + post articles using the POST command, the server MUST present a 201 + greeting code. 
+
+The implication is that the greeting code from innd (which doesn't
+implement POST and therefore is never going to allow it) should always be
+201, at least for the case where innd never spawns nnrpd. In the case
+where innd spawns nnrpd, it's unclear what the greeting code should be.
+
+The current implementation never sends 201 unless one knows for certain
+that the connection will never be allowed to issue a POST command, which
+means that innd always sends 200.
+
+It's unknown whether there is any transit news software that would have
+difficulties with a 201 greeting. Both innxmit and innfeed handle it
+correctly in CURRENT 2001-07-04 and NNTPconnect() handles it correctly in
+INN 1.0, so it seems likely that if any such software exists, it's rare.
+
+Impact: It's almost certain that the current innd behavior isn't hurting
+anything. Even a confused client that thought 200 meant that it could
+send a POST command would then try and be rejected with no damage done.
+
+Suggested fix: The purpose of this return code is to give a hint to a
+reading client indicating whether it should even attempt POST, since
+attempting it may involve a lot of work by the user only to have the post
+rejected. It's only relevant to reading connections, not to transit
+connections.
+
+It's known that some clients, upon seeing a 201 response, will never
+attempt POST, even if MODE READER then returns 200. Therefore innd, when
+handing off connections to nnrpd, must return 200 to not confuse a client
+that will later send MODE READER. For connections where innd won't do
+that handoff, it makes sense to always send 201 if all transit feeds can
+handle that and won't interpret it as unwillingness to accept IHAVE or
+streaming feeds.
+
+RCreader() should therefore be modified to send 201 if noreader is set,
+and otherwise send 200.
+
+Impact of fix: Any feeding software that didn't consider 201 to be a
+valid greeting would be unable to feed a fixed innd unless that innd also
+allowed reading connections.
+
+------------------------------
+
+ Summary: innd doesn't support LIST EXTENSIONS
+ Standard: draft-ietf-nntpext-base-15.txt, section 8.1
+ Version: 1.0 to CURRENT 2002-12-26
+Reference: innd/nc.c NClist()
+ Severity: Not a violation
+
+Support for LIST EXTENSIONS is optional, and innd's current behavior
+(returning a 500 response) is permitted by the standard, but it means that
+innd cannot advertise any of the extensions that it supports. Since this
+will eventually include streaming, support should be added.
+
+Suggested fix: Add support for LIST EXTENSIONS to NClist() as soon as
+innd supports a registered extension or as soon as there is documentation
+for INN's extensions that specify an extension name (beginning with X).
+
+------------------------------
+
+ Summary: nnrpd doesn't return 423 errors when there is no overview info
+ Standard: draft-ietf-nntpext-base-17.txt, section 10.5.1.2
+ Version: 1.4 to CURRENT 2003-05-04
+Reference: nnrpd/article.c CMDxover()
+ Severity: Violates a MUST
+
+The standard says:
+
+ If there are no articles in the range specified, a 423 response MUST be
+ returned.
+
+nnrpd (from the beginning of the XOVER command) has always returned a 224
+response with an empty multiline response instead. INN doesn't support
+OVER yet so this isn't actually a bug in INN, but eventually the XOVER
+implementation will also be used for OVER.
+
+Impact: Less information is communicated to the client about why there 
+An error response indicating there are no valid articles in that range is
+possibly more informative.
+
+Suggested fix: Don't print out the initial 224 message until at least one
+overview entry has been found, so that CMDxover() can print a 420 response
+instead if no overview records are found.
+
+Impact of fix: May confuse some clients that don't expect to get 420
+errors back from overview queries.  It may be necessary to do something
+different for OVER (where clients should expect this behavior since OVER
+is a new command) than for XOVER (where clients may be relying on the
+existing behavior).
+
+------------------------------
+
+   Summary: HDR can return message IDs rather than article numbers
+  Standard: draft-ietf-nntpext-base-17.txt, section 10.6.1.2
+   Version: 1.0 to CURRENT 2003-05-04
+ Reference: nnrpd/article.c CMDpat()
+  Severity: Violates a protocol description
+
+The standard says:
+
+    The line consists of the article number, a US-ASCII space, and then
+    the contents of the header (without the header name or the colon and
+    space that follow it) or metadata item.  If the article is specified
+    by message-id rather than by article range, the article number is
+    given as "0".
+
+nnrpd instead returns the message ID as the first word of the line when
+HDR is given a message ID argument.
+
+Impact: A client may not be able to parse the output of HDR correctly,
+since the first word is not a number.
+
+Suggested fix: Change the code to return 0 as the first word instead of
+the message ID, per the standard.
+
+Impact of fix: Clients that are expecting the message ID may be
+confused.
+
+------------------------------
+
+   Summary: innd improperly caches DNS returns
+  Standard: draft-ietf-nntpext-base-15.txt, section 14.4
+   Version: 1.0 to CURRENT 2002-12-26
+ Reference: innd/rc.c RCreadfile() and elsewhere
+  Severity: Violates a MUST
+
+The standard says:
+
+    If NNTP clients or servers cache the results of host name lookups in
+    order to achieve a performance improvement, they MUST observe the TTL
+    information reported by DNS.
+
+innd caches DNS lookups when reading incoming.conf and doesn't refresh its
+knowledge of DNS except when incoming.conf is reloaded.
+
+Impact: An explicit reload is required whenever the IP address of any
+peer changes, and in the presence of network renumbering innd is
+vulnerable to spoofing if DNS is the only authentication mechanism used.
+
+Suggested fix: This is hard to fix without unacceptable performance
+impact.  The only good fix is to either fork a separate helper process to
+do DNS lookups (since gethostbyname may block for an essentially
+arbitrary length of time) or to use the direct resolver library so that
+one can get access to a file descriptor and throw it into the select
+loop.  Either way, this requires keeping a DNS file descriptor in the
+main select loop and updating knowledge of DNS periodically, which is a
+substantial amount of additional complexity.
+
+------------------------------
+
+   Summary: innd doesn't diagnose repeated AUTHINFO USER commands
+  Standard: RFC 2980, section 3.1.1
+   Version: 1.0 to CURRENT 2002-12-26
+ Reference: innd/nc.c NCauthinfo()
+  Severity: Violates a protocol description
+
+RFC 2980 says:
+
+    The 482 code will also be returned when the AUTHINFO commands are not
+    entered in the correct sequence (like two AUTHINFO USERs in a row, or
+    AUTHINFO PASS preceding AUTHINFO USER).
+
+Since innd doesn't care about the username, it ignores AUTHINFO USER and
+just always returns a 381 response.
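+
+For example (the response texts below are illustrative only), a client
+sending two AUTHINFO USER commands in a row should, per RFC 2980, receive
+a 482 response to the second one; innd instead answers 381 both times:
+
+    AUTHINFO USER alice
+    381 PASS required
+    AUTHINFO USER alice
+    381 PASS required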
+ +Impact: Probably none. + +Suggested fix: A long-term solution would be to add real authentication +to innd, in which case it would start caring about the authenticated +identity (and perhaps use that identity to map to an incoming.conf entry). +It's unclear if this would be worthwhile. Failing that, innd would need +to keep internal state to know whether AUTHINFO USER had already been +sent. diff --git a/doc/config-design b/doc/config-design new file mode 100644 index 0000000..eda5bf3 --- /dev/null +++ b/doc/config-design @@ -0,0 +1,121 @@ +$Id: config-design 4805 2001-06-21 10:52:27Z rra $ + +This file is documentation of the design principles that went into INN's +configuration file syntax, and some rationale for why those principles +were chosen. + + 1. All configuration files used by INN should have the same syntax. + This was the root reason why the project was taken on in the first + place; INN developed a proliferation of configuration files, all of + which had a slightly (or greatly) different syntax, forcing the + administrator to learn several different syntaxes and resulting in a + proliferation of parsers, all with their own little quirks. + + 2. Adding a new configuration file or a new set of configuration options + should not require writing a single line of code for syntax parsing. + Code that analyzes the semantics of the configuration will of course + be necessary, but absolutely no additional code to read files, parse + files, build configuration trees, or the like should be required. + Ideally, INN should have a single configuration parser that + everything uses. + + 3. The syntax should look basically like the syntax of readers.conf, + incoming.conf, and innfeed.conf in INN 2.3. After extensive + discussion on the inn-workers mailing list, this seemed to be the + most generally popular syntax of the ones already used in INN, and + inventing a completely new syntax didn't appear likely to have gains + outweighing the effort involved. This syntax seemed sufficiently + general to represent all of the configuration information that INN + needed. + + 4. The parsing layer should *not* attempt to do semantic analysis of the + configuration; it should concern itself solely with syntax (or very + low-level semantics that are standard across all conceivable INN + configuration files). In particular, the parsing layer should not + know what parameters are valid, what groups are permitted, what types + the values for parameters should have, or what default values + parameters have. + + This principle requires some additional explanation, since it is very + tempting to not do things this way. However, the more semantic + information the parser is aware of, the less general the parser is, + and it's very easy to paint oneself into a corner. In particular, + it's *not* a valid assumption that all clients of the parsing code + will want to reduce the configuration to a bunch of structs; this + happens to be true for most clients of inn.conf, for example, but + inndstart doesn't want the code needed to reduce everything to a + struct and set default values to necessarily be executed in a + security-critical context. + + Additionally, making the parser know more semantic information either + complicates (significantly) the parser interface or means that the + parser has to be modified when the semantics change. The latter is + not acceptable, and the parser interface should be as straightforward + as possible (to encourage all parts of INN to use it). + + 5. 
The result of a parse of the configuration file may be represented as + a tree of dictionaries, where each dictionary corresponds to a group + and each key corresponds to a parameter setting. (Note that this does + not assume that the underlying data structure is a hash table, just + that it has dictionary semantics, namely a collection of key/value + pairs with the keys presumed unique.) + + 6. Parameter values inherit via group nesting. In other words, if a + group is nested inside another group, all parameters defined in the + enclosing group are inherited by the nested group unless they're + explicitly overriden within the nested group. (This point and point + 5 are to some degree just corollaries of point 3.) + + 7. The parsing library must permit writing as well as reading. It must + be possible for a program to read in a configuration file, modify + parameters, add and delete groups, and otherwise change the + configuration, and then write back out to disk a configuration file + that preserves those changes and still remains as faithful to the + original (possibly human-written) configuration file as possible. + (Ideally, this would extend to preserving comments, but that may be + too difficult to do and therefore isn't required.) + + 8. The parser must not limit the configuration arbitrarily. In + particular, unlimited length strings (within available memory) must + be supported for string values, and if allowable line length is + limited, line continuation must be supported everywhere that there's + any reasonable expectation that it might be necessary. One common + configuration parameter is a list of hosts or host wildmats that can + be almost arbitrarily long, and the syntax and parser must support + that. + + 9. The parser should be reasonably efficient, enough so as to not cause + an annoying wait for command-line tools like sm and grephistory to + start. In general, though, efficiency in either time or memory is + not as high of a priority as readable, straightforward code; it's + safe to assume that configuration parsing is only done on startup and + at rare intervals and is not on any critical speed paths. + +10. Error reporting is a must. It must be possible to clearly report + errors in the configuration files, including at minimum the file name + and line number where the error occurred. + +11. The configuration parser should not trust its input, syntax-wise. It + must not segfault, infinitely loop, or otherwise explode on malformed + or broken input. And, as a related point, it's better to be + aggressively picky about syntax than to be lax and attempt to accept + minor violations. The intended configuration syntax is simple and + unambiguous, so it should be unnecessary to accept violations. + +12. It must be possible to do comprehensive semantic checks of a + configuration file, including verifying that all provided parameters + are known ones, all parameter values have the correct type, group + types that are not expected to be repeated are not, and only expected + group types are used. This must *not* be done by the parser, but the + parser must provide sufficient hooks that the client program can do + this if it chooses. + +13. The parser must be re-entrant and thread-safe. + +14. The grammar shouldn't require any lookahead to parse. This is in + order to keep the parser extremely simple and therefore maintainable. 
+    (It's worth noting that this design principle leads to the
+    requirement that parameter keys end in a colon; the presence of the
+    colon allows parameter keys to be distinguished from other syntactic
+    elements allowed in the same scope, like the beginning of a nested
+    group.)
diff --git a/doc/config-semantics b/doc/config-semantics
new file mode 100644
index 0000000..49d601e
--- /dev/null
+++ b/doc/config-semantics
@@ -0,0 +1,79 @@
+$Id: config-semantics 4792 2001-06-21 08:59:39Z rra $
+
+Groups in a configuration file have a well-defined order, namely the order
+in which the groups would be encountered in a depth-first traversal of the
+parse tree.
+
+The supported operations on a configuration file parse tree for reading
+are:
+
+ * Search.  Find the first group of a given type in a given tree.  This is
+   done via depth-first search.
+
+ * Next.  Find the next group of a given type, starting from some group.
+   This is done via depth-first search.
+
+ * Query.  Look up the value of a given parameter in a given group (with
+   inheritance).  Note that the expected type of the parameter value must
+   be provided by the caller; the parsing library doesn't know the types
+   of parameters.
+
+ * Prune.  Limit one's view of the configuration file to only a given set
+   of group types and everything underneath them; any other group types
+   encountered won't be parsed (and therefore everything under them, even
+   groups of the wanted type, won't be seen).
+
+Therefore, the *only* significance of nested group structure is parameter
+inheritance and pruning.  In the absence of pruning, it would always be
+possible, by duplicating parameter settings that were inherited and laying
+out the groups in depth-first traversal order, to transform any
+configuration file into an entirely equivalent one that contains no nested
+groups.  This isn't true in the presence of pruning, but pruning is
+intended to be used primarily for performance (ignoring the parts of the
+configuration that don't apply to a given parsing library client).
+
+The expected way for clients to use the parsing library is to follow one
+of these two access patterns:
+
+ * Search for a particular configuration group and then query it for a set
+   of parameters (either one by one as they're used, or all at once to
+   collapse the parameters into a struct for faster access later).  This
+   is expected to be the common pattern for finding and looking up
+   settings for a particular program.  There will generally only be a
+   single group per group type for groups of this sort; it doesn't make
+   sense to have multiple groups setting general configuration options for
+   a program and have to iterate through them and merge them in some
+   fashion.
+
+ * Iterate through all groups of a given type, building a list of them (or
+   of the data they contain).  This is the model used by, for example,
+   storage classes; each storage class has a set of parameters, and the
+   storage subsystem needs to know about the full list of classes.
+
+Note that neither of these operations directly reveals the tree structure;
+the tree structure is intended for the convenience of the user in setting
+defaults for various parameters so that they don't have to be repeated in
+each group, and to allow some top-level pruning.  It's not intended to be
+semantically significant other than that.
+
+Here are some suggested general conventions:
+
+ * General options for a particular program should be separated out into
+   their own group.
+   For example, a group "innwatch" in inn.conf could set the various
+   options used only by innwatch.  Note that pruning is inclusive rather
+   than exclusive, so programs should ideally only need to care about a
+   short list of groups.
+
+ * Groups used only for grouping and setting default parameters, ones that
+   won't be searched for explicitly by any program, should use the type
+   "group".  This can be used uniformly in all configuration files so that
+   whenever a user sees a group of type "group", they know that it's just
+   syntactic convenience to avoid having to repeat a bunch of parameter
+   settings and isn't otherwise significant.
+
+ * Groups that are searched for or iterated through shouldn't be nested;
+   for example, if a configuration file defines a list of access groups,
+   nesting one access group inside another is discouraged (in favor of
+   putting both groups inside an enclosing group of type "group" that sets
+   the parameters they have in common).  This is to cut down on user
+   confusion, since otherwise the nesting appears to be significant.
diff --git a/doc/config-syntax b/doc/config-syntax
new file mode 100644
index 0000000..4863d5d
--- /dev/null
+++ b/doc/config-syntax
@@ -0,0 +1,242 @@
+$Id: config-syntax 5843 2002-11-19 00:08:18Z rra $
+
+This file documents the standardized syntax for INN configuration files.
+This is the syntax that the parsing code in libinn will understand and the
+syntax towards which all configuration files should move.
+
+The basic structure of a configuration file is a tree of groups.  Each
+group has a type and an optional tag, and may contain zero or more
+parameter settings, an association of a name with a value.  All parameter
+names and group types are simple case-sensitive strings composed of
+printable ASCII characters and not containing whitespace or any of the
+characters "\:;{}[]<>" or the double-quote.  A group may contain another
+group (and in fact the top level of the file can be thought of as a
+top-level group that isn't allowed to contain parameter settings).
+
+Supported parameter values are booleans, integers, real numbers, strings,
+and lists of strings.
+
+The basic syntax looks like:
+
+    group-type tag {
+        parameter: value
+        parameter: [ string string ... ]
+        # ...
+
+        group-type tag {
+            # ...
+        }
+    }
+
+Tags are strings, with the same syntax as a string value for a parameter;
+they are optional and may be omitted.  A tag can be thought of as the name
+of a particular group, whereas the type says what that group is intended
+to specify, and there may be many groups with the same type.
+
+The second parameter example above has as its value a list.  The square
+brackets are part of the syntax of the configuration file; lists are
+enclosed in square brackets and the elements are space-separated.
+
+As seen above, groups may be nested.
+
+Multiple occurrences of the same parameter in the parameter section of a
+group are an error.  In practice, the second parameter will take
+precedence, but an error will be reported when such a configuration file
+is parsed.
+
+Parameter values inherit.  In other words, the structure:
+
+    first {
+        first-parameter: 1
+        second {
+            second-parameter: 1
+            third { third-parameter: 1 }
+        }
+
+        another "tag" { }
+    }
+
+is parsed into a tree that looks like:
+
+    +-------+   +--------+   +-------+
+    | first |-+-| second |---| third |
+    +-------+ | +--------+   +-------+
+              |
+              | +---------+
+              +-| another |
+                +---------+
+
+where each box is a group.
The type of the group is given in the box; +none of these groups have tags except for the only group of type +"another", which has the tag "tag". The group of type "third" has three +parameters set, namely "third-parameter" (set in the group itself), +"second-parameter" (inherited from the group of type "second"), and +"first-parameter" (inherited from "first" by "second" and then from +"second" by "third"). + +The practical meaning of this is that enclosing groups can be used to set +default values for a set of subgroups. For example, consider the +following configuration that defines three peers of a news server and +newsgroups they're allowed to send: + + peer news1.example.com { newsgroups: * } + peer news2.example.com { newsgroups: * } + peer news3.example.com { newsgroups: * } + +This could instead be written as: + + group { + newsgroups: * + + peer news1.example.com { } + peer news2.example.com { } + peer news3.example.com { } + } + +or as: + + peer news1.example.com { + newsgroups: * + + peer news2.example.com { } + peer news3.example.com { } + } + +and for a client program that only cares about the defined list of peers, +these three structures would be entirely equivalent; all questions about +what parameters are defined in the peer groups would have identical +answers either way this configuration was written. + +Note that the second form above is preferred as a matter of style to the +third, since otherwise it's tempting to derive some significance from the +nesting structure of the peer groups. Also note that in the second +example above, the enclosing group *must* have a type other than "peer"; +to see why, consider the program that asks the configuration parser for a +list of all defined peer groups and uses the resulting list to build some +internal data structures. If the enclosing group in the second example +above had been of type peer, there would be four peer groups instead of +three and one of them wouldn't have a tag, probably provoking an error +message. + +Boolean values may be given as yes, true, or on, or as no, false, or off. +Integers must be between -2,147,483,648 and +2,147,483,647 inclusive (the +same as the minimums for a C99 signed long). Floating point numbers must +be between 0 and 1e37 in absolute magnitude (the same as the minimums for +a C99 double) and can safely expect eight digits of precision. + +Strings are optionally enclosed in double quotes, and must be quoted if +they contain any whitespace, double-quote, or any characters in the set +"\:;[]{}<>". Escape sequences in strings (sequences beginning with \) are +parsed the same as they are in C. Strings can be continued on multiple +lines by ending each line in a backslash, and the newline is not +considered part of such a continued string (to embed a literal newline in +a string, use \n). + +Lists of strings are delimited by [] and consist of whitespace-separated +strings, which must follow the same quoting rules as all other strings. +Group tags are also strings and follow the same quoting rules. + +There are two more bits of syntax. Normally, parameters must be separated +by newlines, but for convenience it's possible to put multiple parameters +on the same line separated by semicolons: + + parameter: value; parameter: value + +Finally, the body of a group may be defined in a separate file. 
To do +this, rather than writing the body of the group enclosed in {}, instead +give the file name in <>: + + group tag + +(The filename is also a string and may be double-quoted if necessary, but +since file names rarely contain any of the excluded characters it's rarely +necessary.) + +Here is the (almost) complete ABNF for the configuration file syntax. +The syntax is per RFC 2234. + +First the basic syntax elements and possible parameter values: + + newline = %d13 / %d10 / %d13.10 + ; Any of CR, LF, or CRLF are interpreted + ; as a newline. + + comment = *WSP "#" *(WSP / VCHAR / %x8A-FF) newline + + WHITE = WSP / newline [comment] + + boolean = "yes" / "on" / "true" / "no" / "off" / "false" + + integer = ["-"] 1*DIGIT + + real-number = ["-"] 1*DIGIT "." 1*DIGIT [ "e" ["-"] 1*DIGIT ] + + non-special = %x21 / %x23-39 / %x3D / %x3F-5A / %x5E-7A + / %x7C / %x7E / %x8A-FF + ; All VCHAR except "\:;<>[]{} + + quoted-string = DQUOTE 1*(WSP / VCHAR / %x8A-FF) DQUOTE + ; DQUOTE within the quoted string must be + ; written as 0x5C.22 (\"), and backslash + ; sequences are interpreted as in C + ; strings. + + string = 1*non-special / quoted-string + + list-body = string *( 1*WHITE string ) + + list = "[" *WHITE [ list-body ] *WHITE "]" + +Now the general structure: + + parameter-name = 1*non-special + + parameter-value = boolean / integer / real-number / string / list + + parameter = parameter-name ":" 1*WSP parameter-value + + parameter-list = parameter [ *WHITE (";" / newline) *WHITE parameter ] + + group-list = group *( *WHITE group ) + + group-body = parameter-list [ *WHITE newline *WHITE group-list ] + / group-list + + group-file = string + + group-contents = "{" *WHITE [ group-body ] *WHITE "}" + / "<" group-file ">" + + group-type = 1*non-special + + group-tag = string + + group-name = group-type [ 1*WHITE group-tag ] + + group = group-name 1*WHITE group-contents + + file = *WHITE *( group *WHITE ) + +One implication of this grammar is that any line outside a quoted string +that begins with "#", optionally preceded by whitespace, is regarded as a +comment and discarded. The line must begin with "#" (and optional +whitespace); comments at the end of lines aren't permitted. "#" has no +special significance in quoted strings, even if it's at the beginning of a +line. Note that comments cannot be continued to the next line in any way; +each comment line must begin with "#". + +It's unclear the best thing to do with high-bit characters (both literal +characters with value > 0x7F in a configuration file and characters with +such values created in quoted strings with \, \x, \u, or \U). In +the long term, INN should move towards assuming UTF-8 everywhere, as this +is the direction that all of the news standards are heading, but in the +interim various non-Unicode character sets are in widespread use and there +must be some way of encoding those values in INN configuration files (so +that things like the default Organization header value can be set +appropriately). + +As a compromise, the configuration parser will pass unaltered any literal +characters with value > 0x7F to the calling application, and \ and +\x escapes will generate eight-bit characters in the strings (and +therefore cannot be used to generate UTF-8 strings containing code points +greater than U+007F). \u and \U, in contrast, will generate characters +encoded in UTF-8. 
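+
+To tie the preceding pieces together, here is a short, informal example
+of the syntax; the group types and parameter names are invented for the
+example and carry no meaning to INN itself:
+
+    # Comments start with "#" at the beginning of the line.
+    example "outer tag" {
+        flag:     true
+        count:    42
+        greeting: "a quoted string; quotes are needed because of the ;"
+        hosts:    [ news1.example.com news2.example.com ]
+        first: 1; second: 2
+
+        nested inner { timeout: 10 }
+        external <inner.conf>
+    }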
diff --git a/doc/external-auth b/doc/external-auth
new file mode 100644
index 0000000..28fc847
--- /dev/null
+++ b/doc/external-auth
@@ -0,0 +1,111 @@
+NNRPD External Authentication Support
+
+    This is $Revision: 7880 $ dated $Date: 2005-03-17 12:42:46 +0100 (Thu,
+    17 Mar 2005) $.
+
+    A fundamental part of the readers.conf(5)-based authorization
+    mechanism is the interface to external authenticator and resolver
+    programs.  This interface is documented below.
+
+    INN ships with a number of such programs (all written in C, although
+    any language can be used).  Code for them can be found in authprogs/
+    of the source tree; the authenticators are installed to
+    *pathbin*/auth/passwd, and the resolvers are installed to
+    *pathbin*/auth/resolv.
+
+Reading information from nnrpd
+
+    When nnrpd spawns an external auth program, it passes information on
+    standard input as a sequence of "key: value" lines.  Each line ends
+    with CRLF, and a line consisting of only "." indicates the end of the
+    input.  The order of the fields is not significant.  Additional fields
+    not mentioned below may be included; this should not be cause for
+    alarm.
+
+    (For robustness as well as ease of debugging, it is probably wise to
+    accept line endings consisting only of LF, and to treat EOF as
+    indicating the end of the input even if "." has not been received.)
+
+    Code which reads information in the format discussed below and parses
+    it into convenient structures is available to authenticators and
+    resolvers written in C; see libauth(3) for details.  Use of the
+    libauth library will make these programs substantially easier to write
+    and more robust.
+
+  For authenticators
+
+    When nnrpd calls an authenticator, the lines it passes are
+
+        ClientAuthname: user\r\n
+        ClientPassword: pass\r\n
+
+    where *user* and *pass* are the username and password provided by the
+    client (e.g. using AUTHINFO).  In addition, nnrpd generally also
+    passes the fields mentioned as intended for resolvers; in rare
+    instances this data may be useful for authenticators.
+
+  For resolvers
+
+    When nnrpd calls a resolver, the lines it passes are
+
+        ClientHost: hostname\r\n
+        ClientIP: IP-address\r\n
+        ClientPort: port\r\n
+        LocalIP: IP-address\r\n
+        LocalPort: port\r\n
+        .\r\n
+
+    where *hostname* indicates a string representing the hostname if
+    available, *IP-address* is a numeric IP address (dotted-quad for IPv4,
+    equivalent for IPv6 if appropriate), and *port* is a numeric port
+    number.  (The *LocalIP* parameter may be useful for determining which
+    interface was used for the incoming connection.)
+
+    If information is not available, nnrpd will omit the corresponding
+    fields.  In particular, this applies to the unusual situation of nnrpd
+    not being connected to a socket; TCP-related information is not
+    available for standard input.
+
+Returning information to nnrpd
+
+  Exit status and signals
+
+    The external auth program must exit with a status of 0 to indicate
+    success; any other exit status indicates failure.  (The non-zero exit
+    value will be logged.)
+
+    If the program dies due to catching a signal (for example, a
+    segmentation fault occurs), this will be logged and treated as a
+    failure.
+
+  Returning a username and domain
+
+    If the program succeeds, it must return a username string (optionally
+    with a domain appended) by writing to standard output.  The line it
+    should write is exactly:
+
+        user:username\r\n
+
+    where *username* is the string that nnrpd should use in matching
+    readers.conf access blocks.
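+
+    As a rough, informal sketch of this interface (not one of the
+    authenticators shipped with INN; the hard-coded username and password
+    below are placeholders for a real lookup), an authenticator written in
+    Perl might look like:
+
+        #!/usr/bin/perl -w
+        # Example authenticator: read "key: value" lines from nnrpd,
+        # check one hard-coded credential pair, and report the username.
+        use strict;
+
+        my %info;
+        while (defined (my $line = <STDIN>)) {
+            $line =~ s/\r?\n\z//;          # accept CRLF or bare LF
+            last if $line eq '.';          # "." (or EOF) ends the input
+            $info{$1} = $2 if $line =~ /^([^:]+):\s*(.*)/;
+        }
+
+        # Placeholder check; a real authenticator would consult a password
+        # database (ckpasswd.c shows the approach INN itself takes).
+        if (defined $info{ClientAuthname}
+            && defined $info{ClientPassword}
+            && $info{ClientAuthname} eq 'example'
+            && $info{ClientPassword} eq 'sesame') {
+            print "user:$info{ClientAuthname}\r\n";
+            exit 0;
+        }
+        exit 1;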
+ + There should be no extra spaces in lines sent from the hook to nnrpd; + "user:aidan" is read by nnrpd as a different username than "user: + aidan". + +Error messages + + As mentioned above, errors can be indicated by a non-zero exit value, or + termination due to an unhandled signal; both cases are logged by nnrpd. + However, external auth programs may wish to log error messages + separately. + + Although nnrpd will syslog() anything an external auth program writes to + standard error, it is generally better to use the messages.h functions, + such as warn() and die(). + + Please use the ckpasswd.c program as an example for any authenticators + you write, and ident.c as an example for any resolvers. + +HISTORY + + Written by Aidan Cully for InterNetNews. This documentation rewritten + in POD by Jeffrey M. Vinocur . + diff --git a/doc/history b/doc/history new file mode 100644 index 0000000..b5d4cc1 --- /dev/null +++ b/doc/history @@ -0,0 +1,258 @@ +$Revision: 4165 $ +This file contains a few messages of historical interest. Some of the +information in these messages is out of date (e.g., you don't need any +other software, ihave/sendme is suported, etc); see the README and +installation manual. + +The first is a mail message I sent as soon as I got the idea. + +Six months later I had something to beta, and I posted the second message +to Usenet. My ship date was optimistic. + +The third message is the application that I required all beta sites to +fill out. + +The fourth is a copy of the release notice. + +From: Rich Salz +Date: Sat, 8 Dec 90 15:23:20 EST +Message-Id: <9012082023.AA13441@litchi.bbn.com> +To: newsgurus@ucsd.edu, nntp-managers@ucbarpa.Berkeley.EDU +Subject: Speed idea. + +Suppose inews, nntp, "rnews -U", newsunbatch, etc., all just fed their +articles to a single daemon? + +An idea I started kicking around yesterday. This is intended only for +sites supporting BSD networking. I believe that anyone else who needs +this kind of speed would find Cnews good enough. + +A multi-threaded server that used non-blocking IO to read all incoming +articles on several sockets (don't forker a server, select on the +connection socket will return READOK when a connection request comes in). +All articles are read into memory, then written out to the filesystem +using a single writev call (easy way to splice the path). + +Hash the active file and compile the sys file so as soon as an article was +accepted we can write out the batchfile entries. As one special case, +write entries to another socket for articles that should be fed out via +NNTPLINK or something. + +Put the socket inside a group-access-only directory, so that only trusted +front-ends like inews "rnews -U" etc can connect to it. + +Oh yeah, for things like nntp use sendmsg/recvmesg to hand off the +feeding site to the demon once it's authenticated the incoming call and +recognized it as an "xfer no" site. + +I've a few pages of notes and code fragments to type in. + +No locks of any kind. active file is mmap'd or periodically flushed. +Keep it all in core and blat it out with a single write. + +When you want to expire, or add a group, you send a special message +on a control port, or perhaps a sighup/sigusr1 to force it to resynch. + +Any feedback? 
+ /r$ + +Path: papaya.bbn.com!rsalz +From: rsalz@bbn.com (Rich Salz) +Newsgroups: news.software.nntp,news.admin,comp.org.usenix +Subject: Seeking beta-testers for a new NNTP transfer system +Message-ID: <3632@litchi.bbn.com> +Date: 18 Jun 91 15:47:21 GMT +Followup-To: poster +Organization: Bolt, Beranek and Newman, Inc. +Lines: 72 +Xref: papaya.bbn.com news.software.nntp:1550 news.admin:15565 comp.org.usenix:418 + +InterNetNews, or INN, is a news transport system. The core part of the +package is a single long-running daemon that handles all incoming NNTP +connections. It files the articles and arranges for them to be forwarded +to downstream sites. Because it is long-running, it can be directed to +spawn other long-running processes, telling them exactly when an article +should be sent to a feed. This can replace the "watch the logfile" mode +of nntplink, for example, with a much cleaner mechanism: read the +batchfile on standard input. + +InterNetNews assumes that memory is cheap and fast while disks are slow. +No temporary files are used while incoming articles are being received, +and once processed the entire article is written out using a single +writev(2) call (this includes updating the Path and Xref headers). The +active file is kept in memory (a compile-time option can be set to use +mmap(2)), and the newsfeeds file is parsed once to build a complete matrix +of which sites receive which newsgroups. + +InterNetNews uses many features of standard BSD sockets including +non-blocking I/O and Unix-domain stream and datagram sockets. It is +highly doubtful that the official version will ever provide support for +TLI, DECNET, or other facilities. + +INN is fast. Not many hard numbers are available (that is one requirement +of being a beta-site), but some preliminary tests show it to be at least +twice as fast as the current standard NNTP/C News combination. For +example, Jim Thompson at Sun has had 20 nntpxmits feeding into a 4/490, +and was getting over 14 articles per second, with the CPU 11% utilized. I +was getting 10 articles/second feeding into a DECstations 3100, with the +program (running profiled!) 50% idle and the load average under .7. (It +is a scary thing to see several articles filed with the same timestamp.) + +The sys file format is somewhat different, and has been renamed. The +arcane "foo.all" syntax is gone, replaced with a set of order-dependant +shell patterns. For example, instead of "comp,comp.sys.sun,!comp.sys" you +would write "comp.*,!comp.sys.*,comp.sys.sun"; to not get any groups +related to binaries or pictures, you write "!*pictures*,!*binaries*". + +There are other incompatibilities as well. For example, ihave/sendme +control messages are not supported. Also the philosophy is that that +invalid articles are dropped, rather than filed into "junk." (A log +message is written with the reason, and also sent back to the upstream +feed as part of the NNTP reject reply.) The active file is taken to be +the definitive list of groups that an article wants to recieve, and if +none of an article's newsgroups are mentioned in the active file, then the +article is invalid, logged, and dropped. + +The history and log files are intended to be compatible with those created +by C News. I want to thank Henry and Geoff for their kind permission to +use DBZ and SUBST. You will need to be running C News expire or a B2.11 +expire that has been modified to use DBZ. + +The InterNetNews daemon does not implement all NNTP commands. 
If sites +within your campus are going to post or read news via NNTP, you will need +the standard NNTP distribution. The daemon will spawn the standard nntpd +if any site not mentioned in its "hosts.nntp" file connects to the TCP +port. InterNetNews includes a replacement for the "mini-inews" that comes +with the standard NNTP distribution. This can be used on any machine that +posts news and connects to an NNTP server somewhere; its use is not +limited to INN. At some point I hope to have a replacement nntpd +optimized for newsreaders, and an NNTP transmission program. These will +remove the need for any external software beyond the C News expire program. + +If you would like to beta-test this version, please FTP the file +pub/usenet/INN.BETA from cronus.bbn.com for directions. It will be a +fairly tightly-screened beta: DO NOT ASK ME FOR COPIES! Once the system +is stable, it will be freely redistributable. I hope to have the official +release by August 7, so that schools can bring the system up before the +semester starts. + /rich $alz +-- +Please send comp.sources.unix-related mail to rsalz@uunet.uu.net. +Use a domain-based address or give alternate paths, or you may lose out. + +Thanks for your interest in InterNetNews. I want to run a fairly +tightly-controlled beta test of the software before I make it generally +available. This means that I'm going to screen the sites which will be +able to participate in the test. Please don't be offended or upset by +this whole procedure. I want to make the final package as stable as soon +as possible so that the entire net can benefit (it will be freely +redistributable). I've set up this mechanism because I think it's the +best way for me to get the best test results as quickly as possible. + +I would therefore appreciate your answers to the following questions. +If you think the answers to some of them will be obvious to me (e.g., +"Describe your organization" --> "UUNET" :-) then feel free to leave it +blank. If you have any other feedback or comments, please add them. + +Email your results to + /r$ + +What software (transport, batching, readers, etc.) do you currently run? + +How much experience do you have with Usenet and NNTP? + +Describe your organization. + +How do you plan on testing InterNetNews? Be specific, describing the +machine hardware, any test servers, etc. [The answers to this one +won't be obvious to me -- you gotta write something.] + +What are the rough counts of the upstream and downstream feeds, and how do +they break down by category (UUCP, NNTP, etc.)? + +What special news functions does your server perform (gatewaying, +archiving, etc.)? + +Do you understand that by participating in the beta-test you agree not to +redistribute the software outside of your administrative domain, and that +you promise to upgrade to the official release in a timely manner? + +From: Rich Salz +Message-Id: +Newsgroups: news.software.b,news.protocols.nntp +Subject: Announcing the release of InterNetNews + +I am pleased to announce the official release of InterNetNews. + +InterNetNews, or INN, is a news transport system. The core part of the +package is a single long-running daemon that handles all incoming NNTP +connections. It files the articles and arranges for them to be forwarded +to downstream sites. Because it is long-running, it can be directed to +spawn other long-running processes, telling them exactly when an article +should be sent to a feed. + +INN is a complete Usenet system. 
It provides article expiration and +archiving, NNTP transport, and UUCP support. Nntplink works fine. + +INN does not include a newsreader. It does provide a version of the NNTP +reference implementation "clientlib" routines so that rrn and other +newsreaders compile with little trouble. The next release of xrn will +include INN support. + +The spool directory is unchanged while the history database is +upwardly-compatible with that of C News and the log file is very similar. +All system configuration files are different. + +INN assumes that memory is cheap and fast while disks are slow. No +temporary files are used while incoming articles are being received, and +once processed the entire article is written out using a single system +call (this includes updating the Path and Xref headers). The active file +is kept in memory, and the newsfeeds file is parsed at start-up to build a +complete matrix of which sites receive which newsgroups. A paper +describing the implementation was presented at the June 1992 Usenix +conference. + +INN uses many features of standard BSD sockets including non-blocking +I/O. It is highly doubtful that the official version will ever provide +support for TLI, DECNET, or other facilities. Among others, INN beta +sites include ATT Unix System V Release 4, Apple A/UX, BSDI BSD/386 0.3.3, +DEC Ultrix 3.x and 4.x, HP-UX s800 8.0, IBM AIX 3.1 and 3.2, Next NeXT-OS +2.1, Pyramid OSx 5.1, SCO Xenix 2.3.4, SGI Irix 4.0, Sequent Dynix 3.0.4 +and 3.0.12, and Sun SunOS 3.5 and 4.x. + +Almost all of the beta-testers have reported faster performance and less +load once they installed INN. Many people find it easy to maintain. + +A number of sites have graciously agreed to provide FTP access to this +release. The machine names and directories are listed below. Within +those directories you will find one or more of the following files: + README Intro and unpacking instructions; + -or- a copy appears at the end of this + README.INN article. + inn1.0.tar.Z The full distribution + inn.usenix.ps.Z The Usenix paper on INN + +The sites providing access are: + cs.utexas.edu /pub/inn + ftp.cs.widener.edu /pub/inn.tar.Z (or wherever). + ftp.germany.eu.net /pub/news/inn + ftp.ira.uka.de pub/network/news + ftp.msen.com /pub/packages/inn + ftp.uu.net /pub/news/nntp/inn + gatekeeper.dec.com /pub/news/inn + grasp1.univ-lyon1.fr /pub/unix/news/inn + munnari.oz.au /pub/news/inn + sparky.Sterling.COM /news/inn + src.doc.ic.ac.uk /computing/usenet/software/transport + stasys.sta.sub.org /pub/src/inn + (Stasys also has anonymous UUCP; contact . + ucsd.edu /INN + usc.edu /pub/inn + +Discussion about INN should be posted to news.software.b and +news.software.nntp. Email should be sent to . Please +do NOT send it to -- it will only just delay your response +since I will have to forward it to UUNET. + +The README follows after the formfeed. + /r$ diff --git a/doc/hook-perl b/doc/hook-perl new file mode 100644 index 0000000..b2e33d8 --- /dev/null +++ b/doc/hook-perl @@ -0,0 +1,597 @@ +INN Perl Filtering and Authentication Support + + This is $Revision: 7880 $ dated $Date: 2008-06-07 14:46:49 +0200 (Sat, + 07 Jun 2008) $. + + This file documents INN's built-in support for Perl filtering and reader + authentication. The code is based very heavily on work by Christophe + Wolfhugel , and his work was in turn inspired by the + existing TCL support. Please send any bug reports to inn-bugs@isc.org, + not to Christophe, as the code has been modified heavily since he + originally wrote it. 
+ + The Perl filtering support is described in more detail below. + Basically, it allows you to supply a Perl function that is invoked on + every article received by innd from a peer (the innd filter) or by nnrpd + from a reader (the nnrpd filter). This function can decide whether to + accept or reject the article, and can optionally do other, more + complicated processing (such as add history entries, cancel articles, + spool local posts into a holding area, or even modify the headers of + locally submitted posts). The Perl authentication hooks allow you to + replace or supplement the readers.conf mechanism used by nnrpd. + + For Perl filtering support, you need to have Perl version 5.004 or + newer. Earlier versions of Perl will fail with a link error at + compilation time. http://language.perl.com/info/software.html should + have the latest Perl version. + + To enable Perl support, you have to specify --with-perl when you run + configure. See INSTALL for more information. + +The innd Perl Filter + + When innd starts, it first loads the file _PATH_PERL_STARTUP_INND + (defined in include/paths.h, by default startup_innd.pl) and then loads + the file _PATH_PERL_FILTER_INND (also defined in include/paths.h, by + default filter_innd.pl). Both of these files must be located in the + directory specified by pathfilter in inn.conf + (/usr/local/news/bin/filter by default). The default directory for + filter code can be specified at configure time by giving the flag + --with-filter-dir to configure. + + INN doesn't care what Perl functions you define in which files. The + only thing that's different about the two files is when they're loaded. + startup_innd.pl is loaded only once, when innd first starts, and is + never reloaded as long as innd is running. Any modifications to that + file won't be noticed by innd; only stopping and restarting innd can + cause it to be reloaded. + + filter_innd.pl, on the other hand, can be reloaded on command (with + "ctlinnd reload filter.perl 'reason'"). Whenever filter_innd.pl is + loaded, including the first time at innd startup, the Perl function + filter_before_reload() is called before it's reloaded and the function + filter_after_reload() is called after it's reloaded (if the functions + exist). Additionally, any code in either startup_innd.pl or + filter_innd.pl at the top level (in other words, not inside a sub { }) + is automatically executed by Perl when the files are loaded. + + This allows one to do things like write out filter statistics whenever + the filter is reloaded, load a cache into memory, flush cached data to + disk, or other similar operations that should only happen at particular + times or with manual intervention. Remember, any code not inside + functions in startup_innd.pl is executed when that file is loaded, and + it's loaded only once when innd first starts. That makes it the ideal + place to put initialization code that should only run once, or code to + load data that was preserved on disk across a stop and restart of innd + (perhaps using filter_mode() -- see below). + + As mentioned above, "ctlinnd reload filter.perl 'reason'" (or "ctlinnd + reload all 'reason'") will cause filter_innd.pl to be reloaded. If the + function filter_art() is defined after the file has been reloaded, + filtering is turned on. Otherwise, filtering is turned off. (Note that + due to the way Perl stores functions, once you've defined filter_art(), + you can't undefine it just by deleting it from the file and reloading + the filter. 
You'll need to replace it with an empty sub.) + + The Perl function filter_art() is the heart of a Perl filter. Whenever + an article is received from a peer, via either IHAVE or TAKETHIS, + filter_art() is called if Perl filtering is turned on. It receives no + arguments, and should return a single scalar value. That value should + be the empty string to indicate that INN should accept the article, or + some rejection message to indicate that the article should be rejected. + + filter_art() has access to a global hash named %hdr, which contains all + of the standard headers present in the article and their values. The + standard headers are: + + Also-Control, Approved, Bytes, Cancel-Key, Cancel-Lock, + Content-Base, Content-Disposition, Content-Transfer-Encoding, + Content-Type, Control, Date, Date-Received, Distribution, Expires, + Face, Followup-To, From, In-Reply-To, Injection-Date, Injection-Info, + Keywords, Lines, List-ID, Message-ID, MIME-Version, Newsgroups, + NNTP-Posting-Date, NNTP-Posting-Host, Organization, Originator, + Path, Posted, Posting-Version, Received, References, Relay-Version, + Reply-To, Sender, Subject, Supersedes, User-Agent, + X-Auth, X-Canceled-By, X-Cancelled-By, X-Complaints-To, X-Face, + X-HTTP-UserAgent, X-HTTP-Via, X-Mailer, X-Modbot, X-Modtrace, + X-Newsposter, X-Newsreader, X-No-Archive, X-Original-Message-ID, + X-Original-Trace, X-Originating-IP, X-PGP-Key, X-PGP-Sig, + X-Poster-Trace, X-Postfilter, X-Proxy-User, X-Submissions-To, + X-Trace, X-Usenet-Provider, Xref. + + Note that all the above headers are as they arrived, not modified by + your INN (especially, the Xref: header, if present, is the one of the + remote site which sent you the article, and not yours). + + For example, the Newsgroups: header of the article is accessible inside + the Perl filter as $hdr{'Newsgroups'}. In addition, $hdr{'__BODY__'} + will contain the full body of the article and $hdr{'__LINES__'} will + contain the number of lines in the body of the article. + + The contents of the %hdr hash for a typical article may therefore look + something like this: + + %hdr = (Subject => 'MAKE MONEY FAST!!', + From => 'Joe Spamer ', + Date => '10 Sep 1996 15:32:28 UTC', + Newsgroups => 'alt.test', + Path => 'news.example.com!not-for-mail', + Organization => 'Spammers Anonymous', + Lines => '5', + Distribution => 'usa', + 'Message-ID' => '<6.20232.842369548@example.com>', + __BODY__ => 'Send five dollars to the ISC, c/o ...', + __LINES__ => 5 + ); + + Note that the value of $hdr{Lines} is the contents of the Lines: header + of the article and may bear no resemblence to the actual length of the + article. $hdr{__LINES__} is the line count calculated by INN, and is + guaranteed to be accurate. + + The %hdr hash should not be modified inside filter_art(). Instead, if + any of the contents need to be modified temporarily during filtering + (smashing case, for example), copy them into a seperate variable first + and perform the modifications on the copy. Currently, $hdr{__BODY__} is + the only data that will cause your filter to die if you modify it, but + in the future other keys may also contain live data. Modifying live INN + data in Perl will hopefully only cause a fatal exception in your Perl + code that disables Perl filtering until you fix it, but it's possible + for it to cause article munging or even core dumps in INN. So always, + always make a copy first. + + As mentioned above, if filter_art() returns the empty string (''), the + article is accepted. 
Note that this must be the empty string, not 0 or
+    undef.  Otherwise, the article is rejected, and whatever scalar
+    filter_art() returns (typically a string) will be taken as the reason
+    why the article was rejected.  This reason will be returned to the
+    remote peer as well as logged to the news logs.  (innreport, in its
+    nightly report, will summarize the number of articles rejected by the
+    Perl filter and include a count of how many articles were rejected
+    with each reason string.)
+
+    One other type of filtering is also supported.  If Perl filtering is
+    turned on and the Perl function filter_messageid() is defined, that
+    function will be called for each message ID received from a peer (via
+    either CHECK or IHAVE).  The function receives a single argument, the
+    message ID, and like filter_art() should return an empty string to
+    accept the article or an error string to refuse the article.  This
+    function is called before any history lookups and for every article
+    offered to innd with CHECK or IHAVE (before the actual article is
+    sent).  Accordingly, the message ID is the only information it has
+    about the article (the %hdr hash will be empty).  This code would sit
+    in a performance-critical hot path in a typical server, and therefore
+    should be as fast as possible, but it can do things like refuse
+    articles from certain hosts or cancels for already rejected articles
+    (if they follow the $alz convention) without having to take the
+    network bandwidth hit of accepting the entire article first.
+
+    Note that you cannot rely on filter_messageid() being called for every
+    incoming article; articles sent via TAKETHIS without an earlier CHECK
+    will never pass through filter_messageid() and will only go through
+    filter_art().
+
+    Finally, whenever ctlinnd throttle, ctlinnd pause, or ctlinnd go is
+    run, the Perl function filter_mode() is called if it exists.  It
+    receives no arguments and returns no value, but it has access to a
+    global hash %mode that contains three values:
+
+        Mode     The current server mode (throttled, paused, or running)
+        NewMode  The new mode the server is going to
+        reason   The reason that was given to ctlinnd
+
+    One possible use for this function is to save filter state across a
+    restart of innd.  There isn't any Perl function which is called when
+    INN shuts down, but using filter_mode() the Perl filter can dump its
+    state to disk whenever INN is throttled.  Then, if the news
+    administrator follows the strongly recommended shutdown procedure of
+    throttling the server before shutting it down, the filter state will
+    be safely saved to disk and can be reloaded when innd restarts
+    (possibly by startup_innd.pl).
+
+    The state of the Perl interpreter in which all of these Perl functions
+    run is preserved over the lifetime of innd.  In other words, it's
+    permissible for the Perl code to create its own global Perl variables,
+    data structures, saved state, and the like, and all of that will be
+    available to filter_art() and filter_messageid() each time they're
+    called.  The only variable INN fiddles with (or pays any attention to
+    at all) is %hdr, which is cleared after each call to filter_art().
+
+    Perl filtering can be turned off with "ctlinnd perl n" and back on
+    again with "ctlinnd perl y".  Perl filtering is turned off
+    automatically if loading of the filter fails or if the filter code
+    returns any sort of a fatal error (either due to Perl itself or due to
+    a "die" in the Perl code).
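+
+    To make the pieces above concrete, here is a deliberately minimal
+    filter_innd.pl sketch.  The Subject test, the message ID pattern, and
+    the log text are placeholders rather than recommended policy; see the
+    samples directory in the INN source for fuller examples.
+
+        # Minimal filter_innd.pl sketch.
+
+        sub filter_art {
+            my $rval = '';                  # '' means accept the article
+            if ($hdr{'Subject'} =~ /MAKE MONEY FAST/i) {
+                $rval = 'Spam rejected';    # sent to the peer and logged
+            }
+            return $rval;
+        }
+
+        sub filter_messageid {
+            my ($messageid) = @_;
+            # Keep this fast: it runs for every CHECK or IHAVE, before the
+            # article itself is transferred, and %hdr is empty here.
+            return 'Poisoned message ID' if $messageid =~ /\.invalid>$/i;
+            return '';
+        }
+
+        sub filter_mode {
+            # Called on "ctlinnd throttle", "ctlinnd pause", and "ctlinnd
+            # go"; a convenient place to save or restore filter state.
+            INN::syslog('N', "filter: mode is now $mode{NewMode}");
+        }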
+ +Supported innd Callbacks + + innd makes seven functions available to any of its embedded Perl code. + Those are: + + INN::addhist(*messageid*, *arrival*, *articledate*, *expire*, *paths*) + Adds *messageid* to the history database. All of the arguments + except the first one are optional; the times default to the current + time and the paths field defaults to the empty string. (For those + unfamiliar with the fields of a history(5) database entry, the + *arrival* is normally the time at which the server accepts the + article, the *articledate* is from the Date header of the article, + the *expire* is from the Expires header of the article, and the + *paths* field is the storage API token. All three times as measured + as a time_t since the epoch.) Returns true on success, false + otherwise. + + INN::article(*messageid*) + Returns the full article (as a simple string) identified by + *messageid*, or undef if it isn't found. Each line will end with a + simple \n, but leading periods may still be doubled if the article + is stored in wire format. + + INN::cancel(*messageid*) + Cancels *messageid*. (This is equivalent to "ctlinnd cancel"; it + cancels the message on the local server, but doesn't post a cancel + message or do anything else that affects anything other than the + local server.) Returns true on success, false otherwise. + + INN::filesfor(*messageid*) + Returns the *paths* field of the history entry for the given + *messageid*. This will be the storage API token for the message. + If *messageid* isn't found in the history database, returns undef. + + INN::havehist(*messageid*) + Looks up *messageid* in the history database and returns true if + it's found, false otherwise. + + INN::head(*messageid*) + Returns the header (as a simple string) of the article identified by + *messageid*, or undef if it isn't found. Each line will end with a + simple \n (in other words, regardless of the format of article + storage, the returned string won't be in wire format). + + INN::newsgroup(*newsgroup*) + Returns the status of *newsgroup* (the last field of the active file + entry for that newsgroup). See active(5) for a description of the + possible values and their meanings (the most common are "y" for an + unmoderated group and "m" for a moderated group). If *newsgroup* + isn't in the active file, returns undef. + + These functions can only be used from inside the innd Perl filter; + they're not available in the nnrpd filter. + +Common Callbacks + + The following additional function is available from inside filters + embedded in innd, and is also available from filters embedded in nnrpd + (see below): + + INN::syslog(level, message) + Logs a message via syslog(2). This is quite a bit more reliable and + portable than trying to use Sys::Syslog from inside the Perl filter. + Only the first character of the level argument matters; the valid + letters are the first letters of ALERT, CRIT, ERR, WARNING, NOTICE, + INFO, and DEBUG (case-insensitive) and specify the priority at which + the message is logged. If a level that doesn't match any of those + levels is given, the default priority level is LOG_NOTICE. The + second argument is the message to log; it will be prefixed by + "filter: " and logged to syslog with facility LOG_NEWS. + +The nnrpd Posting Filter + + Whenever Perl support is needed in nnrpd, it first loads the file + _PATH_PERL_FILTER_NNRPD (defined in include/paths.h, by default + filter_nnrpd.pl). 
This file must be located in the directory specified + by pathfilter in inn.conf (/usr/local/news/bin/filter by default). The + default directory for filter code can be specified at configure time by + giving the flag --with-filter-dir to configure. + + If filter_nnrpd.pl loads successfully and defines the Perl function + filter_post(), Perl filtering is turned on. Otherwise, it's turned off. + If filter_post() ever returns a fatal error (either from Perl or from a + "die" in the Perl code), Perl filtering is turned off for the life of + that nnrpd process and any further posts made during that session won't + go through the filter. + + While Perl filtering is on, every article received by nnrpd via the POST + command is passed to the filter_post() Perl function before it is passed + to INN (or mailed to the moderator of a moderated newsgroup). If + filter_post() returns an empty string (''), the article is accepted and + normal processing of it continues. Otherwise, the article is rejected + and the string returned by filter_post() is returned to the client as + the error message (with some exceptions; see below). + + filter_post() has access to a global hash %hdr, which contains all of + the headers of the article. (Unlike the innd Perl filter, %hdr for the + nnrpd Perl filter contains *all* of the headers, not just the standard + ones. If any of the headers are duplicated, though, %hdr will contain + only the value of the last occurance of the header. nnrpd will reject + the article before the filter runs if any of the standard headers are + duplicated.) It also has access to the full body of the article in the + variable $body, and if the poster authenticated via AUTHINFO (or if + either Perl authentication or a readers.conf authentication method is + used and produces user information), it has access to the authenticated + username of the poster in the variable $user. + + Unlike the innd Perl filter, the nnrpd Perl filter can modify the %hdr + hash. In fact, if the Perl variable $modify_headers is set to true + after filter_post() returns, the contents of the %hdr hash will be + written back to the article replacing the original headers. + filter_post() can therefore make any modifications it wishes to the + headers and those modifications will be reflected in the article as it's + finally posted. The article body cannot be modified in this way; any + changes to $body will just be ignored. + + Be careful when using the ability to modify headers. filter_post() runs + after all the normal consistency checks on the headers and after server + supplied headers (like Message-ID: and Date:) are filled in. Deleting + required headers or modifying headers that need to follow a strict + format can result in nnrpd trying to post nonsense articles (which will + probably then be rejected by innd). If $modify_headers is set, + *everything* in the %hdr hash is taken to be article headers and added + to the article. + + If filter_post() returns something other than the empty string, this + message is normally returned to the client as an error. There are two + exceptions: If the string returned begins with "DROP", the post will be + silently discarded and success returned to the client. If the string + begins with "SPOOL", success is returned to the client, but the post is + saved in a directory named "spam" under the directory specified by + pathincoming in inn.conf (in a directory named "spam/mod" if the post is + to a moderated group). 
This is intended to allow manual inspection of + the suspect messages; if they should be posted, they can be manually + moved out of the subdirectory to the directory specified by pathincoming + in inn.conf, where they can be posted by running "rnews -U". If you use + this functionality, make sure those directories exist. + +Changes to Perl Authentication Support for nnrpd + + The old authentication functionality has been combined with the new + readers.conf mechanism by Erik Klavon ; bug reports + should however go to inn-bugs@isc.org, not Erik. + + The remainder of this section is an introduction to the new mechanism + (which uses the perl_auth: and perl_access: readers.conf parameters) + with porting/migration suggestions for people familiar with the old + mechanism (identifiable by the nnrpperlauth: parameter in inn.conf). + + Other people should skip this section. + + The perl_auth parameter allows the use of Perl to authenticate a user. + Scripts (like those from the old mechanism) are listed in readers.conf + using perl_auth in the same manner other authenticators are using auth: + + perl_auth: "/path/to/script/auth1.pl" + + The file given as argument to perl_auth should contain the same + procedures as before. The global hash %attributes remains the same, + except for the removal of the "type" entry which is no longer needed in + this modification and the addition of several new entries (port, + intipaddr, intport) described below. The return array now only contains + either two or three elements, the first of which is the NNTP return + code. The second is an error string which is passed to the client if the + error code indicates that the authentication attempt has failed. This + allows a specific error message to be generated by the perl script in + place of "Authentication failed". An optional third return element if + present will be used to match the connection with the users: parameter + in access groups and will also be the username logged. If this element + is absent, the username supplied by the client during authentication + will be used as was the previous behavior. + + The perl_access parameter (described below) is also new; it allows the + dynamic generation of an access group for an incoming connection using a + Perl script. If a connection matches an auth group which has a + perl_access parameter, all access groups in readers.conf are ignored; + instead the procedure described below is used to generate an access + group. This concept is due to Jeffrey M. Vinocur. + + The new functionality should provide all of the existing capabilities of + the Perl hook, in combination with the flexibility of readers.conf and + the use of other authentication and resolving programs. To use Perl + authentication code that predates the readers.conf mechanism, you would + need to modify the code slightly (see below for the new specification) + and supply a simple readers.conf file. If you don't want to modify your + code, the samples directory has nnrpd_auth_wrapper.pl and + nnrpd_access_wrapper.pl which should allow you to use your old code + without needing to change it. + + However, before trying to use your old Perl code, you may want to + consider replacing it entirely with non-Perl authentication. (With + readers.conf and the regular authenticator and resolver programs, much + of what once required Perl can be done directly.) 
Even if the
+    functionality is not available directly, you may wish to write a new
+    authenticator or resolver (which can be done in whatever language you
+    prefer to work in).
+
+Perl Authentication Support for nnrpd
+
+    Support for authentication via Perl is provided in nnrpd by the
+    inclusion of a perl_auth: parameter in a readers.conf auth group.
+    perl_auth: works exactly like the auth: parameter in readers.conf,
+    except that it calls the script given as argument using the Perl hook
+    rather than treating it as an external program.
+
+    If the processing of readers.conf requires that a perl_auth: statement
+    be used for authentication, Perl is loaded (if it has yet to be) and the
+    file given as argument to the perl_auth: parameter is loaded as well. If
+    a Perl function auth_init() is defined by that file, it is called
+    immediately after the file is loaded. It takes no arguments and returns
+    nothing.
+
+    Provided the file loads without errors, auth_init() (if present) runs
+    without fatal errors, and a Perl function authenticate() is defined,
+    authenticate() will then be called. authenticate() takes no arguments,
+    but it has access to a global hash %attributes which contains
+    information about the connection as follows: $attributes{hostname} will
+    contain the hostname (or the IP address if it doesn't resolve) of the
+    client machine, $attributes{ipaddress} will contain its IP address (as a
+    string), $attributes{port} will contain the client port (as an integer),
+    $attributes{interface} contains the hostname of the interface the client
+    connected on, $attributes{intipaddr} contains the IP address (as a
+    string) of the interface the client connected on, $attributes{intport}
+    contains the port (as an integer) on the interface the client connected
+    on, $attributes{username} will contain the provided username and
+    $attributes{password} the password.
+
+    authenticate() should return a two or three element array. The first
+    element is the NNTP response code to return to the client, the second
+    element is an error string which is passed to the client if the response
+    code indicates that the authentication attempt has failed. An optional
+    third return element, if present, will be used to match the connection
+    with the users: parameter in access groups and will also be the username
+    logged. If this element is absent, the username supplied by the client
+    during authentication will be used for matching and logging.
+
+    The NNTP response code should probably be either 281 (authentication
+    successful) or 502 (authentication unsuccessful). If the code returned
+    is anything other than 281, nnrpd will print an authentication error
+    message and drop the connection and exit.
+
+    If authenticate() dies (either due to a Perl error or due to calling
+    die), or if it returns anything other than the two or three element
+    array described above, an internal error will be reported to the client,
+    the exact error will be logged to syslog, and nnrpd will drop the
+    connection and exit.
+
+Dynamic Generation of Access Groups
+
+    A Perl script may be used to dynamically generate an access group which
+    is then used to determine the access rights of the client. This occurs
+    whenever the perl_access: parameter is specified in an auth group which
+    has successfully matched the client. Only one perl_access: statement is
+    allowed in an auth group. This parameter should not be mixed with a
+    python_access: statement in the same auth group.
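+
+    To make the relationship between these parameters concrete, here is an
+    untested sketch of a readers.conf auth group that uses both hooks; the
+    group name and script paths are only examples (pathfilter defaults to
+    /usr/local/news/bin/filter):
+
+        auth "perl-hooks" {
+            hosts: "*"
+            perl_auth: "/usr/local/news/bin/filter/auth1.pl"
+            perl_access: "/usr/local/news/bin/filter/access1.pl"
+        }
+
+    With such a group, authenticate() from auth1.pl decides whether the
+    client may log in, and access() from access1.pl is then used to build
+    the access group as described below.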
+ + When a perl_access: parameter is encountered, Perl is loaded (if it has + yet to be) and the file given as argument is loaded as well. Provided + the file loads without errors, and a Perl function access() is defined, + access() will then be called. access() takes no arguments, but it has + access to a global hash %attributes which contains information about the + connection as follows: $attributes{hostname} will contain the hostname + (or the IP address if it doesn't resolve) of the client machine, + $attributes{ipaddress} will contain its IP address (as a string), + $attributes{port} will contain the client port (as an integer), + $attributes{interface} contains the hostname of the interface the client + connected on, $attributes{intipaddr} contains the IP address (as a + string) of the interface the client connected on, $attributes{intport} + contains the port (as an integer) on the interface the client connected + on, $attributes{username} will contain the provided username and domain + (in username@domain form). + + access() returns a hash, containing the desired access parameters and + values. Here is an untested example showing how to dynamically generate + a list of newsgroups based on the client's username and domain. + + my %hosts = ( "example.com" => "example.*", "isc.org" => "isc.*" ); + + sub access { + %return_hash = ( + "max_rate" => "10000", + "addnntppostinghost" => "true", + # ... + ); + if( defined $attributes{username} && + $attributes{username} =~ /.*@(.*)/ ) + { + $return_hash{"virtualhost"} = "true"; + $return_hash{"path"} = $1; + $return_hash{"newsgroups"} = $hosts{$1}; + } else { + $return_hash{"read"} = "*"; + $return_hash{"post"} = "local.*" + } + return %return_hash; + } + + Note that both the keys and values are quoted strings. These values are + to be returned to a C program and must be quoted strings. For values + containing one or more spaces, it is not necessary to include extra + quotes inside the string. + + While you may include the users: parameter in a dynamically generated + access group, some care should be taken (unless your pattern is just * + which is equivalent to leaving the parameter out). The group created + with the values returned from the Perl script is the only one considered + when nnrpd attempts to find an access group matching the connection. If + a users: parameter is included and it doesn't match the connection, then + the client will be denied access since there are no other access groups + which could match the connection. + + If access() dies (either due to a Perl error or due to calling die), or + if it returns anything other than a hash as described above, an internal + error will be reported to the client, the exact error will be logged to + syslog, and nnrpd will drop the connection and exit. + +Notes on Writing Embedded Perl + + All Perl evaluation is done inside an implicit eval block, so calling + die in Perl code will not kill the innd or nnrpd process. Neither will + Perl errors (such as syntax errors). However, such errors will have + negative effects (fatal errors in the innd or nnrpd filter will cause + filtering to be disabled, and fatal errors in the nnrpd authentication + code will cause the client connection to be terminated). + + Calling exit directly, however, *will* kill the innd or nnrpd process, + so don't do that. 
Similarly, you probably don't want to call fork (or + any other function that results in a fork such as system, + IPC::Open3::open3(), or any use of backticks) since there are possibly + unflushed buffers that could get flushed twice, lots of open state that + may not get closed properly, and innumerable other potential problems. + In general, be aware that all Perl code is running inside a large and + complicated C program, and Perl code that impacts the process as a whole + is best avoided. + + You can use print and warn inside Perl code to send output to STDOUT or + STDERR, but you probably shouldn't. Instead, open a log file and print + to it instead (or, in the innd filter, use INN::syslog() to write + messages via syslog like the rest of INN). If you write to STDOUT or + STDERR, where that data will go depends on where the filter is running; + inside innd, it will go to the news log or the errlog, and inside nnrpd + it will probably go nowhere but could go to the client. The nnrpd + filter takes some steps to try to keep output from going across the + network connection to the client (which would probably result in a very + confused client), but best not to take the chance. + + For similar reasons, try to make your Perl code -w clean, since Perl + warnings are written to STDERR. (INN won't run your code under -w, but + better safe than sorry, and some versions of Perl have some mandatory + warnings you can't turn off.) + + You *can* use modules in your Perl code, just like you would in an + ordinary Perl script. You can even use modules that dynamically load C + code. Just make sure that none of the modules you use go off behind + your back to do any of the things above that are best avoided. + + Whenever you make any modifications to the Perl code, and particularly + before starting INN or reloading filter.perl with new code, you should + run perl -wc on the file. This will at least make sure you don't have + any glaring syntax errors. Remember, if there are errors in your code, + filtering will be disabled, which could mean that posts you really + wanted to reject will leak through and authentication of readers may be + totally broken. + + The samples directory has example startup_innd.pl, filter_innd.pl, + filter_nnrpd.pl, and nnrpd_auth.pl files that contain some simplistic + examples. Look them over as a starting point when writing your own. + +Available Packages + + This is an unofficial list of known filtering packages at the time of + publication. This is not an endorsement of these filters by the ISC or + the INN developers, but is included as assistance in locating packages + which make use of this filter mechanism. + + CleanFeed Jeremy Nixon + + A spam filter catching excessive multi-posting and a host of + other things. Uses filter_innd.pl exclusively, requires the MD5 + Perl module. Probably the most popular and widely-used Perl + filter around. + + Usenet II Filter Edward S. Marshall + + Checks for "soundness" according to Usenet II guidelines in the + net.* hierarchy. Designed to use filter_nnrpd.pl. + + News Gizmo Aidan Cully + + A posting filter for helping a site enforce Usenet-II soundness, + and for quotaing the number of messages any user can post to + Usenet daily. diff --git a/doc/hook-python b/doc/hook-python new file mode 100644 index 0000000..f6ef5c0 --- /dev/null +++ b/doc/hook-python @@ -0,0 +1,614 @@ +INN Python Filtering and Authentication Support + + This file documents INN's built-in optional support for Python article + filtering. 
It is patterned after the Perl and (now obsolete) TCL hooks + previously added by Bob Heiney and Christophe Wolfhugel. + + For this filter to work successfully, you will need to have at least + Python 1.5.2 installed. You can obtain it from + . + + The innd Python interface and the original Python filtering + documentation were written by Greg Andruk (nee Fluffy) + . The Python authentication and authorization support + for nnrpd as well as the original documentation for it were written by + Ilya Etingof in December 1999. + +Installation + + Once you have built and installed Python, you can cause INN to use it by + adding the --with-python switch to your "configure" command. You will + need to have all the headers and libraries required for embedding Python + into INN; they can be found in Python development packages, which + include header files and static libraries. + + You will then be able to use Python authentication, dynamic access group + generation and dynamic access control support in nnrpd along with + filtering support in innd. + + See the ctlinnd(8) manual page to learn how to enable, disable and + reload Python filters on a running server (especially "ctlinnd mode", + "ctlinnd python y|n" and "ctlinnd reload filter.python 'reason'"). + + Also, see the filter_innd.py, nnrpd_auth.py, nnrpd_access.py and + nnrpd_dynamic.py samples in your filters directory for a demonstration + of how to get all this working. + +Writing an innd Filter + + Introduction + + You need to create a filter_innd.py module in INN's filter directory + (see the *pathfilter* setting in inn.conf). A heavily-commented sample + is provided; you can use it as a template for your own filter. There is + also an INN.py module there which is not actually used by INN; it is + there so you can test your module interactively. + + First, define a class containing the methods you want to provide to + innd. Methods innd will use if present are: + + __init__(*self*) + Not explicitly called by innd, but will run whenever the filter + module is (re)loaded. This is a good place to initialize constants + or pick up where "filter_before_reload" or "filter_close" left off. + + filter_before_reload(*self*) + This will execute any time a "ctlinnd reload all 'reason'" or + "ctlinnd reload filter.python 'reason'" command is issued. You can + use it to save statistics or reports for use after reloading. + + filter_close(*self*) + This will run when a "ctlinnd shutdown 'reason'" command is + received. + + filter_art(*self*, *art*) + *art* is a dictionary containing an article's headers and body. + This method is called every time innd receives an article. 
The + following can be defined: + + Also-Control, Approved, Bytes, Cancel-Key, Cancel-Lock, + Content-Base, Content-Disposition, Content-Transfer-Encoding, + Content-Type, Control, Date, Date-Received, Distribution, Expires, + Face, Followup-To, From, In-Reply-To, Injection-Date, Injection-Info, + Keywords, Lines, List-ID, Message-ID, MIME-Version, Newsgroups, + NNTP-Posting-Date, NNTP-Posting-Host, Organization, Originator, + Path, Posted, Posting-Version, Received, References, Relay-Version, + Reply-To, Sender, Subject, Supersedes, User-Agent, + X-Auth, X-Canceled-By, X-Cancelled-By, X-Complaints-To, X-Face, + X-HTTP-UserAgent, X-HTTP-Via, X-Mailer, X-Modbot, X-Modtrace, + X-Newsposter, X-Newsreader, X-No-Archive, X-Original-Message-ID, + X-Original-Trace, X-Originating-IP, X-PGP-Key, X-PGP-Sig, + X-Poster-Trace, X-Postfilter, X-Proxy-User, X-Submissions-To, + X-Trace, X-Usenet-Provider, Xref, __BODY__, __LINES__. + + Note that all the above values are as they arrived, not modified by + your INN (especially, the Xref: header, if present, is the one of + the remote site which sent you the article, and not yours). + + These values will be buffer objects holding the contents of the same + named article headers, except for the special "__BODY__" and + "__LINES__" items. Items not present in the article will contain + "None". + + "art('__BODY__')" is a buffer object containing the article's entire + body, and "art('__LINES__')" is an int holding innd's reckoning of + the number of lines in the article. All the other elements will be + buffers with the contents of the same-named article headers. + + The Newsgroups: header of the article is accessible inside the + Python filter as "art['Newsgroups']". + + If you want to accept an article, return "None" or an empty string. + To reject, return a non-empty string. The rejection strings will be + shown to local clients and your peers, so keep that in mind when + phrasing your rejection responses. + + filter_messageid(*self*, *msgid*) + *msgid* is a buffer object containing the ID of an article being + offered by IHAVE or CHECK. Like with "filter_art", the message will + be refused if you return a non-empty string. If you use this + feature, keep it light because it is called at a rather busy place + in innd's main loop. Also, do not rely on this function alone to + reject by ID; you should repeat the tests in "filter_art" to catch + articles sent with TAKETHIS but no CHECK. + + filter_mode(*self*, *oldmode*, *newmode*, *reason*) + When the operator issues a ctlinnd "pause", "throttle", "go", + "shutdown" or "xexec" command, this function can be used to do + something sensible in accordance with the state change. Stamp a log + file, save your state on throttle, etc. *oldmode* and *newmode* + will be strings containing one of the values in ("running", + "throttled", "paused", "shutdown", "unknown"). *oldmode* is the + state innd was in before ctlinnd was run, *newmode* is the state + innd will be in after the command finishes. *reason* is the comment + string provided on the ctlinnd command line. + + How to Use these Methods with innd + + To register your methods with innd, you need to create an instance of + your class, import the built-in INN module, and pass the instance to + "INN.set_filter_hook". For example: + + class Filter: + def filter_art(self, art): + ... + blah blah + ... + + def filter_messageid(self, id): + ... + yadda yadda + ... 
+ + import INN + myfilter = Filter() + INN.set_filter_hook(myfilter) + + When writing and testing your Python filter, don't be afraid to make use + of "try:"/"except:" and the provided "INN.syslog" function. stdout and + stderr will be disabled, so your filter will die silently otherwise. + + Also, remember to try importing your module interactively before loading + it, to ensure there are no obvious errors. One typo can ruin your whole + filter. A dummy INN.py module is provided to facilitate testing outside + the server. To test, change into your filter directory and use a + command like: + + python -ic 'import INN, filter_innd' + + You can define as many or few of the methods listed above as you want in + your filter class (it is fine to define more methods for your own use; + innd will not be using them but your filter can). If you *do* define + the above methods, GET THE PARAMETER COUNTS RIGHT. There are checks in + innd to see whether the methods exist and are callable, but if you + define one and get the parameter counts wrong, innd WILL DIE. You have + been warned. Be careful with your return values, too. The "filter_art" + and "filter_messageid" methods have to return strings, or "None". If + you return something like an int, innd will *not* be happy. + + A Note regarding Buffer Objects + + Buffer objects are cousins of strings, new in Python 1.5.2. Using + buffer objects may take some getting used to, but we can create buffers + much faster and with less memory than strings. + + For most of the operations you will perform in filters (like + "re.search", "string.find", "md5.digest") you can treat buffers just + like strings, but there are a few important differences you should know + about: + + # Make a string and two buffers. + s = "abc" + b = buffer("def") + bs = buffer("abc") + + s == bs # - This is false because the types differ... + buffer(s) == bs # - ...but this is true, the types now agree. + s == str(bs) # - This is also true, but buffer() is faster. + s[:2] == bs[:2] # - True. Buffer slices are strings. + + # While most string methods will take either a buffer or a string, + # string.join (in the string module) insists on using only strings. + import string + string.join([str(b), s], '.') # Returns 'def.abc'. + '.'.join([str(b), s]) # Returns 'def.abc' too. + '.'.join([b, s]) # This raises a TypeError. + + e = s + b # This raises a TypeError, but... + + # ...these two both return the string 'abcdef'. The first one + # is faster -- choose buffer() over str() whenever you can. + e = buffer(s) + b + f = s + str(b) + + g = b + '>' # This is legal, returns the string 'def>'. + + Functions Supplied by the Built-in innd Module + + Besides "INN.set_filter_hook" which is used to register your methods + with innd as it has already been explained above, the following + functions are available from Python scripts: + + addhist(*message-id*) + article(*message-id*) + cancel(*message-id*) + havehist(*message-id*) + hashstring(*string*) + head(*message-id*) + newsgroup(*groupname*) + syslog(*level*, *message*) + + Therefore, not only can innd use Python, but your filter can use some of + innd's features too. Here is some sample Python code to show what you + get with the previously listed functions. + + import INN + + # Python's native syslog module isn't compiled in by default, + # so the INN module provides a replacement. The first parameter + # tells the Unix syslogger what severity to use; you can + # abbreviate down to one letter and it's case insensitive. 
+ # Available levels are (in increasing levels of seriousness) + # Debug, Info, Notice, Warning, Err, Crit, and Alert. (If you + # provide any other string, it will be defaulted to Notice.) The + # second parameter is the message text. The syslog entries will + # go to the same log files innd itself uses, with a 'python:' + # prefix. + syslog('warning', 'I will not buy this record. It is scratched.') + animals = 'eels' + vehicle = 'hovercraft' + syslog('N', 'My %s is full of %s.' % (vehicle, animals)) + + # Let's cancel an article! This only deletes the message on the + # local server; it doesn't send out a control message or anything + # scary like that. Returns 1 if successful, else 0. + if INN.cancel(''): + cancelled = "yup" + else: + cancelled = "nope" + + # Check if a given message is in history. This doesn't + # necessarily mean the article is on your spool; cancelled and + # expired articles hang around in history for a while, and + # rejected articles will be in there if you have enabled + # remembertrash in inn.conf. Returns 1 if found, else 0. + if INN.havehist(''): + comment = "*yawn* I've already seen this article." + else: + comment = 'Mmm, fresh news.' + + # Here we are running a local spam filter, so why eat all those + # cancels? We can add fake entries to history so they'll get + # refused. Returns 1 on success, 0 on failure. + cancelled_id = buffer('') + if INN.addhist("') + artheader = INN.head('') + + # As we can compute a hash digest for a string, we can obtain one + # for artbody. It might be of help to detect spam. + digest = INN.hashstring(artbody) + + # Finally, do you want to see if a given newsgroup is moderated or + # whatever? INN.newsgroup returns the last field of a group's + # entry in active as a string. + froupflag = INN.newsgroup('alt.fan.karl-malden.nose') + if froupflag == '': + moderated = 'no such newsgroup' + elif froupflag == 'y': + moderated = "nope" + elif froupflag == 'm': + moderated = "yep" + else: + moderated = "something else" + +Writing an nnrpd Filter + + Changes to Python Authentication and Access Control Support for nnrpd + + The old authentication and access control functionality has been + combined with the new readers.conf mechanism by Erik Klavon + ; bug reports should however go to , + not Erik. + + The remainder of this section is an introduction to the new mechanism + (which uses the *python_auth*, *python_access*, and *python_dynamic* + readers.conf parameters) with porting/migration suggestions for people + familiar with the old mechanism (identifiable by the now deprecated + *nnrpperlauth* parameter in inn.conf). + + Other people should skip this section. + + The *python_auth* parameter allows the use of Python to authenticate a + user. Authentication scripts (like those from the old mechanism) are + listed in readers.conf using *python_auth* in the same manner other + authenticators are using *auth*: + + python_auth: "nnrpd_auth" + + It uses the script named nnrpd_auth.py (note that ".py" is not present + in the *python_auth* value). + + Scripts should be placed as before in the filter directory (see the + *pathfilter* setting in inn.conf). The new hook method "authen_init" + takes no arguments and its return value is ignored; its purpose is to + provide a means for authentication specific initialization. The hook + method "authen_close" is the more specific analogue to the old "close" + method. These two method hooks are not required, contrary to + "authenticate", the main method. 
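+
+    As a minimal, untested sketch (the hard-coded password table is purely
+    illustrative and not part of INN), a script named by *python_auth* might
+    provide these hooks as follows; the return values of "authenticate"
+    follow the convention described in the next paragraph:
+
+        class AUTH:
+            def authen_init(self):
+                # Hypothetical user database; a real script would consult
+                # some external backend instead.
+                self.users = {'reader': 'secret'}
+
+            def authenticate(self, attributes):
+                user = str(attributes['user'])
+                password = str(attributes['pass'])
+                if self.users.get(user) == password:
+                    # 281 means success; the optional third element is the
+                    # username to log and to match against access groups.
+                    return (281, '', user)
+                # 502 means failure; the string is returned to the client.
+                return (502, 'Authentication failed')
+
+            def authen_close(self):
+                pass
+
+    The instance still has to be registered with "nnrpd.set_auth_hook", as
+    shown in "How to Use these Methods with nnrpd" below.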
+ + The argument dictionary passed to "authenticate" remains the same, + except for the removal of the *type* entry which is no longer needed in + this modification and the addition of several new entries (*port*, + *intipaddr*, *intport*) described below. The return tuple now only + contains either two or three elements, the first of which is the NNTP + response code. The second is an error string which is passed to the + client if the response code indicates that the authentication attempt + has failed. This allows a specific error message to be generated by the + Python script in place of the generic message "Authentication failed". + An optional third return element, if present, will be used to match the + connection with the *user* parameter in access groups and will also be + the username logged. If this element is absent, the username supplied + by the client during authentication will be used, as was the previous + behaviour. + + The *python_access* parameter (described below) is new; it allows the + dynamic generation of an access group of an incoming connection using a + Python script. If a connection matches an auth group which has a + *python_access* parameter, all access groups in readers.conf are + ignored; instead the procedure described below is used to generate an + access group. This concept is due to Jeffrey M. Vinocur and you can add + this line to readers.conf in order to use the nnrpd_access.py Python + script in *pathfilter*: + + python_access: "nnrpd_access" + + In the old implementation, the authorization method allowed for access + control on a per-group basis. That functionality is preserved in the + new implementation by the inclusion of the *python_dynamic* parameter in + readers.conf. The only change is the corresponding method name of + "dynamic" as opposed to "authorize". Additionally, the associated + optional housekeeping methods "dynamic_init" and "dynamic_close" may be + implemented if needed. In order to use nnrpd_dynamic.py in + *pathfilter*, you can add this line to readers.conf: + + python_dynamic: "nnrpd_dynamic" + + This new implementation should provide all of the previous capabilities + of the Python hooks, in combination with the flexibility of readers.conf + and the use of other authentication and resolving programs (including + the Perl hooks!). To use Python code that predates the new mechanism, + you would need to modify the code slightly (see below for the new + specification) and supply a simple readers.conf file. If you do not + want to modify your code, the sample directory has + nnrpd_auth_wrapper.py, nnrpd_access_wrapper.py and + nnrpd_dynamic_wrapper.py which should allow you to use your old code + without needing to change it. + + However, before trying to use your old Python code, you may want to + consider replacing it entirely with non-Python authentication. (With + readers.conf and the regular authenticator and resolver programs, much + of what once required Python can be done directly.) Even if the + functionality is not available directly, you may wish to write a new + authenticator or resolver (which can be done in whatever language you + prefer). + + Python Authentication Support for nnrpd + + Support for authentication via Python is provided in nnrpd by the + inclusion of a *python_auth* parameter in a readers.conf auth group. + *python_auth* works exactly like the *auth* parameter in readers.conf, + except that it calls the script given as argument using the Python hook + rather then treating it as an external program. 
Multiple, mixed use of + *python_auth* with other *auth* statements including *perl_auth* is + permitted. Each *auth* statement will be tried in the order they appear + in the auth group until either one succeeds or all are exhausted. + + If the processing of readers.conf requires that a *python_auth* + statement be used for authentication, Python is loaded (if it has yet to + be) and the file given as argument to the *python_auth* parameter is + loaded as well (do not include the ".py" extension of this file in the + value of *python_auth*). If a Python object with a method "authen_init" + is hooked in during the loading of that file, then that method is called + immediately after the file is loaded. If no errors have occurred, the + method "authenticate" is called. Depending on the NNTP response code + returned by "authenticate", the authentication hook either succeeds or + fails, after which the processing of the auth group continues as usual. + When the connection with the client is closed, the method "authen_close" + is called if it exists. + + Dynamic Generation of Access Groups + + A Python script may be used to dynamically generate an access group + which is then used to determine the access rights of the client. This + occurs whenever the *python_access* parameter is specified in an auth + group which has successfully matched the client. Only one + *python_access* statement is allowed in an auth group. This parameter + should not be mixed with a *perl_access* statement in the same auth + group. + + When a *python_access* parameter is encountered, Python is loaded (if it + has yet to be) and the file given as argument is loaded as well (do not + include the ".py" extension of this file in the value of + *python_access*). If a Python object with a method "access_init" is + hooked in during the loading of that file, then that method is called + immediately after the file is loaded. If no errors have occurred, the + method "access" is called. The dictionary returned by "access" is used + to generate an access group that is then used to determine the access + rights of the client. When the connection with the client is closed, + the method "access_close" is called, if it exists. + + While you may include the *users* parameter in a dynamically generated + access group, some care should be taken (unless your pattern is just "*" + which is equivalent to leaving the parameter out). The group created + with the values returned from the Python script is the only one + considered when nnrpd attempts to find an access group matching the + connection. If a *users* parameter is included and it does not match + the connection, then the client will be denied access since there are no + other access groups which could match the connection. + + Dynamic Access Control + + If you need to have access control rules applied immediately without + having to restart all the nnrpd processes, you may apply access control + on a per newsgroup basis using the Python dynamic hooks (as opposed to + readers.conf, which does the same on per user basis). These hooks are + activated through the inclusion of the *python_dynamic* parameter in a + readers.conf auth group. Only one *python_dynamic* statement is allowed + in an auth group. + + When a *python_dynamic* parameter is encountered, Python is loaded (if + it has yet to be) and the file given as argument is loaded as well (do + not include the ".py" extension of this file in the value of + *python_dynamic*). 
If a Python object with a method "dynamic_init" is + hooked in during the loading of that file, then that method is called + immediately after the file is loaded. Every time a reader asks nnrpd to + read or post an article, the Python method "dynamic" is invoked before + proceeding with the requested operation. Based on the value returned by + "dynamic", the operation is either permitted or denied. When the + connection with the client is closed, the method "access_close" is + called if it exists. + + Writing a Python nnrpd Authentication Module + + You need to create a nnrpd_auth.py module in INN's filter directory (see + the *pathfilter* setting in inn.conf) where you should define a class + holding certain methods depending on which hooks you want to use. + + Note that you will have to use different Python scripts for + authentication and access: the values of *python_auth*, *python_access* + and *python_dynamic* have to be distinct for your scripts to work. + + The following methods are known to nnrpd: + + __init__(*self*) + Not explicitly called by nnrpd, but will run whenever the auth + module is loaded. Use this method to initialize any general + variables or open a common database connection. This method may be + omitted. + + authen_init(*self*) + Initialization function specific to authentication. This method may + be omitted. + + authenticate(*self*, *attributes*) + Called when a *python_auth* statement is reached in the processing + of readers.conf. Connection attributes are passed in the + *attributes* dictionary. Returns a response code, an error string, + and an optional string to be used in place of the client-supplied + username (both for logging and for matching the connection with an + access group). + + authen_close(*self*) + This method is invoked on nnrpd termination. You can use it to save + state information or close a database connection. This method may + be omitted. + + access_init(*self*) + Initialization function specific to generation of an access group. + This method may be omitted. + + access(*self*, *attributes*) + Called when a *python_access* statement is reached in the processing + of readers.conf. Connection attributes are passed in the + *attributes* dictionary. Returns a dictionary of values + representing statements to be included in an access group. + + access_close(*self*) + This method is invoked on nnrpd termination. You can use it to save + state information or close a database connection. This method may + be omitted. + + dynamic_init(*self*) + Initialization function specific to dynamic access control. This + method may be omitted. + + dynamic(*self*, *attributes*) + Called when a client requests a newsgroup, an article or attempts to + post. Connection attributes are passed in the *attributes* + dictionary. Returns "None" to grant access, or a non-empty string + (which will be reported back to the client) otherwise. + + dynamic_close(*self*) + This method is invoked on nnrpd termination. You can use it to save + state information or close a database connection. This method may + be omitted. + + The *attributes* Dictionary + + The keys and associated values of the *attributes* dictionary are + described below. + + *type* + "read" or "post" values specify the authentication type; only valid + for the "dynamic" method. + + *hostname* + It is the resolved hostname (or IP address if resolution fails) of + the connected reader. + + *ipaddress* + The IP address of the connected reader. + + *port* + The port of the connected reader. 
+ + *interface* + The hostname of the local endpoint of the NNTP connection. + + *intipaddr* + The IP address of the local endpoint of the NNTP connection. + + *intport* + The port of the local endpoint of the NNTP connection. + + *user* + The username as passed with AUTHINFO command, or "None" if not + applicable. + + *pass* + The password as passed with AUTHINFO command, or "None" if not + applicable. + + *newsgroup* + The name of the newsgroup to which the reader requests read or post + access; only valid for the "dynamic" method. + + All the above values are buffer objects (see the notes above on what + buffer objects are). + + How to Use these Methods with nnrpd + + To register your methods with nnrpd, you need to create an instance of + your class, import the built-in nnrpd module, and pass the instance to + "nnrpd.set_auth_hook". For example: + + class AUTH: + def authen_init(self): + ... + blah blah + ... + + def authenticate(self, attributes): + ... + yadda yadda + ... + + import nnrpd + myauth = AUTH() + nnrpd.set_auth_hook(myauth) + + When writing and testing your Python filter, don't be afraid to make use + of "try:"/"except:" and the provided "nnrpd.syslog" function. stdout + and stderr will be disabled, so your filter will die silently otherwise. + + Also, remember to try importing your module interactively before loading + it, to ensure there are no obvious errors. One typo can ruin your whole + filter. A dummy nnrpd.py module is provided to facilitate testing + outside the server. It is not actually used by nnrpd but provides the + same set of functions as built-in nnrpd module. This stub module may be + used when debugging your own module. To test, change into your filter + directory and use a command like: + + python -ic 'import nnrpd, nnrpd_auth' + + Functions Supplied by the Built-in nnrpd Module + + Besides "nnrpd.set_auth_hook" used to pass a reference to the instance + of authentication and authorization class to nnrpd, the nnrpd built-in + module exports the following function: + + syslog(*level*, *message*) + It is intended to be a replacement for a Python native syslog. It + works like "INN.syslog", seen above. + + $Id: hook-python 7926 2008-06-29 08:27:41Z iulius $ + diff --git a/doc/hook-tcl b/doc/hook-tcl new file mode 100644 index 0000000..14c4f00 --- /dev/null +++ b/doc/hook-tcl @@ -0,0 +1,99 @@ +NOTE: The Tcl support described in this file is disabled. The code is +all still there, but you have to define DO_TCL manually while compiling to +enable it. Compiling in Tcl filtering was causing random innd segfaults +even if no Tcl filters were defined, so it's been turned off to prevent +confusion. + +The Tcl code will be removed in the next major release of INN since no one +appears to be using it and the code is unmaintained and has no champion. +If you want to resurrect it, it may be better to start from scratch, since +a lot has changed about INN since the filters were originally written and +the Perl and Python filters have far more capabilities. + + +Note, you need tcl 7.4. Rumour has it that 7.5 won't work. +--------------------------------------------------------------------------- +Subject: TCL-based Filtering for INN 1.5 +Date: Mon, 07 Feb 94 12:36:47 -0800 +From: Bob Heiney + + +Several times in the past few months, a site or two has started posting +the same article over and over again, but with a different message id. +Usually this is caused by broken software (e.g. mail <-> news gateways, +which many have written, but few have written correctly). 
+Occasionally, however, the reposting is intentional. A recent example +would be the "Global Alert: Jesus Is Coming" message which was posted +to over 2200 newsgroups (each copy with its own message id). + +I expect this to happen more often as the Internet continues its explosive +growth. Although my site (decwrl) usually has enough excess capacity to +weather these problems, many other sites cannot. One problem on +comp.sys.sgi.misc several months ago spewed 40MB of duplicate articles +before the offending sites were fixed, and this overflowed the spool at +many sites. Even for sites with lots of resources, there's still no need +to propagate erroneous or malicious duplicates. + +I wanted a way to protect my site that was highly specific, flexible, and +quick. + +Examination of duplicated articles showed that although the message ids +were different, it was usually easy for a news admin to come up with a +few rules based on the headers of the article that could be used to +differentiate the duplicates from other articles. (E.g. from +John.Doe@foo.com to comp.sys.sgi.misc with 'foobar' in the subject".) +I concluded that modifying innd to let me say "kill things that look +like _this_" would solve my problem. + +I also wanted to allow enough flexibilty in the design that I could +later work on automatic detection and elimination of excessive +duplicates (using a body checksum instead of headers). + +Since I needed a fairly powerful language to do all this, and since the +world doesn't need yet another special language, my solution was to add TCL +support to INN. I then modified "ARTpost" to call a TCL procedure which +could then accept or reject the article. The TCL code has access to an +associative array called "Headers", which contains all of the articles +headers. The TCL code may also call a 32-bit article-body checksum +procedure (this is to aid in future automatic detection of duplicates). + +Here's what a sample TCL filter procedure looks like: + +proc filter_news {} { + global o Headers + set sum [checksum_article] + puts $o "$Headers(Message-ID) $sum" + set newsgroups [split $Headers(Newsgroups) ,] + foreach i $newsgroups { + if {$i=="alt.test" && [string match "*heiney@pa.dec.com*" $Headers(From)]} { + return "dont like alt.test from heiney" + } + } + return "accept" +} + +The above TCL code does a few things. First it computes a 32-bit +checksum and writes it and the message ID to a file. It then rejects +articles from me to alt.test. + +The work I've done is totally integrated into the INN build and runtime +environments. For example, to turn filtering off, you'd just type + + ctlinnd filter n + +To reload the TCL code that does the filtering, you just say + + ctlinnd reload filter.tcl 'your comment here' + +(You may specify TCL callbacks to be executed right before and/or right +after reloading, in case your filter is doing fancy stuff.) See the +ctlinnd man page for more info. + +Filtering capability that's this powerful can be used for many +purposes, some benign and useful (excessive duplicate detections, +on-the-fly statistics), others abusive. I would ask that news admins +think carefully about any filtering they do. + +/Bob + + diff --git a/doc/man/Makefile b/doc/man/Makefile new file mode 100644 index 0000000..3e1a758 --- /dev/null +++ b/doc/man/Makefile @@ -0,0 +1,66 @@ +## $Id: Makefile 7458 2005-12-12 00:25:05Z eagle $ + +include ../../Makefile.global + +top = ../.. + +## Edit these if you need to. 
+MANFLAGS = -c $(OWNER) -m 0444 -B .OLD + +SEC1 = convdate.1 fastrm.1 getlist.1 grephistory.1 inews.1 innconfval.1 \ + innfeed.1 innmail.1 nntpget.1 pgpverify.1 pullnews.1 rnews.1 \ + shlock.1 shrinkfile.1 simpleftp.1 sm.1 startinnfeed.1 + +SEC3 = clientlib.3 dbz.3 inndcomm.3 libauth.3 libinn.3 libinnhist.3 \ + libstorage.3 list.3 parsedate.3 qio.3 tst.3 uwildmat.3 + +SEC5 = active.5 active.times.5 buffindexed.conf.5 control.ctl.5 \ + cycbuff.conf.5 distrib.pats.5 expire.ctl.5 history.5 incoming.conf.5 \ + inn.conf.5 innfeed.conf.5 innwatch.ctl.5 moderators.5 motd.news.5 \ + newsfeeds.5 nnrpd.track.5 newslog.5 nntpsend.ctl.5 ovdb.5 \ + overview.fmt.5 passwd.nntp.5 radius.conf.5 readers.conf.5 sasl.conf.5 \ + storage.conf.5 subscriptions.5 + +SEC8 = actsync.8 actsyncd.8 archive.8 auth_smb.8 batcher.8 buffchan.8 \ + ckpasswd.8 cnfsheadconf.8 cnfsstat.8 controlchan.8 ctlinnd.8 \ + cvtbatch.8 domain.8 expire.8 expireover.8 expirerm.8 filechan.8 \ + ident.8 inncheck.8 innd.8 inndf.8 inndstart.8 innreport.8 innstat.8 \ + innupgrade.8 innwatch.8 innxbatch.8 innxmit.8 mailpost.8 makedbz.8 \ + makehistory.8 mod-active.8 news.daily.8 news2mail.8 ninpaths.8 \ + nnrpd.8 nntpsend.8 ovdb_init.8 ovdb_monitor.8 ovdb_server.8 \ + ovdb_stat.8 overchan.8 perl-nocem.8 prunehistory.8 radius.8 \ + rc.news.8 scanlogs.8 send-nntp.8 send-uucp.8 sendinpaths.8 \ + tally.control.8 tdx-util.8 writelog.8 + +COPY = $(SHELL) ./putman.sh $(MANPAGESTYLE) "$(MANFLAGS)" + +all: +clobber clean distclean: +tags ctags: +profiled: + +install: install-man1 install-man3 install-man5 install-man8 + +install-man1: + for M in $(SEC1) ; do \ + $(COPY) $$M $D$(MAN1)/$$M ; \ + done + +install-man3: + for M in $(SEC3) ; do \ + $(COPY) $$M $D$(MAN3)/$$M ; \ + done + +install-man5: + for M in $(SEC5) ; do \ + $(COPY) $$M $D$(MAN5)/$$M ; \ + done + +# auth_krb5 is conditionally compiled, so handle it specially. +install-man8: + for M in $(SEC8) ; do \ + $(COPY) $$M $D$(MAN8)/$$M ; \ + done + if [ x"$(KRB5_AUTH)" != x ] ; then \ + $(COPY) auth_krb5.8 $D$(MAN8)/auth_krb5.8 ; \ + fi diff --git a/doc/man/active.5 b/doc/man/active.5 new file mode 100644 index 0000000..97a0d62 --- /dev/null +++ b/doc/man/active.5 @@ -0,0 +1,221 @@ +.\" Automatically generated by Pod::Man v1.37, Pod::Parser v1.32 +.\" +.\" Standard preamble: +.\" ======================================================================== +.de Sh \" Subsection heading +.br +.if t .Sp +.ne 5 +.PP +\fB\\$1\fR +.PP +.. +.de Sp \" Vertical space (when we can't use .PP) +.if t .sp .5v +.if n .sp +.. +.de Vb \" Begin verbatim text +.ft CW +.nf +.ne \\$1 +.. +.de Ve \" End verbatim text +.ft R +.fi +.. +.\" Set up some character translations and predefined strings. \*(-- will +.\" give an unbreakable dash, \*(PI will give pi, \*(L" will give a left +.\" double quote, and \*(R" will give a right double quote. \*(C+ will +.\" give a nicer C++. Capital omega is used to do unbreakable dashes and +.\" therefore won't be available. \*(C` and \*(C' expand to `' in nroff, +.\" nothing in troff, for use with C<>. +.tr \(*W- +.ds C+ C\v'-.1v'\h'-1p'\s-2+\h'-1p'+\s0\v'.1v'\h'-1p' +.ie n \{\ +. ds -- \(*W- +. ds PI pi +. if (\n(.H=4u)&(1m=24u) .ds -- \(*W\h'-12u'\(*W\h'-12u'-\" diablo 10 pitch +. if (\n(.H=4u)&(1m=20u) .ds -- \(*W\h'-12u'\(*W\h'-8u'-\" diablo 12 pitch +. ds L" "" +. ds R" "" +. ds C` "" +. ds C' "" +'br\} +.el\{\ +. ds -- \|\(em\| +. ds PI \(*p +. ds L" `` +. 
ds R" '' +'br\} +.\" +.\" If the F register is turned on, we'll generate index entries on stderr for +.\" titles (.TH), headers (.SH), subsections (.Sh), items (.Ip), and index +.\" entries marked with X<> in POD. Of course, you'll have to process the +.\" output yourself in some meaningful fashion. +.if \nF \{\ +. de IX +. tm Index:\\$1\t\\n%\t"\\$2" +.. +. nr % 0 +. rr F +.\} +.\" +.\" For nroff, turn off justification. Always turn off hyphenation; it makes +.\" way too many mistakes in technical documents. +.hy 0 +.if n .na +.\" +.\" Accent mark definitions (@(#)ms.acc 1.5 88/02/08 SMI; from UCB 4.2). +.\" Fear. Run. Save yourself. No user-serviceable parts. +. \" fudge factors for nroff and troff +.if n \{\ +. ds #H 0 +. ds #V .8m +. ds #F .3m +. ds #[ \f1 +. ds #] \fP +.\} +.if t \{\ +. ds #H ((1u-(\\\\n(.fu%2u))*.13m) +. ds #V .6m +. ds #F 0 +. ds #[ \& +. ds #] \& +.\} +. \" simple accents for nroff and troff +.if n \{\ +. ds ' \& +. ds ` \& +. ds ^ \& +. ds , \& +. ds ~ ~ +. ds / +.\} +.if t \{\ +. ds ' \\k:\h'-(\\n(.wu*8/10-\*(#H)'\'\h"|\\n:u" +. ds ` \\k:\h'-(\\n(.wu*8/10-\*(#H)'\`\h'|\\n:u' +. ds ^ \\k:\h'-(\\n(.wu*10/11-\*(#H)'^\h'|\\n:u' +. ds , \\k:\h'-(\\n(.wu*8/10)',\h'|\\n:u' +. ds ~ \\k:\h'-(\\n(.wu-\*(#H-.1m)'~\h'|\\n:u' +. ds / \\k:\h'-(\\n(.wu*8/10-\*(#H)'\z\(sl\h'|\\n:u' +.\} +. \" troff and (daisy-wheel) nroff accents +.ds : \\k:\h'-(\\n(.wu*8/10-\*(#H+.1m+\*(#F)'\v'-\*(#V'\z.\h'.2m+\*(#F'.\h'|\\n:u'\v'\*(#V' +.ds 8 \h'\*(#H'\(*b\h'-\*(#H' +.ds o \\k:\h'-(\\n(.wu+\w'\(de'u-\*(#H)/2u'\v'-.3n'\*(#[\z\(de\v'.3n'\h'|\\n:u'\*(#] +.ds d- \h'\*(#H'\(pd\h'-\w'~'u'\v'-.25m'\f2\(hy\fP\v'.25m'\h'-\*(#H' +.ds D- D\\k:\h'-\w'D'u'\v'-.11m'\z\(hy\v'.11m'\h'|\\n:u' +.ds th \*(#[\v'.3m'\s+1I\s-1\v'-.3m'\h'-(\w'I'u*2/3)'\s-1o\s+1\*(#] +.ds Th \*(#[\s+2I\s-2\h'-\w'I'u*3/5'\v'-.3m'o\v'.3m'\*(#] +.ds ae a\h'-(\w'a'u*4/10)'e +.ds Ae A\h'-(\w'A'u*4/10)'E +. \" corrections for vroff +.if v .ds ~ \\k:\h'-(\\n(.wu*9/10-\*(#H)'\s-2\u~\d\s+2\h'|\\n:u' +.if v .ds ^ \\k:\h'-(\\n(.wu*10/11-\*(#H)'\v'-.4m'^\v'.4m'\h'|\\n:u' +. \" for low resolution devices (crt and lpr) +.if \n(.H>23 .if \n(.V>19 \ +\{\ +. ds : e +. ds 8 ss +. ds o a +. ds d- d\h'-1'\(ga +. ds D- D\h'-1'\(hy +. ds th \o'bp' +. ds Th \o'LP' +. ds ae ae +. ds Ae AE +.\} +.rm #[ #] #H #V #F C +.\" ======================================================================== +.\" +.IX Title "ACTIVE 5" +.TH ACTIVE 5 "2008-04-06" "INN 2.4.5" "InterNetNews Documentation" +.SH "NAME" +active \- List of newsgroups carried by the server +.SH "DESCRIPTION" +.IX Header "DESCRIPTION" +The file \fIpathdb\fR/active lists the newsgroups carried by \s-1INN\s0. This file +is generally maintained using \fIctlinnd\fR\|(8) to create and remove groups, or +by letting \fIcontrolchan\fR\|(8) do so on the basis of received control messages. +This file should not be edited directly without throttling \fBinnd\fR, and +must be reloaded using \fBctlinnd\fR before \fBinnd\fR is unthrottled. Editing +it directly even with those precautions may make it inconsistent with the +overview database and won't update \fIactive.times\fR, so \fBctlinnd\fR should +be used to make modifications whenever possible. +.PP +Each newsgroup should be listed only once. Each line specifies one group. +The order of groups does not matter. Within each newsgroup, received +articles for that group are assigned monotonically increasing numbers as +unique names. If an article is posted to newsgroups not mentioned in this +file, those newsgroups are ignored. 
+.PP +If none of the newsgroups listed in the Newsgroups header of an article +are present in this file, the article is either rejected (if \fIwanttrash\fR +is false in \fIinn.conf\fR), or is filed into the newsgroup \f(CW\*(C`junk\*(C'\fR and only +propagated to sites that receive the \f(CW\*(C`junk\*(C'\fR newsgroup (if \fIwanttrash\fR is +true). +.PP +Each line of this file consists of four fields separated by a space: +.PP +.Vb 1 +\& +.Ve +.PP +The first field is the name of the newsgroup. The newsgroup \f(CW\*(C`junk\*(C'\fR is +special, as mentioned above. The newsgroup \f(CW\*(C`control\*(C'\fR and any newsgroups +beginning with \f(CW\*(C`control.\*(C'\fR are also special; control messages are filed +into a control.* newsgroup named after the type of control message if that +group exists, and otherwise are filed into the newsgroup \f(CW\*(C`control\*(C'\fR +(without regard to what newsgroups are listed in the Newsgroups header). +If \fImergetogroups\fR is set to true in \fIinn.conf\fR, newsgroups that begin +with \f(CW\*(C`to.\*(C'\fR are also treated specially; see \fIinnd\fR\|(8). +.PP +The second field is the highest article number that has been used in that +newsgroup. The third field is the lowest article number in the group; +this number is not guaranteed to be accurate, and should only be taken to +be a hint. It is normally updated nightly as part of the expire process; +see \fInews.daily\fR\|(8) and look for \f(CW\*(C`lowmark\*(C'\fR or \f(CW\*(C`renumber\*(C'\fR for more details. +Note that because of article cancellations, there may be gaps in the +numbering sequence. If the lowest article number is greater then the +highest article number, then there are no articles in the newsgroup. In +order to make it possible to update an entry in-place without rewriting +the entire file, the second and third fields are padded out with leading +zeros to make them a fixed width. +.PP +The fourth field contains one of the following flags: +.PP +.Vb 6 +\& y Local postings are allowed. +\& m The group is moderated and all postings must be approved. +\& n No local postings are allowed, only articles from peers. +\& j Articles are filed in the junk group instead. +\& x No local postings and ignored for articles from peers. +\& =foo.bar Articles are filed in the group foo.bar instead. +.Ve +.PP +If a newsgroup has the \f(CW\*(C`j\*(C'\fR flag, no articles will be filed in that +newsgroup, and local postings to that group will be rejected. If an +article for that newsgroup is received from a remote site, and it is not +crossposted to some other valid group, it will be filed into the \f(CW\*(C`junk\*(C'\fR +newsgroup instead. This is different than simply not listing the group, +since the article will still be accepted and can be propagated to other +sites, and the \f(CW\*(C`junk\*(C'\fR group can be made available to readers if wished. +.PP +If the field begins with an equal sign, the newsgroup is an alias. +Articles cannot be posted to that newsgroup, but they can be received from +other sites. Any articles received from peers for that newsgroup are +treated as if they were actually posted to the group named after the equal +sign. Note that the Newsgroups header of the articles are not modified. +(Alias groups are typically used during a transition and are typically +created manually with \fIctlinnd\fR\|(8).) An alias should not point to another +alias. +.SH "HISTORY" +.IX Header "HISTORY" +Written by Rich \f(CW$alz\fR for InterNetNews. Converted to +\&\s-1POD\s0 by Russ Allbery . 
+.PP +$Id: active.5 7880 2008-06-16 20:37:13Z iulius $ +.SH "SEE ALSO" +.IX Header "SEE ALSO" +\&\fIactive.times\fR\|(5), \fIcontrolchan\fR\|(8), \fIctlinnd\fR\|(8), \fIinn.conf\fR\|(5), \fIinnd\fR\|(8), +\&\fInews.daily\fR\|(8) diff --git a/doc/man/active.times.5 b/doc/man/active.times.5 new file mode 100644 index 0000000..2b95ac5 --- /dev/null +++ b/doc/man/active.times.5 @@ -0,0 +1,163 @@ +.\" Automatically generated by Pod::Man v1.37, Pod::Parser v1.32 +.\" +.\" Standard preamble: +.\" ======================================================================== +.de Sh \" Subsection heading +.br +.if t .Sp +.ne 5 +.PP +\fB\\$1\fR +.PP +.. +.de Sp \" Vertical space (when we can't use .PP) +.if t .sp .5v +.if n .sp +.. +.de Vb \" Begin verbatim text +.ft CW +.nf +.ne \\$1 +.. +.de Ve \" End verbatim text +.ft R +.fi +.. +.\" Set up some character translations and predefined strings. \*(-- will +.\" give an unbreakable dash, \*(PI will give pi, \*(L" will give a left +.\" double quote, and \*(R" will give a right double quote. \*(C+ will +.\" give a nicer C++. Capital omega is used to do unbreakable dashes and +.\" therefore won't be available. \*(C` and \*(C' expand to `' in nroff, +.\" nothing in troff, for use with C<>. +.tr \(*W- +.ds C+ C\v'-.1v'\h'-1p'\s-2+\h'-1p'+\s0\v'.1v'\h'-1p' +.ie n \{\ +. ds -- \(*W- +. ds PI pi +. if (\n(.H=4u)&(1m=24u) .ds -- \(*W\h'-12u'\(*W\h'-12u'-\" diablo 10 pitch +. if (\n(.H=4u)&(1m=20u) .ds -- \(*W\h'-12u'\(*W\h'-8u'-\" diablo 12 pitch +. ds L" "" +. ds R" "" +. ds C` "" +. ds C' "" +'br\} +.el\{\ +. ds -- \|\(em\| +. ds PI \(*p +. ds L" `` +. ds R" '' +'br\} +.\" +.\" If the F register is turned on, we'll generate index entries on stderr for +.\" titles (.TH), headers (.SH), subsections (.Sh), items (.Ip), and index +.\" entries marked with X<> in POD. Of course, you'll have to process the +.\" output yourself in some meaningful fashion. +.if \nF \{\ +. de IX +. tm Index:\\$1\t\\n%\t"\\$2" +.. +. nr % 0 +. rr F +.\} +.\" +.\" For nroff, turn off justification. Always turn off hyphenation; it makes +.\" way too many mistakes in technical documents. +.hy 0 +.if n .na +.\" +.\" Accent mark definitions (@(#)ms.acc 1.5 88/02/08 SMI; from UCB 4.2). +.\" Fear. Run. Save yourself. No user-serviceable parts. +. \" fudge factors for nroff and troff +.if n \{\ +. ds #H 0 +. ds #V .8m +. ds #F .3m +. ds #[ \f1 +. ds #] \fP +.\} +.if t \{\ +. ds #H ((1u-(\\\\n(.fu%2u))*.13m) +. ds #V .6m +. ds #F 0 +. ds #[ \& +. ds #] \& +.\} +. \" simple accents for nroff and troff +.if n \{\ +. ds ' \& +. ds ` \& +. ds ^ \& +. ds , \& +. ds ~ ~ +. ds / +.\} +.if t \{\ +. ds ' \\k:\h'-(\\n(.wu*8/10-\*(#H)'\'\h"|\\n:u" +. ds ` \\k:\h'-(\\n(.wu*8/10-\*(#H)'\`\h'|\\n:u' +. ds ^ \\k:\h'-(\\n(.wu*10/11-\*(#H)'^\h'|\\n:u' +. ds , \\k:\h'-(\\n(.wu*8/10)',\h'|\\n:u' +. ds ~ \\k:\h'-(\\n(.wu-\*(#H-.1m)'~\h'|\\n:u' +. ds / \\k:\h'-(\\n(.wu*8/10-\*(#H)'\z\(sl\h'|\\n:u' +.\} +. \" troff and (daisy-wheel) nroff accents +.ds : \\k:\h'-(\\n(.wu*8/10-\*(#H+.1m+\*(#F)'\v'-\*(#V'\z.\h'.2m+\*(#F'.\h'|\\n:u'\v'\*(#V' +.ds 8 \h'\*(#H'\(*b\h'-\*(#H' +.ds o \\k:\h'-(\\n(.wu+\w'\(de'u-\*(#H)/2u'\v'-.3n'\*(#[\z\(de\v'.3n'\h'|\\n:u'\*(#] +.ds d- \h'\*(#H'\(pd\h'-\w'~'u'\v'-.25m'\f2\(hy\fP\v'.25m'\h'-\*(#H' +.ds D- D\\k:\h'-\w'D'u'\v'-.11m'\z\(hy\v'.11m'\h'|\\n:u' +.ds th \*(#[\v'.3m'\s+1I\s-1\v'-.3m'\h'-(\w'I'u*2/3)'\s-1o\s+1\*(#] +.ds Th \*(#[\s+2I\s-2\h'-\w'I'u*3/5'\v'-.3m'o\v'.3m'\*(#] +.ds ae a\h'-(\w'a'u*4/10)'e +.ds Ae A\h'-(\w'A'u*4/10)'E +. 
\" corrections for vroff +.if v .ds ~ \\k:\h'-(\\n(.wu*9/10-\*(#H)'\s-2\u~\d\s+2\h'|\\n:u' +.if v .ds ^ \\k:\h'-(\\n(.wu*10/11-\*(#H)'\v'-.4m'^\v'.4m'\h'|\\n:u' +. \" for low resolution devices (crt and lpr) +.if \n(.H>23 .if \n(.V>19 \ +\{\ +. ds : e +. ds 8 ss +. ds o a +. ds d- d\h'-1'\(ga +. ds D- D\h'-1'\(hy +. ds th \o'bp' +. ds Th \o'LP' +. ds ae ae +. ds Ae AE +.\} +.rm #[ #] #H #V #F C +.\" ======================================================================== +.\" +.IX Title "ACTIVE.TIMES 5" +.TH ACTIVE.TIMES 5 "2008-04-06" "INN 2.4.5" "InterNetNews Documentation" +.SH "NAME" +active.times \- List of local creation times of newsgroups +.SH "DESCRIPTION" +.IX Header "DESCRIPTION" +The file \fIpathdb\fR/active.times provides a chronological record of when +newsgruops were created on the local server. This file is normally +updated by \fBinnd\fR whenever a newgroup control message is processed or a +\&\f(CW\*(C`ctlinnd newgroup\*(C'\fR command is issued, and is used by \fBnnrpd\fR to answer +\&\s-1NEWGROUPS\s0 requests. +.PP +Each line consists of three fields: +.PP +.Vb 1 +\&