sfs_config -- System-wide configuration parameters
sfsrwsd_config -- File server configuration
sfsauthd_config -- User-authentication daemon configuration
sfs_users -- User-authentication database
sfssd_config -- Meta-server configuration
sfs_srp_params -- Default parameters for SRP protocol
sfscd_config -- Meta-client configuration
SFS is a network file system that lets you access your files from anywhere and share them with anyone, anywhere. SFS was designed with three goals in mind. One of them is a single global namespace: every client mounts all remote file systems under the directory /sfs, and the contents of that directory are identical on every client in the world. Clients have no notion of administrative realm and no site-specific configuration options. Servers grant access to users, not to clients. Thus, users can access their files wherever they go, from any machine they trust that runs the SFS client software.
SFS achieves these goals by separating key management from file system security. It names file systems by the equivalent of their public keys. Every remote file server is mounted under a directory of the form:
/sfs/Location:HostID
Location is a DNS hostname or an IP address. HostID is a collision-resistant cryptographic hash of Location and the file server's public key. This naming scheme lets an SFS client authenticate a server given only a file name, freeing the client from any reliance on external key management mechanisms. SFS calls the directories on which it mounts file servers self-certifying pathnames.
Self-certifying pathnames let users authenticate servers through a number of different techniques. As a secure, global file system, SFS itself provides a convenient key management infrastructure. Symbolic links let the file namespace double as a key certification namespace. Thus, users can realize many key management schemes using only standard file utilities. Moreover, self-certifying pathnames let people bootstrap one key management mechanism using another, making SFS far more versatile than any file system with built-in key management.
Through a modular implementation, SFS also pushes user authentication out of the file system. Untrusted user processes transparently authenticate users to remote file servers as needed, using protocols opaque to the file system itself.
Finally, SFS separates key revocation from key distribution. Thus, the flexibility SFS provides in key management in no way hinders recovery from compromised keys.
No caffeine was used in the production of the SFS software.
This section describes how to build and install SFS on your system. If you are too impatient to read the details, be aware of the two most important points: your operating system needs solid NFS3 support (see Requirements), and before installing SFS you must create an sfs user and an sfs group on your system (see the --with-sfsuser and --with-sfsgroup configure options to use names other than sfs).
SFS should run with minimal porting on any system that has solid NFS3 support. We have run SFS successfully on OpenBSD 2.6, FreeBSD 3.3, OSF/1 4.0, and Solaris 5.7.
We have also run SFS with some success on Linux. However, you need a kernel with NFS3 support to run SFS on Linux. The SFS on Linux web page has information on installing an SFS-capable Linux kernel.
In order to compile SFS, you will need the following:

A C++ compiler. gcc-2.95.2 or later is strongly recommended (see Build Problems).

The GNU multiprecision library, gmp. In particular, you need the gmp.h header file. Even if you have libgmp.so, if you don't have /usr/include/gmp.h, you need to install gmp on your system.

System header files in /usr/include that match the kernel you are running. Particularly on Linux, where the kernel and user-land utilities are separately maintained, it is easy to patch the kernel without installing the correspondingly patched system header files in /usr/include. SFS needs to see the patched header files to compile properly.
Once you have set up your system as described in Requirements, you are ready to build SFS.

First, create a user and a group named sfs. For instance, you might add the following line to /etc/passwd:

sfs:*:71:71:Self-certifying file system:/:/bin/true

And the following line to /etc/group:

sfs:*:71:
Do not put any users in the sfs group, not even root. Any user in the sfs group will not be able to make regular use of the SFS file system. Moreover, having unprivileged users in the sfs group causes a security hole.
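On systems that provide the standard groupadd and useradd utilities (most Linux and Solaris installations do), an equivalent setup might look like the following sketch. The numeric ID 71 is simply the value from the example above; any unused ID will do:

% groupadd -g 71 sfs
% useradd -u 71 -g sfs -d / -s /bin/true -c "Self-certifying file system" sfs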
Next, unpack the SFS distribution:

% gzip -dc sfs-0.5.tar.gz | tar xvf -
% cd sfs-0.5

If you determined that you need gmp (see Requirements), you should unpack gmp into the top level of the SFS source tree:
% gzip -dc ../gmp-2.0.2.tar.gz | tar xvf -
Set the CC and CXX environment variables to point to the C and C++ compilers you wish to use to compile SFS. Unless you are using OpenBSD-2.6, your operating system will not come with a recent enough version of gcc (see Requirements).

Then run ./configure. You may additionally specify the following options:
--with-sfsuser=sfs-user
Specifies the name of the unprivileged user that SFS runs as. The default is sfs. Do not use an existing account for sfs-user--even a trusted account--as processes running with that user ID will not be able to access SFS. [Note: If you later change your mind about sfs-user, you do not need to recompile SFS; see sfs_config.]

--with-sfsgroup=sfs-group
Specifies the name of the SFS group. The default is the same as sfs-user.

--with-gmp=gmp-path
Specifies the directory in which configure should look for gmp (for example, gmp-path might be /usr/local).

--with-sfsdir=sfsdir
Specifies the directory in which SFS keeps its working files. The default is /var/sfs. [You can change this later; see sfs_config.]

--with-etcdir=etcdir
Specifies the directory in which SFS looks for configuration files. The default is /etc/sfs.
configure
accepts all the traditional GNU configuration
options such as --prefix
. It also has several options that are
only for developers. Do not use the --enable-repo
or
--enable-shlib
options (unless you are a gcc maintainer
looking for some wicked test cases for your compiler).
Run make to build the software.

Then run make install. If you are short on disk space, you can alternatively install stripped binaries by running make install-strip.

Finally, to run the SFS client, start sfscd.
The most common problem you will encounter is an internal compiler error from gcc. If you are not running gcc-2.95.2 or later, you will very likely experience internal compiler errors when building SFS and will need to upgrade the compiler. You must run make clean after upgrading the compiler. You cannot link object files together if they have been created by different versions of the C++ compiler.
On OSF/1 for the alpha, certain functions using a gcc extension called
__attribute__((noreturn))
tend to cause internal compiler errors.
If you experience internal compiler errors when compiling SFS for the
alpha, try building with the command make
ECXXFLAGS='-D__attribute__\(x\)='
instead of simply make
.
Sometimes, a particular source file will give particularly stubborn
internal compiler errors on some architectures. These can be very hard
to work around by just modifying the SFS source code. If you get an
internal compiler error you cannot obviously fix, try compiling the
particular source file with a different level of debugging. (For
example, using a command like make sfsagent.o CXXDEBUG=-g
in the
appropriate subdirectory.)
If your /tmp
file system is too small, you may also end up
running out of temporary disk space while compiling SFS. Set your
TMPDIR
environment variable to point to a directory on a file
system with more free space (e.g., /var/tmp
).
You may need to increase your heap size for the compiler to work. If
you use a csh-derived shell, run the command unlimit datasize
.
If you use a Bourne-like shell, run ulimit -d `ulimit -H -d`
.
On some operating systems, some versions of GMP do not install the
library properly. If you get linker errors about symbols with names
like ___gmp_default_allocate
, try running the command
ranlib /usr/local/lib/libgmp.a
(substituting wherever your GMP library is installed for
/usr/local
).
This chapter gives a brief overview of how to set up an SFS client and server once you have compiled and installed the software.
SFS clients require no configuration. Simply run the program
sfscd
, and a directory /sfs
should appear on your
system. To test your client, access our SFS test server. Type the
following commands:
% cd /sfs/sfs.fs.net:eu4cvv6wcnzscer98yn4qjpjnn9iv6pi
% cat CONGRATULATIONS
You have set up a working SFS client.
%
Note that the /sfs/sfs.fs.net:...
directory does not need to
exist before you run the cd
command. SFS transparently mounts
new servers as you access them.
Setting up an SFS server is a slightly more complicated process. You must perform at least three steps:

1. Create a public/private key pair for the server.
2. Create an /etc/sfs/sfsrwsd_config configuration file.
3. Export the file systems you wish to serve to localhost via NFS version 3.

To create a public/private key pair for your server, run the commands:

% mkdir /etc/sfs
% sfskey gen -P /etc/sfs/sfs_host_key
Then you must create an /etc/sfs/sfsrwsd_config
file based on
which local directories you wish to export and what names those
directories should have on clients. This information takes the form of
one or more Export
directives in the configuration file. Each
export directive is a line of the form:
Export local-directory sfs-name
local-directory is the name of a local directory on your system
you wish to export. sfs-name is the name you wish that directory
to have in SFS, relative to the previous Export
directives.
The sfs-name of the first Export
directive must be
/
. Subsequent sfs-names must correspond to pathnames that
already exist in the previously exported directories.
Suppose, for instance, that you wish to export two directories,
/disk/u1
and /disk/u2
as /usr1
and /usr2
,
respectively. You should create a directory to be the root of the
exported namespace, say /var/sfs/root
, create the necessary
sfs-name subdirectories, and create a corresponding
sfsrwsd_config
file. You might run the following commands to do
this:
% mkdir /var/sfs/root
% mkdir /var/sfs/root/usr1
% mkdir /var/sfs/root/usr2
and create the following sfsrwsd_config
file:
Export /var/sfs/root /
Export /disk/u1 /usr1
Export /disk/u2 /usr2
Finally, you must export all the local-directorys in your
sfsrwsd_config
to localhost
via NFS version 3. The
details of doing this depend heavily on your operating system. For
instance, in OpenBSD you must add the following lines to the file
/etc/exports
and run the command kill -HUP `cat
/var/run/mountd.pid`
:
/var/sfs/root localhost
/disk/u1 localhost
/disk/u2 localhost
On Linux, the syntax for the exports file is:
/var/sfs/root localhost(rw)
/disk/u1 localhost(rw)
/disk/u2 localhost(rw)
On Solaris, add the following lines to the file /etc/dfs/dfstab
and run exportfs -a
:
share -F nfs -o rw=localhost /var/sfs/root
share -F nfs -o rw=localhost /disk/u1
share -F nfs -o rw=localhost /disk/u2
In general, the procedure for exporting NFS file systems varies greatly
between operating systems. Check your operating system's NFS
documentation for details. (The manual page for mountd
is a
good place to start.)
Once you have generated a host key, created an sfsrwsd_config
file, and reconfigured your NFS server, you can start the SFS server by
running sfssd
. Note that a lot can go wrong in setting up an
SFS server. Thus, we recommend that you first run sfssd -d
. The
-d
switch will leave sfssd
in the foreground and send
error messages to your terminal. If there are problems, you can then
easily kill sfssd
from your terminal, fix the problems, and
start again. Once things are working, omit the -d
flag;
sfssd
will run in the background and send its output to the
system log.
Note: You will not be able to access an SFS server using the same machine as a client unless you run sfscd with the -l flag (see sfscd). Attempts to SFS-mount a machine on itself will return the error EDEADLK (Resource deadlock avoided).
To access an SFS server, you must first register a public key with the
server, then run the program sfsagent
on your SFS client to
authenticate you.
To register a public key, log into the file server and run the command:
sfskey register
This will create a public/private key pair for you and register it with
the server. (Note that if you already have a public key on another
server, you can reuse that public key by giving sfskey
your
address at that server, e.g., sfskey register
user@other.server.com
.)
After registering your public key with an SFS server, you must run the
sfsagent
program on an SFS client to access the server. On
the client, run the command:
sfsagent user@server
server is the name of the server on which you registered, and
user is your logname on that server. This command does three
things: It runs the sfsagent
program, which persists in the
background to authenticate you to file servers as needed. It fetches
your private key from server and decrypts it using your
passphrase. Finally, it fetches the server's public key, and creates a
symbolic link from /sfs/server
to
/sfs/server:HostID
.
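For instance, using the test server from the client example above, running sfsagent user@sfs.fs.net (where user is a hypothetical account registered on that server) would leave a symbolic link roughly like:

/sfs/sfs.fs.net -> /sfs/sfs.fs.net:eu4cvv6wcnzscer98yn4qjpjnn9iv6pi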
If, after your agent is already running, you wish to fetch a private key from another server or download another server's public key, you can run the command:
sfskey add user@server
In fact, sfsagent
runs this exact command for you when you
initially start it up.
While sfskey
provides a convenient way of obtaining servers'
HostIDs, it is by no means the only way. Once you have access to
one SFS file server, you can store on it symbolic links pointing to
other servers' self-certifying pathnames. If you use the same public
key on all servers, then, you will only need to type your password
once. sfsagent
will automatically authenticate you to
whatever file servers you touch.
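For example, assuming you have a writable directory called links on one server, you might record another server's self-certifying pathname with an ordinary symbolic link (the pathname shown is the SFS test server from the client example; a real HostID comes from the server in question):

% ln -s /sfs/sfs.fs.net:eu4cvv6wcnzscer98yn4qjpjnn9iv6pi links/sfstest

Thereafter, cd links/sfstest takes you to that server without your having to remember its HostID.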
When you are done using SFS, you should run the command
sfskey kill
before logging out. This will kill your sfsagent
process
running in the background and get rid of the private keys it was holding
for you in memory.
[Diagram: On the client, each user runs sfskey and an agent; sfscd spawns sfsrwcd and nfsmounter, and talks to the kernel's NFS3 client. On the server, sfssd spawns sfsrwsd and sfsauthd, and talks to the kernel's NFS3 server. The client-side and server-side programs communicate with each other over the network.]

SFS consists of a number of interacting programs on both the client and the server side.
On the client side, SFS implements a file system by pretending to be an
NFS server and talking to the local operating system's NFS3 client. The
program sfscd
gets run by root (typically at boot time).
sfscd
spawns two other daemons--nfsmounter
and
sfsrwcd
.
nfsmounter
handles the mounting and unmounting of NFS file
systems. In the event that sfscd
dies, nfsmounter
takes over being the NFS server to prevent file system operations from
blocking as it tries to unmount all file systems. Never send
nfsmounter
a SIGKILL
signal (i.e., kill -9
).
nfsmounter
's main purpose is to clean up the mess if any other
part of the SFS client software fails. Whatever bad situation SFS has
gotten your machine into, killing nfsmounter
can only make
matters worse.
sfsrwcd
implements the ordinary read-write file system
protocol. As other dialects of the SFS protocol become available, they
will be implemented as daemons running alongside sfsrwcd
.
Each user of an SFS client machine must run an instance of the
sfsagent
command. sfsagent
serves several purposes.
It handles user authentication as the user touches new file systems. It
can fetch HostIDs on the fly, a mechanism called Dynamic
server authentication. Finally, it can perform revocation checks on
the HostIDs of servers the user accesses, to ensure the user does
not access HostIDs corresponding to compromised private keys.
The sfskey
utility manages both user and server keys. It lets
users control and configure their agents. Users can hand new private
keys to their agents using sfskey
, list keys the agent holds,
and delete keys. sfskey
will fetch keys from remote servers
using SRP (see SRP). It lets users change their public keys on remote
servers. Finally, sfskey
can configure the agent for dynamic
server authentication and revocation checking.
On the server side, the program sfssd
spawns two subsidiary
daemons, sfsrwsd
and sfsauthd
. If virtual hosts or
multiple versions of the software are running, sfssd
may spawn
multiple instances of each daemon. sfssd
listens for TCP
connections on port 4. It then hands each connection off to one of the
subsidiary daemons, depending on the self-certifying pathname and
service requested by the client.
sfsrwsd
is the server-side counterpart to sfsrwcd
.
It communicates with client side sfsrwcd
processes using the
SFS file system protocol, and accesses the local disk by acting as a
client of the local operating system's NFS server. sfsrwsd
is
the one program in SFS that must be configured before you run it (see sfsrwsd_config).
sfsauthd
handles user authentication. It communicates
directly with sfsrwsd
to authenticate users of the file system.
It also accepts connections over the network from sfskey
to
let users download their private keys or change their public keys.
SFS comprises a number of programs, many of which have configuration
files. All programs look for configuration files in two
directories--first /etc/sfs
, then, if they don't find the file
there, in /usr/local/share/sfs
. You can change these locations
using the --with-etcdir
and --with-datadir
options to
the configure
command (see configure).
The SFS software distribution installs reasonable defaults in
/usr/local/share/sfs
for all configuration files except
sfsrwsd_config
. On particular hosts where you wish to change the
default behavior, you can override the default configuration file by
creating a new file of the same name in /etc/sfs
.
The sfs_config
file contains system-wide configuration parameters
for most of the programs comprising SFS. Note that
/usr/local/share/sfs/sfs_config
is always parsed, even if
/etc/sfs/sfs_config
exists. Options in
/etc/sfs/sfs_config
simply override the defaults in
/usr/local/share/sfs/sfs_config
. For the other configuration
files, a file in /etc/sfs
entirely overrides the version in
/usr/local
.
If you are running a server, you will need to create an
sfsrwsd_config
file to tell SFS what directories to export, and
possibly an sfsauthd_config
if you wish to share the database of
user public keys across several file servers.
The sfssd_config
file contains information about which protocols
and services to route to which daemons on an SFS server, including
support for backwards compatibility across several versions of SFS. You
probably don't need to change this file.
sfs_srp_params
contains some cryptographic parameters for
retrieving keys securely over the network with a passphrase (as with the
sfskey add usr@server
command).
sfscd_config
Contains information about extensions to the SFS
protocol and which kinds of file servers to route to which daemons. You
almost certainly should not touch this file unless you are developing
new versions of the SFS software.
Note that configuration command names are case-insensitive in all configuration files (though the arguments are not).
sfs_config -- System-wide configuration parameters

The sfs_config file lets you set the following system-wide parameters:
sfsdir directory
The directory in which SFS stores its working files. The default is /var/sfs, unless you changed this with the --with-sfsdir option to configure.
sfsuser sfs-user [sfs-group]
Specifies the unprivileged user and group that SFS uses. The default sfs-user is sfs, and the default sfs-group is the same as sfs-user. The sfsuser directive lets you supply either a user and group name, or numeric IDs, to change the default. Note: If you change sfs-group, you must make sure the program /usr/local/lib/sfs/suidconnect is setgid to the new sfs-group.
anonuser {user | uid gid}
Specifies the user (or numeric user and group IDs) to use for anonymous requests. The default sfs_config file specifies the user name nobody.
ResvGids low-gid high-gid
Each user of an SFS client runs his or her own sfsagent program. SFS needs to modify processes' group lists so as to know which file system requests correspond to which agents. The ResvGids directive gives SFS a range of group IDs it can use to tag processes corresponding to a particular agent. (Typically, a range of 16 gids should be plenty.) Note that the range is inclusive--both low-gid and high-gid are considered reserved gids.
The setuid root program /usr/local/lib/sfs/newaid
lets users take
on any of these group IDs. Thus, make sure these groups are not used
for anything else, or you will create a security hole. There is no
default for ResvGids
.
PubKeySize bits
Specifies the default size, in bits, of newly generated public keys.
PwdCost cost
Specifies the computational cost of encrypting private keys with user-chosen passwords. Higher values make off-line password guessing more expensive, but if you set cost too high, the sfskey command will be unusable.
LogPriority facility.level
Specifies the syslog facility and level at which SFS daemons log messages. The default is daemon.notice.
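Putting these directives together, a hypothetical /etc/sfs/sfs_config override might look like the following. The values are purely illustrative, not recommendations:

sfsdir /var/sfs
sfsuser sfs sfs
anonuser nobody
ResvGids 15000 15015
PubKeySize 1280
LogPriority daemon.info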
sfsrwsd_config -- File server configuration

Hostname name
Sets the Location part of the server's self-certifying pathname. The default is the machine's own hostname.

Keyfile path
Tells sfsrwsd to look for its private key in file path.
The default is sfs_host_key
. SFS looks for file names that do
not start with /
in /etc/sfs
, or whatever directory you
specified if you used the --with-etcdir
option to
configure
(see configure).
Export local-directory sfs-name [R|W]
Tells sfsrwsd to export local-directory, giving it the
name sfs-name with respect to the server's self-certifying
pathname. Appending R
to an export directive gives anonymous
users read-only access to the file system (under user ID -2 and group ID
-2). Appending W
gives anonymous users both read and write
access. See Quick server setup, for an example of the Export
directive.
There is almost no reason to use the W
flag. The R
flag
lets anyone on the Internet issue NFS calls to your kernel as user -2.
SFS filters these calls; it makes sure that they operate on files
covered by the export directive, and it blocks any calls that would
modify the file system. This approach is safe given a perfect NFS3
implementation. If, however, there are bugs in your NFS code, attackers
may exploit them if you have the R
option--probably just
crashing your server but possibly doing worse.
LeaseTime seconds
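Putting the directives together, a complete sfsrwsd_config for the example in Quick server setup might look like the following sketch. The Hostname value is hypothetical, and the R flag granting anonymous read-only access to /usr1 is purely illustrative:

Hostname server.example.com
Keyfile /etc/sfs/sfs_host_key
Export /var/sfs/root /
Export /disk/u1 /usr1 R
Export /disk/u2 /usr2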
sfsauthd_config -- User-authentication daemon configuration

Hostname name
Sets the Location part of the server's self-certifying pathname. The default is the machine's own hostname.

Keyfile path
Tells sfsauthd to look for its private key in file path.
The default is sfs_host_key
. SFS looks for file names that do
not start with /
in /etc/sfs
, or whatever directory you
specified if you used the --with-etcdir
option to
configure
(see configure).
Userfile [-ro|-reg] [-pub=pubpath] [-mapall=user] path
Specifies a file in which sfsauthd should look for user
public keys when authenticating users. You can specify multiple
Userfile
directives to use multiple files. This can be useful in
an environment where most user accounts are centrally maintained, but a
particular server has a few locally-maintained guest (or root) accounts.
Userfile has the following options:
-ro
Marks the database read-only: sfsauthd will not allow users in a read-only database
to update their public keys. It also assumes that read-only databases
reside on other machines. Thus, it maintains local copies of read-only
databases in /var/sfs/authdb
. This process ensures that
temporarily unavailable file servers never disrupt sfsauthd
's
operation.
-reg
Allows users who do not yet have public keys to register initial keys with sfskey register (see sfskey register). Only one Userfile can have the -reg
option. -reg
and -ro
are mutually exclusive.
-pub=pubpath
sfsauthd
supports the secure remote password protocol, or SRP.
SRP lets users connect securely to sfsauthd
with their
passwords, without needing to remember the server's public key. To
prove its identity through SRP, the server must store secret data
derived from a user's password. The file path specified in
Userfile
contains these secrets for users opting to use SRP. The
-pub
option tells sfsauthd
to maintain in
pubpath a separate copy of the database without secret
information. pubpath might reside on an anonymously readable SFS
file system--other machines can then import the file as a read-only
database using the -ro
option.
-mapall=user
Maps all users in this database to the credentials of the single local user user.
If no Userfile
directive is specified, sfsauthd
uses
the following default (again, unqualified names are assumed to be in
/etc/sfs
):
Userfile -reg -pub=sfs_users.pub sfs_users
SRPfile path
Specifies the file containing default SRP parameters for the server (see sfs_srp_params). The default is sfs_srp_params.
Denyfile path
Specifies a file listing users who are not allowed to register public keys with sfskey register. The default is sfs_deny.
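As an illustration of sharing a user database across servers, the master server might keep the default Userfile shown above and publish sfs_users.pub on an anonymously readable export, while a second server imports that copy read-only and keeps a small local database for guest accounts. The pathnames and HostID placeholder below are hypothetical:

Userfile -ro /sfs/master.example.com:HOSTID/sfs_users.pub
Userfile -reg -pub=sfs_users_local.pub sfs_users_local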
sfs_users -- User-authentication database

The sfs_users
file, maintained and used by the sfsauthd
program, maps public keys to local users. It is roughly analogous to
the Unix /etc/passwd
file. Each line of sfs_users
has the
following format:
user:public-key:credentials:SRP-info:private-key
The credentials field names the local credentials granted to the holder of the key (e.g., dm/root, kaminsky/root, etc.). The SRP-info field holds the server's side of the user's SRP state; a corrupted entry may cause the sfskey add command to fetch the wrong HostID. Note also that SRP-info is specific
to a particular hostname. If you change the Location of a file
server, users will need to register new SRP-info.
The private-key field is opaque to sfsauthd. It is private, per-user data that sfsauthd will return to users who successfully complete the SRP protocol. Currently, sfskey uses this field to store an encrypted copy of a user's private key, allowing the user to retrieve the private key over the network.
sfssd_config -- Meta-server configuration

sfssd_config
configures sfssd
, the server that accepts
connections for sfsrwsd
and sfsauthd
.
sfssd_config
can be used to run multiple "virtual servers", or
to run several versions of the server software for compatibility with
old clients.
Directives are:
BindAddr ip-addr [port]
Specifies the IP address and, optionally, port on which sfssd should listen
for TCP connections. The default is INADDR_ANY
for the address
and port 4.
RevocationDir path
Specifies the directory in which sfssd should search for
revocation/redirection certificates when clients connect to unknown
(potentially revoked) self-certifying pathnames. The default value is
/var/sfs/srvrevoke
. Use the command sfskey revokegen
to
generate revocation certificates.
HashCost bits
Specifies the number of bits of hashcash a client must pay for (by burning CPU time) before the server will accept its connection. This can help mitigate the resource-exhaustion attacks described later in this manual.
Server {* | Location[:HostID]}
The directives that follow, up to the next Server directive, apply to clients connecting to the self-certifying pathname Location:HostID. If :HostID is omitted, then the following lines apply to any
connection that does not match an explicit HostID in another
Server
. The argument *
applies to all clients who do not
have a better match for either Location or HostID.
Release {* | sfs-version}
Applies subsequent Service directives to clients running SFS release sfs-version or older. The argument * signifies arbitrarily large SFS
release numbers. The Release
directive does not do anything on
its own, but applies to all subsequent Service
directives until
the next Release
or Server
directive.
Extensions ext1 [ext2 ...]
Specifies that subsequent Service directives apply only to clients that supply all of the listed extension strings (ext1, ...). An Extensions directive remains in effect until the next Extensions, Release, or Server directive.
Service srvno daemon [arg ...]
Tells sfssd to hand connections requesting service number srvno off to the program daemon, run with the given arguments. The service numbers are:

1. File server
2. Authentication server
3. Remote execution (not yet released)
4. SFS/HTTP (not yet released)
The default contents of sfssd_config
is:
Server *
Release *
Service 1 sfsrwsd
Service 2 sfsauthd
To run a different server for sfs-0.3 and older clients, you could add the lines:
Release 0.3
Service 1 /usr/local/lib/sfs-0.3/sfsrwsd
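As a further illustration, the Server directive can give a virtual host its own file server. In the following sketch, the hostname and the alternate configuration file passed to sfsrwsd with -f are hypothetical:

Server www.example.org
Release *
Service 1 sfsrwsd -f /etc/sfs/sfsrwsd_config.www
Service 2 sfsauthd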
sfs_srp_params -- Default parameters for SRP protocol

This file specifies a "strong prime" and a generator for use in the SRP
protocol. SFS ships with a particular set of parameters because
generating new ones can take a considerable amount of CPU time. You can
replace these parameters with randomly generated ones using the
sfskey srpgen -b bits
command.
Note that SRP parameters can afford to be slightly shorter than Rabin
public keys, both because SRP is based on discrete logs rather than
factoring, and because SRP is used for authentication, not secrecy.
1,024 is a good value for bits even if PubKeySize
is
slightly larger in sfs_config
.
sfscd_config -- Meta-client configuration

The sfscd_config file is really part of the SFS protocol
specification. If you change it, you will no longer be executing the
SFS protocol. Nonetheless, you need to do this to innovate, and SFS was
designed to make implementing new kinds of file systems easy.
sfscd_config
takes the following directives:
Extension string
Tells sfscd to send string to all servers
to advertise that it runs an extension of the protocol. Most servers
will ignore string, but those that support the extension can
pass off the connection to a new "extended" server daemon. You can
specify multiple Extension
directives.
Protocol name daemon [arg ...]
Specifies that pathnames of the form /sfs/name:anything
should be handled by the
client daemon daemon. name may not contain any
non-alphanumeric characters. The Protocol
directive is useful
for implementing file systems that are not mounted on self-certifying pathnames.
Release {* | sfs-version}
Applies subsequent Program directives to servers running SFS release sfs-version or older. The argument * signifies arbitrarily large SFS
release numbers. The Release
directive does not do anything on
its own, but applies to all subsequent Program
directives until
the next Release
directive.
Libdir path
Specifies the directory in which to find daemons whose names do not start with /. The default is
/usr/local/lib/sfs-0.5
. The Libdir
directive
does not do anything on its own, but applies to all subsequent
Program
directives until the next Libdir
or Release
directive.
Program prog.vers daemon [arg ...]
Specifies the daemon to run for the SFS dialect with RPC program number prog and version vers. Every Program
directive must be preceded by a Release
directive.
The default sfscd_config
file is:
Release *
Program 344444.3 sfsrwcd
To run a different set of daemons when talking to sfs-0.3 or older servers, you could add the following lines:
Release 0.3
Libdir /usr/local/lib/sfs-0.3
Program 344444.3 sfsrwcd
sfsagent reference guide

sfsagent is the program users run to authenticate themselves
to remote file servers, to create symbolic links in /sfs
on the
fly, and to look for revocation certificates. Many of the features in
sfsagent
are controlled by the sfskey
program and
described in the sfskey
documentation.
Ordinarily, a user runs sfsagent
at the start of a session.
sfsagent
runs sfskey add
to obtain a private key.
As the user touches each SFS file server for the first time, the agent
authenticates the user to the file server transparently using the
private key it has. At the end of the session, the user should run
sfskey kill
to kill the agent.
The usage is as follows:
sfsagent [-dnkF] -S sock [-c [prog [arg ...]] | keyname]
-d

-n
If you give -n, you must also use the -S option; otherwise your agent will be useless, as there will be no way to communicate with it.

-k
Atomically kills and replaces any agent you are already running. Without -k, if you already have an agent running, sfsagent will refuse to run again.

-F

-S sock
Makes the agent listen for connections from sfskey on the Unix domain socket sock. Ordinarily sfskey connects to the agent through the client file system software, but it can use a named Unix domain socket as well.
-c [prog [arg ...]]
Ordinarily, sfsagent on startup runs the command sfskey
add
giving it whatever -t
option and keyname you
specified. This allows you to fetch your first key as you start or
restart the agent. If you wish to run a different program, you can
specify it using -c
. You might, for instance, wish to run a
shell-script that executes a sfskey add
followed by several
sfskey certprog
commands.
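A minimal sketch of such a script follows; the key name, directories, and filter are hypothetical:

#!/bin/sh
# Fetch the user's primary private key.
sfskey add dm@server.example.com
# Look up HostIDs via symbolic links in a personal directory.
sfskey certprog dirsearch $HOME/.sfs/known_hosts
# Also accept links under /links, but only for names ending .example.org.
sfskey certprog -f '\.example\.org$' dirsearch /links

You could then start (or atomically restart) your agent with a command like sfsagent -c $HOME/.sfs/agentrc, assuming the script is saved as $HOME/.sfs/agentrc and made executable.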
sfsagent
runs the program with the environment variable
SFS_AGENTSOCK
set to -0
and a Unix domain socket on
standard input. Thus, when atomically killing and restarting the agent
using -k
, the commands run by sfsagent
talk to the
new agent and not the old.
If you don't wish to run any program at all when starting
sfsagent
, simply supply the -c
option with no
prog. This will start a new agent that has no private keys.
sfskey reference guide

The sfskey
command performs a variety of key management tasks,
from generating and updating keys to controlling users' SFS agents. The
general usage for sfskey
is:
sfskey [-S sock] [-p pwfd] command [arg ...]
-S
specifies a UNIX domain socket sfskey
can use to
communicate with your sfsagent
socket. If sock begins
with -
, the remainder is interpreted as a file descriptor number.
The default is to use the environment variable SFS_AGENTSOCK
if
that exists. If not, sfskey
asks the file system for a
connection to the agent.
The -p
option specifies a file descriptor from which
sfskey
should read a passphrase, if it needs one, instead of
attempting to read it from the user's terminal. This option may be
convenient for scripts that invoke sfskey
. For operations
that need multiple passphrases, you must specify the -p
option
multiple times, once for each passphrase.
sfskey add [-t [hrs:]min] [keyfile]
sfskey add [-t [hrs:]min] [user]@hostname
add
command loads and decrypts a private key, and gives
the key to your agent. Your agent will use it to try to authenticate
you to any file systems you reference. The -t
option specifies
a timeout after which the agent should forget the private key.
In the first form of the command, the key is loaded from file
keyfile. The default for keyfile, if omitted, is
$HOME/.sfs/identity
.
The second form of the command fetches a private key over the network using the SRP protocol. SRP lets users establish a secure connection to a server without remembering its public key. Instead, to prove their identities to each other, the user remembers a secret password and the server stores a one-way function of the password (also a secret). SRP addresses the fact that passwords are often poorly chosen; it ensures that an attacker impersonating one of the two parties cannot learn enough information to mount an off-line password guessing attack--in other words, the attacker must interact with the server or user on every attempt to guess the password.
The sfskey update
and register
commands let users
store their private keys on servers, and retrieve them using the
add
command. The private key is stored in encrypted form,
using the same password as the SRP protocol (a safe design as the server
never sees any password-equivalent data).
Because the second form of sfskey add
establishes a secure
connection to a server, it also downloads the server's HostID securely
and creates a symbolic link from /sfs/
hostname to the
server's self-certifying pathname.
When invoking sfskey add
with the SRP syntax, sfskey
will ask for the user's password with a prompt of the following form:
Passphrase for user@servername/nbits:
user is simply the username of the key being fetched from the
server. servername is the name of the server on which the user
registered his SRP information. It may not be the same as the
hostname argument to sfskey
if the user has supplied a
hostname alias (or CNAME) to sfskey add
. Finally, nbits
is the size of the prime number used in the SRP protocol. Higher values
are more secure; 1,024 bits should be adequate. However, users should
expect always to see the same value for nbits (otherwise, someone
may be trying to impersonate the server).
sfskey certclear
Clears the list of certification programs registered with sfskey certprog.

sfskey certlist [-q]
Lists the certification programs currently registered with the agent.
sfskey certprog [-s suffix] [-f filter] [-e exclude] prog [arg ...]
certprog
command registers a command to be run to lookup
HostIDs on the fly in the /sfs
directory. This mechanism can be
used for dynamic server authentication--running code to lookup
HostIDs on-demand. When you reference the file
/sfs/name.suffix
, your agent will run the command:
prog arg ... name
If the program succeeds and prints dest to its standard output, the agent will then create a symbolic link:
/sfs/name.suffix -> dest
If the -s
flag is omitted, then neither .
nor
suffix gets appended to name. In other words, the link is
/sfs/name -> dest
. filter is a perl-style
regular expression. If it is specified, then name must contain it
for the agent to run prog. exclude is another regular
expression, which, if specified, prevents the agent from running
prog on names that contain it (regardless of filter).
The program dirsearch
can be used with certprog
to
configure certification paths--lists of directories in which to
look for symbolic links to HostIDs. The usage is:
dirsearch [-clpq] dir1 [dir2 ...] name
dirsearch
searches through a list of directories dir1,
dir2, ... until it finds one containing a file called
name, then prints the pathname dir/name
. If it
does not find a file, dirsearch
exits with a non-zero exit
code. The following options affect dirsearch
's behavior:
-c
-l
Requires that dir/name be a symbolic link, and prints
the path of the link's destination, rather than the path of the link
itself.
-p
Prints the pathname dir/name. This is the default
behavior anyway, so the option -p
has no effect.
-q
As an example, to look up self-certifying pathnames in the directories
$HOME/.sfs/known_hosts
and /mit
, but only accepting links
in /mit
with names ending .mit.edu
, you might execute the
following commands:
% sfskey certprog dirsearch $HOME/.sfs/known_hosts
% sfskey certprog -f '\.mit\.edu$' dirsearch /mit
sfskey delete keyname
Deletes the private key keyname from the agent (reversing the effect of the add command).

sfskey deleteall
Deletes all private keys held by the agent.
sfskey edit [-P] [-o outfile] [-c cost] [-n name] [keyname]
Lets you change the passphrase, name, or passphrase cost of an existing key. keyname can be a file name, or it can be of the form
[user]@server
, in which case sfskey
will
fetch the key remotely and outfile must be specified. If
keyname is unspecified, the default is $HOME/.sfs/identity
.
The options are:
-P
Does not ask for a passphrase; the key is written out in unencrypted form.

-o outfile
Writes the edited key to outfile.
-c cost
Specifies the computational cost of encrypting the key, as for the PwdCost configuration parameter (see PwdCost).

-n name
Specifies a name for the key, as reported by sfskey list.
sfskey gen [-KP] [-b nbits] [-c cost] [-n name] [keyfile]
Generates a new public/private key pair and writes it to keyfile. The default keyfile is $HOME/.sfs/identity.
-K
Ordinarily, sfskey gen asks the user to type random text with
which to seed the random number generator. The -K
option
suppresses that behavior.
-P
Specifies that sfskey gen should not ask for a passphrase and
the new key should be written to disk in unencrypted form.
-b nbits
Specifies the size, in bits, of the public key to generate.

-c cost
Specifies the computational cost of encrypting the key, as for the PwdCost configuration parameter (see PwdCost).

-n name
Specifies a name for the key, as reported by sfskey list.
Otherwise, the user will be prompted for a name.
sfskey help
Lists the various sfskey commands and their usage.
sfskey hostid hostname
sfskey hostid -
Fetches hostname's public key over the network (insecurely) and prints the corresponding self-certifying pathname Location:HostID to standard output. If hostname is simply -, it returns the name of the current machine, which is not insecure.
sfskey kill
Kills the agent, destroying the private keys it was holding in memory for you.

sfskey list [-ql]
Lists the keys currently held by the agent.
-q
-l
sfskey norevokeset HostID ...
sfskey norevokelist
sfskey register [-KS] [-b nbits] [-c cost] [-u user] [key]
sfskey register
command lets users who are logged into an
SFS file server register their public keys with the file server for the
first time. Subsequent changes to their public keys can be
authenticated with the old key, and must be performed using
sfskey update
. The superuser can also use
sfskey register
when creating accounts.
key is the private key to use. If key does not exist and is
a pathname, sfskey
will create it. The default key is
$HOME/.sfs/identity
, unless -u
is used, in which case
the default is to generate a new key but not store it anywhere. If a
user wishes to reuse a public key already registered with another
server, the user can specify user@server
for
key.
-K
-b nbits
-c cost
These options have the same meaning as for sfskey gen. -K and -b have no effect if the key already exists.
-S
-u user
When sfskey register is run as root, specifies a particular
user to register. This can be useful when creating accounts for people.
Note that sfsauthd_config must have a Userfile with the -reg option to enable use of sfskey register (see sfsauthd_config).
sfskey reset
Clears the contents of the /sfs directory, including all symbolic links created by sfskey certprog and sfskey add, and logs the user out of all file systems.
Note that this is not the same as deleting private keys held by the
agent (use deleteall
for that). In particular, the effect of
logging the user out of all file systems will likely not be
visible--the user will automatically be logged in again on-demand.
sfskey revokegen [-r newkeyfile [-n newhost]] [-o oldhost] oldkeyfile
Generates a revocation or redirection certificate for the server key in oldkeyfile (see RevocationDir).
sfskey revokelist
sfskey revokeclear
sfskey revokeprog [-b [-f filter] [-e exclude]] prog [arg ...]
sfskey srpgen [-b nbits] file
Generates a new set of SRP parameters, suitable for use as an sfs_srp_params file (see sfs_srp_params).
sfskey update [-S | -s srp_params] [-a {server | -}] oldkey [newkey]
Changes a user's public key and SRP information on a server from oldkey to newkey. The default for newkey is $HOME/.sfs/identity.
To change public keys, typically a user should generate a new public key
and store it in $HOME/.sfs/identity
. Then he can run
sfskey update [user]@host
for each server on which
he needs to change his public key.
Several options control sfskey update
's behavior:
-S
-s srp_params
Takes a file of SRP parameters, as generated by sfskey srpgen, and specifies the parameters to use in generating SRP
information for the server. The default is to get SRP parameters from
the server, or look in /usr/local/etc/sfs/sfs_srp_params
.
-a server
-a -
Specifies the server on which to change the key; server may be a hostname or a self-certifying pathname of the form Location:HostID. A server of
-
means to use the local host. You can specify the -a
option multiple times, in which case sfskey
will attempt to
change oldkey to newkey on multiple servers in parallel.
If oldkey is the name of a remote key--i.e. of the form
[user]@host
--then the default value of server
is to use whatever server successfully completes the SRP authentication
protocol while fetching oldkey. Otherwise, if oldkey is a
file, the -a
option is mandatory.
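For example, after generating a new key in $HOME/.sfs/identity, a user might push it to two servers at once with a command along these lines (the user name and hostnames are hypothetical):

% sfskey update -a server1.example.com -a server2.example.com dm@server1.example.com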
ssu command

The ssu
command allows an unprivileged user to become root on
the local machine without changing his SFS credentials. ssu
invokes the command su
to become root. Thus, the access and
password checks needed to become root are identical to those of the
local operating system's su
command. ssu
also
runs /usr/local/lib/sfs-0.5/newaid
to alter the group
list so that SFS can recognize the root shell as belonging to the
original user.
The usage is as follows:
ssu [-f | -m | -l | -c command]
-f
-m
Passed through to the su command.
-l
-c command
Tells ssu to tell su
to run command rather
than running a shell.
Note, ssu
does not work on some versions of Linux because of a
bug in Linux. To see if this bug is present, run the command su
root -c ps
. If this command stops with a signal, your su
command is broken and you cannot use ssu
.
sfscd command

sfscd [-d] [-l] [-L] [-f config-file]
sfscd
is the program to create and serve the /sfs
directory on a client machine. Ordinarily, you should not need to
configure sfscd
or give it any command-line options.
-d
Stays in the foreground and logs messages to standard error instead of the system log.

-l
Ordinarily, sfscd will disallow access to a server running on
the same host. If the Location in a self-certifying pathname
resolves to an IP address of the local machine, any accesses to that
pathname will fail with the error EDEADLK
("Resource deadlock
avoided").
The reason for this behavior is that SFS is implemented using NFS. Many
operating systems can deadlock when there is a cycle in the mount
graph--in other words when two machines NFS mount each other, or, more
importantly when a machine NFS mounts itself. To allow a machine to
mount itself, you can run sfscd
with the -l
flag.
This may in fact work fine and not cause deadlock on non-BSD systems.
-L
The -L
option disables a number of kludges that work
around bugs in the kernel. -L
is useful for people interested
in improving Linux's NFS support.
-f config-file
Specifies an alternate sfscd configuration file (see sfscd_config). The default, if -f
is unspecified, is
first to look for /etc/sfs/sfscd_config
, then
/usr/local/etc/sfs/sfscd_config
.
sfssd command

sfssd [-d] [-f config-file]
sfssd
is the main server daemon run on SFS servers.
sfssd
itself does not serve any file systems. Rather, it acts
as a meta-server, accepting connections on TCP port 4 and passing them
off to the appropriate daemon. Ordinarily, sfssd
passes all
file system connections to sfsrwsd
, and all user-key
management connections to sfsauthd
. However, the
sfssd_config
file (see sfssd_config) allows a great deal of
customization, including support for "virtual servers," multiple
versions of the SFS software coexisting, and new SFS-related services
other than the file system and user authentication.
-d
Leaves sfssd in the foreground and sends error messages to standard error instead of the system log.

-f config-file
Specifies an alternate sfssd configuration file (see sfssd_config). The default, if -f
is unspecified, is
first to look for /etc/sfs/sfssd_config
, then
/usr/local/etc/sfs/sfssd_config
.
sfsrwsd command

/usr/local/lib/sfs-0.5/sfsrwsd [-f config-file]
sfsrwsd
is the program implementing the SFS read-write server.
Ordinarily, you should never run sfsrwsd
directly, but rather
have sfssd
do so. Nonetheless, you must create a
configuration file for sfsrwsd
before running an SFS server.
See sfsrwsd_config, for what to put in your sfsrwsd_config
file.
-f config-file
Specifies an alternate sfsrwsd configuration file (see sfsrwsd_config). The default, if -f
is unspecified, is
/etc/sfs/sfsrwsd_config
.
SFS shares files between machines using cryptographically protected communication. As such, SFS can help eliminate security holes associated with insecure network file systems and let users share files where they could not do so before.
That said, there will very likely be security holes attackers can exploit because of SFS, that they could not have exploited otherwise. This chapter enumerates some of the security consequences of running SFS. The first section describes vulnerabilities that may result from the very existence of a global file system. The next section lists bugs potentially present in your operating system that may be much easier for attackers to exploit if you run SFS. Finally the last section attempts to point out weak points of the SFS implementation that may lead to vulnerabilities in the SFS software itself.
Many security holes can be exploited much more easily if the attacker
can create an arbitrary file on your system. As a simple example, if a
bug allows attackers to run any program on your machine, SFS allows them
to supply the program somewhere under /sfs
. Moreover, the file
can have any numeric user and group (though of course, SFS disables
setuid and devices).
. in the PATH

Another potential problem is users putting the current working directory (.) in their PATH environment variables. If you are browsing
a file system whose owner you do not trust, that owner can run arbitrary
code as you by creating programs named things like ls
in the
directories you are browsing. Putting .
in the PATH has
always been a bad idea for security, but a global file system like SFS
makes it much worse.
Users need to be careful about using untrusted file systems as if they were trusted file systems. Any file system can name files in any other file system by symbolic links. Thus, when randomly overwriting files in a file system you do not trust, you can be tricked, by symbolic links, into overwriting files on the local disk or another SFS file system.
As an example of a seemingly appealing use of SFS that can cause
problems, consider doing a cvs
checkout from an untrusted CVS
repository, so as to peruse someone else's source code. If you run
cvs
on a repository you do not trust, the person hosting the
repository could replace the CVSROOT/history
with a symbolic
link to a file on some other file system, and cause you to append
garbage to that file.
This cvs
example may or may not be a problem. For instance,
if you are about to compile and run the software anyway, you are placing
quite a bit of trust in the person running the CVS repository anyway.
The important thing to keep in mind is that for most uses of a file
system, you are placing some amount of trust in the file server.
See resvgids, to see how users can run multiple agents with the
newaid
command. One way to cut down on trust is to access
untrusted file servers under a different agent with different private
keys. Nonetheless, this still allows the remote file servers to serve
symbolic links to the local file system in unexpected places.
Any user on the Internet can get the attributes of a
local-directory listed in an Export
directive
(see export). This is so users can run commands like ls -ld
on a self-certifying pathname in /sfs
, even if they cannot change
directory to that pathname or list files under it. If you wish to keep
attribute information secret on a local-directory, you will need
to export a higher directory. We may later reevaluate this design
decision, though allowing such anonymous users to get attributes
currently simplifies the client implementation.
The SFS read-write server software requires each SFS server to run an NFS server. Running an NFS server at all can constitute a security hole. In order to understand the full implications of running an SFS server, you must also understand NFS security.
NFS security relies on the secrecy of file handles. Each file on an
exported file system has associated with it an NFS file handle
(typically 24 to 32 bytes long). When mounting an NFS file system, the
mount
command on the client machine connects to a program
called mountd
on the server and asks for the file handle of
the root of the exported file system. mountd
enforces access
control by refusing to return this file handle to clients not authorized
to mount the file system.
Once a client has the file handle of a directory on the server, it sends NFS requests directly to the NFS server's kernel. The kernel performs no access control on the request (other than checking that the user the client claims to speak for has permission to perform the requested operation). The expectation is that all clients are trusted to speak for all users, and no machine can obtain a valid NFS file handle without being an authorized NFS client.
To prevent attackers from learning NFS file handles when using SFS, SFS encrypts all NFS file handles with a 20-byte key using the Blowfish encryption algorithm. Unfortunately, not all operating systems choose particularly good NFS file handles in the first place. Thus, attackers may be able to guess your file handles anyway. In general, NFS file handles contain 32-bit words such as a file system ID, an i-number, and a generation number; in addition, they can contain the i-number and generation number of the exported directory.
Many of these words can be guessed outright by attackers without their needing to interact with any piece of software on the NFS server. For instance, the file system ID is often just the device number on which the physical file system resides. The i-number of the root directory in a file system is always 2. The i-number and generation number of the root directory can also be used as the i-number and generation number of the "exported directory".
On some operating systems, then, the only hard thing for an attacker to guess is the 32-bit generation number of some directory on the system. Worse yet, the generation numbers are sometimes not chosen with a good random number generator.
To minimize the risks of running an NFS server, you might consider taking the following precautions:
Some operating systems come with a program called fsirand that
re-randomizes all generation numbers in a file system. Running
fsirand
may result in much better generation numbers than,
say, a factory install of an operating system.
By exporting your file systems read-write only to localhost for SFS, but read-only to any client on which an attacker may have learned an NFS file handle, you may be able to protect
the integrity of your file system under attack. (Note, however, that
unless you filter forged packets at your firewall, the attacker can put
whatever source address he wants on an NFS UDP packet.) See the
mountd
or exports
manual page for more detail.
Note: under no circumstances should you make your file system
"read-only to the world," as this will let anyone find out NFS file
handles. You want the kernel to think of the file system as read-only
for the world, but mountd
to refuse to give out file handles
to anybody but localhost
.
mountd -n

The mountd command takes a flag -n meaning "allow mount requests from unprivileged ports." Do not ever use this flag. Worse yet, some operating systems (notably HP-UX 9) always exhibit this behavior regardless of whether the -n
flag has
been specified.
The -n
option to mountd
allows any user on an NFS
client to learn file handles and thus act as any other user. The
situation gets considerably worse when exporting file systems to
localhost
, however, as SFS requires. Then everybody on the
Internet can learn your NFS file handles. The reason is that the
portmap
command will forward mount requests and make them
appear to come from localhost
.
portmap forwarding

In order to support broadcast RPCs, the portmap
program will
relay RPC requests to the machine it is running on, making them appear
to come from localhost
. That can have disastrous consequences in
conjunction with mountd -n
as described previously. It can also
be used to work around "read-mostly" export options by forwarding NFS
requests to the kernel from localhost
.
Operating systems are starting to ship with portmap
programs
that refuse to forward certain RPC calls including mount and NFS
requests. Wietse Venema has also written a portmap
replacement that has these properties, available from
ftp://ftp.porcupine.org/pub/security/index.html. It is also a
good idea to filter TCP and UDP ports 111 (portmap
) at your
firewall, if you have one.
Many NFS implementations have bugs. Many of those bugs rarely surface when clients and servers with similar implementations talk to each other. Examples of bugs we've found include servers crashing when they receive a write request for an odd number of bytes, clients crashing when they
receive the error NFS3ERR_JUKEBOX
, and clients using
uninitialized memory when the server returns a lookup3resok
data
structure with obj_attributes
having attributes_follow
set
to false.
SFS allows potentially untrusted users to formulate NFS requests (though of course SFS requires file handles to decrypt correctly and stamps the request with the appropriate Unix uid/gid credentials). This may let bad users crash your server's kernel (or worse). Similarly, bad servers may be able to crash a client.
As a precaution, you may want to be careful about exporting any portion
of a file system to anonymous users with the R
or W
options to Export
(see export). When analyzing your NFS code
for security, you should know that even anonymous users can make the
following NFS RPC's on a local-directory in your
sfsrwsd_config
file: NFSPROC3_GETATTR
,
NFSPROC3_ACCESS
, NFSPROC3_FSINFO
, and
NFSPROC3_PATHCONF
.
On the client side, a bad, non-root user in collusion with a bad file server can possibly crash or deadlock the machine. Many NFS client implementations have inadequate locking that could lead to race conditions. Other implementations make assumptions about the hierarchical nature of a file system served by the server. By violating these assumptions (for example having two directories on a server each contain the other), a user may be able to deadlock the client and create unkillable processes.
logger buffer overrun

SFS pipes log messages through the logger
program to get them
into the system log. SFS can generate arbitrarily long lines. If your
logger
does something stupid like call gets
, it may
suffer a buffer overrun. We assume no one does this, but feel the point
is worth mentioning, since not all logger programs come with source.
To avoid using logger
, you can run sfscd
and
sfssd
with the -d
flag and redirect standard error
wherever you wish manually.
The best way to attack the SFS software is probably to cause resource exhaustion. You can try to run SFS out of file descriptors, memory, CPU time, or mount points.
An attacker can run a server out of file descriptors by opening many
parallel TCP connections. Such attacks can be detected using the
netstat
command to see who is connecting to SFS (which accepts
connections on port 4). Users can run the client (also
sfsauthd
) out of descriptors by connecting many times using
the setgid program
/usr/local/lib/sfs-0.5/suidconnect
. These attacks
can be traced using a tool like lsof, available from
ftp://vic.cc.purdue.edu/pub/tools/unix/lsof.
SFS enforces a maximum size of just over 64 K on all RPC requests. Nonetheless, a client could connect 1000 times, on each connection send the first 64 K of a slightly larger message, and just sit there. That would obviously consume about 64 Megabytes of memory, as SFS will wait patiently for the rest of the request.
A worse problem is that SFS servers do not currently flow-control clients. Thus, an attacker could make many RPCs but not read the replies, causing the SFS server to buffer arbitrarily much data and run out of memory. (Obviously the server eventually flushes any buffered data when the TCP connection closes.)
Connecting to an SFS server costs the server tens of milliseconds of CPU time. An attacker can try to burn a huge amount of the server's CPU time by connecting to the server many times. The effects of such attacks can be mitigated using hashcash (see HashCost).
Finally, a user on a client can cause a large number of file systems to be mounted. If the operating system has a limit on the number of mount points, a user could run the client out of mount points.
If a TCP connection is reset, the SFS client will attempt to reconnect
to the server and retransmit whatever RPCs were pending at the time the
connection dropped. Not all NFS RPCs are idempotent however. Thus, an
attacker who caused a connection to reset at just the right time could,
for instance, cause a mkdir
command to return EEXIST
when in fact it did just create the directory.
SFS exchanges NFS traffic with the local operating system using the loopback interface. An attacker with physical access to the local ethernet may be able to inject arbitrary packets into a machine, including packets to 127.0.0.1. Without packet filtering in place, an attacker can also send packets from anywhere making them appear to come from 127.0.0.1.
On the client, an attacker can forge NFS requests from the kernel to SFS, or forge replies from SFS to the kernel. The SFS client encrypts file handles before giving them to the operating system. Thus, the attacker is unlikely to be able to forge a request from the kernel to SFS that contains a valid file handle. In the other direction, however, the reply does not need to contain a file handle. The attacker may well be able to convince the kernel of a forged reply from SFS. The attacker only needs to guess a (possibly quite predictable) 32-bit RPC XID number. Such an attack could result, for example, in a user getting the wrong data when reading a file.
On the server side, you also must assume the attacker cannot guess a valid NFS file handle (otherwise, you already have no security--see NFS security). However, the attacker might again forge NFS replies, this time from the kernel to the SFS server software.
To prevent such attacks, if your operating system has IP filtering, it would be a good idea to block any packets either from or to 127.0.0.1 if those packets do not come from the loopback interface. Blocking traffic "from" 127.0.0.1 at your firewall is also a good idea.
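For instance, on a Linux machine with iptables, rules along these lines would drop spoofed loopback traffic arriving on a real interface. This is only a sketch; the exact mechanism differs on other systems and firewalls:

# Drop packets claiming a loopback source or destination that do not arrive on the loopback interface.
iptables -A INPUT -s 127.0.0.0/8 ! -i lo -j DROP
iptables -A INPUT -d 127.0.0.0/8 ! -i lo -j DROP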
On BSD-based systems (and possibly others) the buffer reclaiming policy can cause deadlock. When an operation needs a buffer and there are no clean buffers available, the kernel picks some particular dirty buffer and won't let the operation complete until it can get that buffer. This can lead to deadlock in the case that two machines mount each other.
An attacker may be able to read the contents of a private file shortly after you log out of a public workstation if he can then become root on the workstation. There are two attacks possible.
First, the attacker may be able to read data out of physical memory or
from the swap partition of the local disk. File data may still be in
memory if the kernel's NFS3 code has cached it in the buffer cache.
There may also be fragments of file data in the memory of the
sfsrwcd
process, or out on disk in the swap partition (though
sfsrwcd
does its best to avoid getting paged out). The
attacker can read any remaining file contents once he gains control of
the machine.
Alternatively, the attacker may have recorded encrypted session traffic
between the client and server. Once he gains control of the client
machine, he can attach to the sfsrwcd
process with the
debugger and learn the session key if the session is still open. This
will let him read the session he recorded in encrypted form.
To minimize the risks of these attacks, you must kill and restart
sfscd
before turning control of a public workstation over to
another user. Even this is not guaranteed to fix the problem. It will
flush file blocks from the buffer cache by unmounting all file systems,
for example, but the contents of those blocks may persist as
uninitialized data in buffers sitting on the free list. Similarly, any
programs you ran that manipulated private file data may have gotten
paged out to disk, and the information may live on after the processes
exit.
In conclusion, if you are paranoid, it is best not to use public workstations.
SFS does its best to disable setuid programs and devices on remote file servers it mounts. However, we have only tested this on operating systems we have access to. When porting SFS to new platforms, it is worth testing that both setuid programs and devices do not work over SFS. Otherwise, any user of an SFS client can become root.
Please report any bugs you find in SFS to sfsbug@redlab.lcs.mit.edu.
You can send mail to the authors of SFS at sfs-dev@pdos.lcs.mit.edu.
There is also a mailing list of SFS users and developers at sfs@sfs.fs.net. To subscribe to the list, send mail to sfs-subscribe@sfs.fs.net.
Index

/etc/exports: Quick server setup
configure: Building
dirsearch: sfskey
EDEADLK: sfscd
nfsmounter: System overview
Resource deadlock avoided: sfscd
sfs_config: sfs_config
sfs_srp_params: sfs_srp_params
sfs_users: sfs_users
sfsauthd_config: sfsauthd_config
sfscd_config: sfscd_config
sfsrwsd_config: sfsrwsd_config
sfssd_config: sfssd_config
___gmp_default_allocate (referenced from text segment): Build Problems