eval
We have seen that quoting lets you skip steps in command-line processing. Then there's the eval command, which lets you go through the process again. Performing command-line processing twice may seem strange, but it's actually very powerful: it lets you write scripts that create command strings on the fly and then pass them to the shell for execution. This means that you can give scripts "intelligence" to modify their own behavior as they are running.
The eval statement tells the shell to take eval's arguments and run them through the command-line processing steps all over again. To help you understand the implications of eval, we'll start with a trivial example and work our way up to a situation in which we're constructing and running commands on the fly.
eval ls passes the string ls to the shell to execute; the shell prints a list of files in the current directory. Very simple; there is nothing about the string ls that needs to be sent through the command-processing steps twice. But consider this:
listpage="ls | more"
$listpage
Instead of producing a paginated file listing, the shell treats | and more as arguments to ls, and ls complains that no files of those names exist. Why? Because the pipe character "appears" in Step 6, when the shell expands the variable, after it has already looked for pipe characters; the expanded text isn't even word-split until Step 9. The shell therefore never sees the pipe as an operator: it simply passes | and more to ls, which tries to find files with those names in the current directory!
Now consider eval $listpage instead of just $listpage. When the shell gets to the last step, it will run the command eval with arguments ls, |, and more. This causes the shell to go back to Step 1 with a line that consists of these arguments. It finds | in Step 2 and splits the line into two commands, ls and more. Each command is processed in the normal (and in both cases trivial) way. The result is a paginated list of the files in your current directory.
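You can watch the difference yourself with a quick experiment (a trivial demonstration of the two variants; the actual listing depends on your directory, of course):

listpage="ls | more"
$listpage        # fails: ls complains that files named '|' and 'more' don't exist
eval $listpage   # works: a paginated listing of the current directory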
Now you may start to see how powerful eval can be. It is an advanced feature that requires considerable programming cleverness to be used most effectively. It even has a bit of the flavor of artificial intelligence, in that it enables you to write programs that can "write" and execute other programs.[14] You probably won't use eval for everyday shell programming, but it's worth taking the time to understand what it can do.
As a more interesting example, we'll revisit Task 4-1, the very first task in the book. In it, we constructed a simple pipeline that sorts a file and prints out the first N lines, where N defaults to 10. The resulting pipeline was:
sort -nr $1 | head -${2:-10}
The first argument specifies the file to sort; $2 is the number of lines to print.
Now suppose we change the task just a bit so that the default is to print the entire file instead of 10 lines. This means that we don't want to use head at all in the default case. We could do this in the following way:
if [ -n "$2" ]; then
    sort -nr $1 | head -$2
else
    sort -nr $1
fi
In other words, we decide which pipeline to run according to whether $2 is null. But here is a more compact solution:
eval sort -nr \$1 ${2:+"| head -\$2"}
The last expression in this line evaluates to the string | head -\$2 if $2 exists (is not null); if $2 is null, then the expression is null too. We backslash-escape dollar signs (\$) before variable names to prevent unpredictable results if the variables' values contain special characters like > or |. The backslash effectively puts off the variables' evaluation until the eval command itself runs. So the entire line is either:
eval sort -nr \$1 | head -\$2
if $2 is given, or:
eval sort -nr \$1
if $2 is null. Once again, we can't just run this command without eval because the pipe is "uncovered" after the shell tries to break the line up into commands. eval causes the shell to run the correct pipeline when $2 is given.
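Putting the pieces together, the whole script might now look like the following minimal sketch (the script name and the argument check are our own additions, not part of the original task):

# sorthead -- sort a file in reverse numerical order and print the
# top N lines, or the whole file if no N is given.
# usage: sorthead filename [N]
if [ -z "$1" ]; then
    echo "usage: $0 filename [N]" >&2
    exit 1
fi
eval sort -nr \$1 ${2:+"| head -\$2"}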
Next, we'll revisit Task 7-2 from earlier in this chapter, the start script that lets you start a command in the background and save its standard output and standard error in a logfile. Recall that the one-line solution to this task had the restriction that the command could not contain output redirectors or pipes. Although the former doesn't make sense when you think about it, you certainly would want the ability to start a pipeline in this way.
eval is the obvious way to solve this problem:
eval "$@" > logfile 2>&1 &
The only restriction that this imposes on the user is that pipes and other such special characters be quoted (surrounded by quotes or preceded by backslashes).
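With that line in place, the entire start script amounts to little more than the following sketch (the fixed name logfile follows the task description; everything else here is just commentary):

# start -- run a command line in the background, collecting both
# standard output and standard error in logfile.
# usage: start command [args...]
# (quote special characters: start 'ls | more')
eval "$@" > logfile 2>&1 &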
Here's a way to apply eval in conjunction with various other interesting shell programming concepts.
Task 7-3
Implement the core of the make utility as a shell script.
make is known primarily as a programmer's tool, but it seems as though someone finds a new use for it every day. Without going into too much extraneous detail, make basically keeps track of multiple files in a particular project, some of which depend on others (e.g., a document depends on its word processor input file(s)). It makes sure that when you change a file, all of the other files that depend on it are processed.
For example, assume you're using the troff word processor to write a book. You have files for the book's chapters called ch1.t, ch2.t, and so on; the troff output files are ch1.out, ch2.out, etc. You run commands like troff chN.t > chN.out (where N is the chapter number) to do the processing. While you're working on the book, you tend to make changes to several files at a time.
In this situation, you can use make to keep track of which files need to be reprocessed, so that all you need to do is type make, and it will figure out what needs to be done. You don't need to remember to reprocess the files that have changed.
How does make do this? Simple: it compares the modification times of the input and output files (called sources and targets in make terminology), and if the input file is newer, then make reprocesses it.
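In shell terms, that timestamp comparison is just the -nt ("newer than") test; a hand-rolled check for a single file pair might look like this sketch:

# Reprocess chapter 7 only if its input has changed since the last run
if [ ch7.t -nt ch7.out ]; then
    troff ch7.t > ch7.out
fi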
You tell make which files to check by building a file called makefile that has constructs like this:
target : source1 source2 ...
        commands to make target
This essentially says, "For target to be up to date, it must be newer than all of the sources. If it's not, run the commands to bring it up to date." The commands are on one or more lines that must start with TABs: e.g., to make ch7.out:
ch7.out : ch7.t
        troff ch7.t > ch7.out
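A makefile for a multi-chapter book would simply contain one such construct per chapter; for example (a hypothetical two-chapter makefile; each command line must begin with a TAB, shown here as indentation):

ch1.out : ch1.t
        troff ch1.t > ch1.out
ch2.out : ch2.t
        troff ch2.t > ch2.out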
Now suppose that we write a shell function called makecmd that reads and executes a single construct of this form. Assume that the makefile is read from standard input. The function would look like the following code.
makecmd ( )
{
    local target colon sources src cmd
    local tab=$'\t'
    read target colon sources
    for src in $sources; do
        if [ "$src" -nt "$target" ]; then
            # Read, echo, and execute command lines until a line
            # that doesn't start with a TAB (or end-of-file)
            while IFS= read -r cmd && [[ $cmd == "$tab"* ]]; do
                echo "$cmd"
                eval "${cmd#"$tab"}"
            done
            break
        fi
    done
}
This function reads the line with the target and sources; the variable colon is just a placeholder for the :. Then it checks each source to see if it's newer than the target, using the -nt file attribute test operator that we saw in Chapter 5. If the source is newer, it reads, prints, and executes the commands until it finds a line that doesn't start with a TAB or it reaches end-of-file. (The real make does more than this; see the exercises at the end of this chapter.) After running the commands (which are stripped of the initial TAB), it breaks out of the for loop, so that it doesn't run the commands more than once.
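Because the function reads from standard input, you would try it out on the first construct in a makefile by redirecting the file into it:

makecmd < makefile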
As a final example of eval, we'll look again at procimage, the graphics utility that we developed in the last three chapters. Recall that one of the problems with the script as it stands is that it scales and borders the image regardless of whether you want those steps. If no command-line options are present, a default size, border width, and border color are used. Rather than invent some if-then logic to get around this, we'll look at how you can dynamically build a pipeline of commands in the script; the commands that aren't needed simply disappear when the time comes to execute them. As an added bonus, we'll add another capability to our script: image enhancement.
Looking at the procimage script you'll notice that the NetPBM commands form a nice pipeline; the output of one operation becomes the input to the next, until we end up with the final image. If it weren't for having to use a particular conversion utility, we could reduce the script to the following pipeline (ignoring options for now):
cat $filename | convertimage | pnmscale | pnmmargin |\
pnmtojpeg > $outfile
Or, better yet:
convertimage $filename | pnmscale | pnmmargin | pnmtojpeg \
> $outfile
As we've already seen, this is equivalent to:
eval convertimage $filename | pnmscale | pnmmargin |\
pnmtojpeg > $outfile
And knowing what we do about how eval operates, we can transform this into:
eval "convertimage" $filename " | pnmscale" " | pnmmargin" \
" | pnmtojpeg " > $outfile
And thence to:
convert='convertimage'
scale=' | pnmscale'
border=' | pnmmargin'
standardise=' | pnmtojpeg'
eval $convert $filename $scale $border $standardise > $outfile
Now consider what happens when we don't want to scale the image. We do this:
scale=""
while getopts ":s:w:c:" opt; do
    case $opt in
        s ) scale=' | pnmscale' ;;
        ...
    esac
done
...
eval $convert $filename $scale $border $standardise > $outfile
In this code fragment, scale is set to a default of the empty string. If -s is not given on the command line, then the final line evaluates with $scale as the empty string and the pipeline will "collapse" into:
$convert $filename $border $standardise > $outfile
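You can convince yourself of this collapsing behavior with a toy pipeline that has nothing to do with NetPBM (our own example):

stage=''                   # stage disabled: the pipeline collapses
eval echo hello $stage     # prints: hello
stage=' | tr a-z A-Z'      # stage enabled
eval echo hello $stage     # prints: HELLO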
Using this principle, we can modify the previous version of the procimage script and produce a pipeline version. For each input file we need to construct and run a pipeline based upon the options given on the command line. Here is the new version:
# Set up the defaults
width=1
colour='-color grey'
usage="Usage: $0 [-s N] [-w N] [-c S] imagefile..."

# Initialise the pipeline components
standardise=' | pnmtojpeg -quiet'

while getopts ":s:w:c:" opt; do
    case $opt in
      s ) size=$OPTARG
          scale=' | pnmscale -quiet -xysize $size $size' ;;
      w ) width=$OPTARG
          border=' | pnmmargin $colour $width' ;;
      c ) colour="-color $OPTARG"
          border=' | pnmmargin $colour $width' ;;
      \? ) echo "$usage"
           exit 1 ;;
    esac
done

shift $(($OPTIND - 1))

if [ $# -eq 0 ]; then
    echo "$usage"
    exit 1
fi

# Process the input files
for filename in "$@"; do
    case $filename in
        *.gif ) convert='giftopnm' ;;
        *.tga ) convert='tgatoppm' ;;
        *.xpm ) convert='xpmtoppm' ;;
        *.pcx ) convert='pcxtoppm' ;;
        *.tif ) convert='tifftopnm' ;;
        *.jpg ) convert='jpegtopnm -quiet' ;;
        * ) echo "$0: Unknown filetype '${filename##*.}'"
            exit 1 ;;
    esac

    outfile=${filename%.*}.new.jpg

    eval $convert $filename $scale $border $standardise > $outfile
done
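Under these assumptions, a typical run might look like this (hypothetical filenames; each output name is derived from the corresponding input name):

procimage -s 800 -w 5 -c blue photo.tif logo.gif
# produces photo.new.jpg and logo.new.jpg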
This version has been simplified somewhat from the previous one in that it no longer needs a temporary file to hold the converted file. It is also a lot easier to read and understand. To show how easy it is to add further processing to the script, we'll now add one more NetPBM utility.
NetPBM provides a utility to enhance an image and make it sharper: pnmnlfilt. This utility is an image filter that samples the image and can enhance edges in the image (it can also smooth the image if given the appropriate values). It takes two parameters that tell it how much to enhance the image. For the purposes of our script, we'll just choose some optimal values and provide an option to switch enhancement on and off in the script.
To put the new capability in place all we have to do is add the new option (-S) to the getopts case statement, update the usage line, and add a new variable to the pipeline. Here is the new code:
# Set up the defaults
width=1
colour='-color grey'
usage="Usage: $0 [-S] [-s N] [-w N] [-c S] imagefile..."

# Initialise the pipeline components
standardise=' | pnmtojpeg -quiet'

while getopts ":Ss:w:c:" opt; do
    case $opt in
      S ) sharpness=' | pnmnlfilt -0.7 0.45' ;;
      s ) size=$OPTARG
          scale=' | pnmscale -quiet -xysize $size $size' ;;
      w ) width=$OPTARG
          border=' | pnmmargin $colour $width' ;;
      c ) colour="-color $OPTARG"
          border=' | pnmmargin $colour $width' ;;
      \? ) echo "$usage"
           exit 1 ;;
    esac
done

shift $(($OPTIND - 1))

if [ $# -eq 0 ]; then
    echo "$usage"
    exit 1
fi

# Process the input files
for filename in "$@"; do
    case $filename in
        *.gif ) convert='giftopnm' ;;
        *.tga ) convert='tgatoppm' ;;
        *.xpm ) convert='xpmtoppm' ;;
        *.pcx ) convert='pcxtoppm' ;;
        *.tif ) convert='tifftopnm' ;;
        *.jpg ) convert='jpegtopnm -quiet' ;;
        * ) echo "$0: Unknown filetype '${filename##*.}'"
            exit 1 ;;
    esac

    outfile=${filename%.*}.new.jpg

    eval $convert $filename $scale $border $sharpness $standardise > $outfile
done
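Sharpening is now just one more optional stage in the pipeline; for instance (again with a hypothetical filename):

procimage -S -s 800 photo.tif    # scale to 800 pixels, sharpen, convert to JPEG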
We could go on forever with increasingly complex examples of eval, but we'll settle for concluding the chapter with a few exercises. The questions in Exercise 3 are really more food for thought than conventional exercises.
1. Here are a couple of ways to enhance procimage, the graphics utility:
   - Add an option, -q, that allows the user to turn on and off the printing of diagnostic information from the NetPBM utilities. You'll need to map -q to the -quiet option of the utilities. Also, add your own diagnostic output for those utilities that don't print anything, e.g., the format conversions.
   - Add an option that allows the user to specify the order in which the NetPBM processes take place, i.e., whether enhancing the image comes before bordering, or bordering comes before resizing. Rather than using an if construct to make the choice amongst hard-coded orders, construct a string dynamically which will look similar to this:

     "eval $convert $filename $scale $border $sharpness $standardise > $outfile"

     You'll then need eval to evaluate this string.
2. The function makecmd in the solution to Task 7-3 represents an oversimplification of the real make's functionality. make actually checks file dependencies recursively, meaning that a source on one line in a makefile can be a target on another line. For example, the book chapters in the example could themselves depend on some figures in separate files that were made with a graphics package.
   - Write a function called readtargets that goes through the makefile and stores all of the targets in a variable or temporary file.
   - makecmd merely checks to see if any of the sources are newer than the given target. It should really be a recursive routine that looks like this:

     function makecmd ( )
     {
         target=$1
         get sources for $target
         for each source src; do
             if $src is also a target in this makefile then
                 makecmd $src
             fi
             if [ $src -nt $target ]; then
                 run commands to make target
                 return
             fi
         done
     }

     Implement this.
   - Write the "driver" script that turns the makecmd function into a full make program. This should make the target given as an argument, or if none is given, the first target listed in the makefile.
   - The above makecmd still doesn't do one important thing that the real make does: allow for "symbolic" targets that aren't files. These give make much of the power that makes it applicable to such an incredible variety of situations. Symbolic targets always have a modification time of 0, so that make always runs the commands to make them. Modify makecmd so that it allows for symbolic targets. (Hint: the crux of this problem is to figure out how to get a file's modification time. This is quite difficult.)
3. Here are some problems that really test your knowledge of eval and the shell's command-line processing rules. Solve these and you're a true bash hacker!
   - Advanced shell programmers sometimes use a little trick that includes eval: using the value of a variable as the name of another variable. In other words, you can give a shell script control over the names of variables to which it assigns values. The latest version of bash has this built in, in the form of ${!varname}, where varname contains the name of another variable that will be the target of the operation. This is known as indirect expansion; a short illustration of it appears after this list. How would you do this using only eval? (Hint: if $object equals "person", and $person is "alice", then you might think that you could type echo $$object and get the response alice. This doesn't actually work, but it's on the right track.)
   - You could use the above technique together with other eval tricks to implement new control structures for the shell. For example, see if you can write a script that emulates the behavior of a for loop in a conventional language like C or Pascal, i.e., a loop that iterates a fixed number of times, with a loop variable that steps from 1 to the number of iterations (or, for C fans, 0 to iterations-1). Call your script loop to avoid clashes with the shell keywords for and do.
   - The pushd, popd, and dirs functions that we built up in previous chapters can't handle directories with spaces in their names (because DIR_STACK uses a space as a delimiter). Use eval to overcome this limitation. (Hint: use eval to implement an array. Each array element is called array1, array2, ... arrayn, and each element contains a directory name.)
   - (The following doesn't have that much to do with the material in this chapter per se, but it is a classic programming exercise:) Write the function alg2rpn used in the section on command blocks. Here's how to do this: arithmetic expressions in algebraic notation have the form expr op expr, where each expr is either a number or another expression (perhaps in parentheses), and op is +, -, x, /, or % (remainder). In RPN, expressions have the form expr expr op. For example, the algebraic expression 2+3 is 2 3 + in RPN; the RPN equivalent of (2+3) x (9-5) is 2 3 + 9 5 - x. The main advantage of RPN is that it obviates the need for parentheses and operator precedence rules (e.g., x is evaluated before +). The dc program accepts standard RPN, but each expression should have "p" appended to it, which tells dc to print its result; e.g., the first example above should be given to dc as 2 3 + p.

     You need to write a routine that converts algebraic notation to RPN. This should be (or include) a function that calls itself (a recursive function) whenever it encounters a subexpression. It is especially important that this function keep track of where it is in the input string and how much of the string it "eats up" during its processing. (Hint: make use of the pattern-matching operators discussed in Chapter 4 to ease the task of parsing input strings.) To make your life easier, don't worry about operator precedence for now; just convert to RPN from left to right: e.g., treat 3+4x5 as (3+4)x5 and 3x4+5 as (3x4)+5. This makes it possible for you to convert the input string on the fly, i.e., without having to read in the whole thing before doing any processing.
   - Enhance your solution to the previous exercise so that it supports operator precedence in the "usual" order: x, /, % (remainder), then +, -. For example, treat 3+4x5 as 3+(4x5) and 3x4+5 as (3x4)+5.
4. Here is something else to really test your skills: write a graphics utility script, index, that takes a list of image files, reduces them in size, and creates an "index" image. An index image is made up of thumbnail-sized versions of the original images, placed neatly in columns and rows, each with a caption underneath (usually the name of the original file). Besides the list of files, you'll need some options, including the number of columns to create and the size of the thumbnail images. You might also like to include an option to specify the gap between each image.

   The new NetPBM utilities you'll need are pbmtext and pnmcat. You'll also need pnmscale and one or more of the conversion utilities, depending upon whether you decide to take in various formats (as we did for procimage) and what output format you decide on. pbmtext takes as an argument some text and converts the text into a PNM bitmap. pnmcat is a little more complex. Like cat, it concatenates things; in this case, images. You can specify as many PNM files as you like as arguments, and pnmcat will put them together into one long image. By using the -lr and -tb options, you can specify whether you want the images to be placed one after the other going from left to right, or from top to bottom. The first option to pnmcat is the background color. It can be either -black for a black background, or -white for a white background. We suggest -white to match the black-on-white text produced by pbmtext.

   You'll need to take each file, run the filename through pbmtext, and use pnmcat to place it underneath a scaled-down version of the original image. Then you'll need to continue doing this for each file and use pnmcat to connect them together. In addition, you'll have to keep tabs on how many columns you have completed and when to start a new row. Note that you'll need to build up the rows individually and use pnmcat to connect them together; pnmcat won't do this for you automatically.
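As promised above, here is a short illustration of the indirect expansion mentioned in the first problem of Exercise 3 (our own example; it deliberately does not give away the eval-based solution):

object=person
person=alice
echo "${!object}"    # prints: alice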
[7] Two obscure variations on this: the shell substitutes the current directory ($PWD) for ~+ and the previous directory ($OLDPWD) for ~-. In bash 2.0 there are two more: ~N+ and ~N-. These are replaced by the corresponding element in the directory stack as given by the dirs command.
[8] However, as we saw in Chapter 1, '\'' (i.e., single quote, backslash, single quote, single quote) acts pretty much like a single quote in the middle of a single-quoted string; e.g., 'abc'\''def' evaluates to abc'def.
[9] command removes alias lookup as a side effect. Because the first argument of command is no longer the first word that bash parses, it is not subjected to alias lookup.
[10] Unless bash has been compiled with a brain-dead value for the default. See Chapter 11 for how to change the default value.
[11] Note that the wrong test may still be run. If your current directory is the last in PATH you'll probably execute the system file test. test is not a good name for a program.
[12] The -d, -f, -p, and -s options are not available in versions of bash prior to 2.0.
[13] Be careful—it is possible to disable enable (enable -n enable). There is a compile-time option that allows builtin to act as an escape-hatch. For more details, see Chapter 11.
[14] You could actually do this without eval, by echoing commands to a temporary file and then "sourcing" that file with . filename. But that is much less efficient.