
Introduction


import "github.com/bitfield/script"


What is script?

script is a Go library for doing the kind of tasks that shell scripts are good at: reading files, executing subprocesses, counting lines, matching strings, and so on.

Why shouldn't it be as easy to write system administration programs in Go as it is in a typical shell? script aims to make it just that easy.

Shell scripts often compose a sequence of operations on a stream of data (a pipeline). This is how script works, too.

This is one absolutely superb API design. Taking inspiration from shell pipes and turning it into a Go library with syntax this clean is really impressive.
— Simon Willison

Read more: Scripting with Go

Quick start: Unix equivalents

If you're already familiar with shell scripting and the Unix toolset, here is a rough guide to the equivalent script operation for each listed Unix command.

Unix / shell script equivalent
(any program name) Exec
[ -f FILE ] IfExists
> WriteFile
>> AppendFile
$* Args
basename Basename
cat File / Concat
curl Do / Get / Post
cut Column
dirname Dirname
echo Echo
find FindFiles
grep Match / MatchRegexp
grep -v Reject / RejectRegexp
head First
jq JQ
ls ListFiles
sed Replace / ReplaceRegexp
sha256sum SHA256Sum / SHA256Sums
tail Last
tee Tee
uniq -c Freq
wc -l CountLines
xargs ExecForEach

Some examples

Let's see some simple examples. Suppose you want to read the contents of a file as a string:

contents, err := script.File("test.txt").String()

That looks straightforward enough, but suppose you now want to count the lines in that file.

numLines, err := script.File("test.txt").CountLines()

For something a bit more challenging, let's try counting the number of lines in the file that match the string Error:

numErrors, err := script.File("test.txt").Match("Error").CountLines()

But what if, instead of reading a specific file, we want to simply pipe input into this program, and have it output only matching lines (like grep)?

script.Stdin().Match("Error").Stdout()

Just for fun, let's filter all the results through some arbitrary Go function:

script.Stdin().Match("Error").FilterLine(strings.ToUpper).Stdout()

That was almost too easy! So let's pass in a list of files on the command line, and have our program read them all in sequence and output the matching lines:

script.Args().Concat().Match("Error").Stdout()

Maybe we're only interested in the first 10 matches. No problem:

script.Args().Concat().Match("Error").First(10).Stdout()

What's that? You want to append that output to a file instead of printing it to the terminal? You've got some attitude, mister. But okay:

script.Args().Concat().Match("Error").First(10).AppendFile("/var/log/errors.txt")

And if we'd like to send the output to the terminal as well as to the file, we can do that:

script.Echo("data").Tee().AppendFile("data.txt")

We're not limited to getting data only from files or standard input. We can get it from HTTP requests too:

script.Get("https://wttr.in/London?format=3").Stdout()
// Output:
// London: 🌦 +13°C

That's great for simple GET requests, but suppose we want to send some data in the body of a POST request, for example. Here's how that works:

script.Echo(data).Post(URL).Stdout()

If we need to customise the HTTP behaviour in some way, such as using our own HTTP client, we can do that:

script.NewPipe().WithHTTPClient(&http.Client{
	Timeout: 10 * time.Second,
}).Get("https://example.com").Stdout()

Or maybe we need to set some custom header on the request. No problem. We can just create the request in the usual way, and set it up however we want. Then we pass it to Do, which will actually perform the request:

req, err := http.NewRequest(http.MethodGet, "http://example.com", nil)
req.Header.Add("Authorization", "Bearer "+token)
script.Do(req).Stdout()

The HTTP server could return some non-okay response, though; for example, “404 Not Found”. So what happens then?

In general, when any pipe stage (such as Do) encounters an error, it produces no output to subsequent stages. And script treats HTTP response status codes outside the range 200-299 as errors. So the answer for the previous example is that we just won't see any output from this program if the server returns an error response.

Instead, the pipe “remembers” any error that occurs, and we can retrieve it later by calling its Error method, or by using a sink method such as String, which returns an error value along with the result.

Stdout also returns an error, plus the number of bytes successfully written (which we don't care about for this particular case). So we can check that error, which is always a good idea in Go:

_, err := script.Do(req).Stdout()
if err != nil {
	log.Fatal(err)
}

If, as is common, the data we get from an HTTP request is in JSON format, we can use JQ queries to interrogate it:

data, err := script.Do(req).JQ(".[0] | {message: .commit.message, name: .commit.committer.name}").String()

We can also run external programs and get their output:

script.Exec("ping 127.0.0.1").Stdout()

Note that Exec runs the command concurrently: it doesn't wait for the command to complete before returning any output. That's good, because this ping command will run forever (or until we get bored).

Instead, when we read from the pipe using Stdout, we see each line of output as it's produced:

PING 127.0.0.1 (127.0.0.1): 56 data bytes
64 bytes from 127.0.0.1: icmp_seq=0 ttl=64 time=0.056 ms
64 bytes from 127.0.0.1: icmp_seq=1 ttl=64 time=0.054 ms
...

In the ping example, we knew the exact arguments we wanted to send the command, and we just needed to run it once. But what if we don't know the arguments yet? We might get them from the user, for example.

We might like to be able to run the external command repeatedly, each time passing it the next line of data from the pipe as an argument. No worries:

script.Args().ExecForEach("ping -c 1 {{.}}").Stdout()

That {{.}} is standard Go template syntax; it'll substitute each line of data from the pipe into the command line before it's executed. You can write as fancy a Go template expression as you want here (but this simple example probably covers most use cases).

If there isn't a built-in operation that does what we want, we can just write our own, using Filter:

script.Echo("hello world").Filter(func(r io.Reader, w io.Writer) error {
	n, err := io.Copy(w, r)
	fmt.Fprintf(w, "\nfiltered %d bytes\n", n)
	return err
}).Stdout()
// Output:
// hello world
// filtered 11 bytes

The func we supply to Filter takes just two parameters: a reader to read from, and a writer to write to. The reader reads the previous stages of the pipe, as you might expect, and anything written to the writer goes to the next stage of the pipe.

If our func returns some error, then, just as with the Do example, the pipe's error status is set, and subsequent stages become a no-op.

Filters run concurrently, so the pipeline can start producing output before the input has been fully read, as it did in the ping example. In fact, most built-in pipe methods, including Exec, are implemented using Filter.

If we want to scan input line by line, we could do that with a Filter function that creates a bufio.Scanner on its input, but we don't need to:

script.Echo("a\nb\nc").FilterScan(func(line string, w io.Writer) {
	fmt.Fprintf(w, "scanned line: %q\n", line)
}).Stdout()
// Output:
// scanned line: "a"
// scanned line: "b"
// scanned line: "c"

And there's more. Much more. Read the docs for full details, and more examples.

A realistic use case

Let's use script to write a program that system administrators might actually need. One thing I often find myself doing is counting the most frequent visitors to a website over a given period of time. Given an Apache log in the Common Log Format like this:

212.205.21.11 - - [30/Jun/2019:17:06:15 +0000] "GET / HTTP/1.1" 200 2028 "https://example.com/" "Mozilla/5.0 (Linux; Android 8.0.0; FIG-LX1 Build/HUAWEIFIG-LX1) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/64.0.3282.156 Mobile Safari/537.36"

we would like to extract the visitor's IP address (the first column in the logfile), and count the number of times this IP address occurs in the file. Finally, we might like to list the top 10 visitors by frequency. In a shell script we might do something like:

cut -d' ' -f 1 access.log |sort |uniq -c |sort -rn |head

There's a lot going on there, and it's pleasing to find that the equivalent script program is quite brief:

package main

import (
	"github.com/bitfield/script"
)

func main() {
	script.Stdin().Column(1).Freq().First(10).Stdout()
}

Let's try it out with some sample data:

16 176.182.2.191
 7 212.205.21.11
 1 190.253.121.1
 1 90.53.111.17

Documentation

See pkg.go.dev for the full documentation, or read on for a summary.

Sources

These are functions that create a pipe with a given contents:

Source Contents
Args command-line arguments
Do HTTP response
Echo a string
Exec command output
File file contents
FindFiles recursive file listing
Get HTTP response
IfExists do something only if some file exists
ListFiles file listing (including wildcards)
Post HTTP response
Slice slice elements, one per line
Stdin standard input

Filters

Filters are methods on an existing pipe that also return a pipe, allowing you to chain filters indefinitely. Each filter transforms its input as follows:

Filter Results
Basename removes leading path components from each line, leaving only the filename
Column Nth column of input
Concat contents of multiple files
Dirname removes filename from each line, leaving only leading path components
Do response to supplied HTTP request
Echo all input replaced by given string
Exec filtered through external command
ExecForEach execute given command template for each line of input
Filter user-supplied function filtering a reader to a writer
FilterLine user-supplied function filtering each line to a string
FilterScan user-supplied function filtering each line to a writer
First first N lines of input
Freq frequency count of unique input lines, most frequent first
Get response to HTTP GET on supplied URL
Join replace all newlines with spaces
JQ result of jq query
Last last N lines of input
Match lines matching given string
MatchRegexp lines matching given regexp
Post response to HTTP POST on supplied URL
Reject lines not matching given string
RejectRegexp lines not matching given regexp
Replace matching text replaced with given string
ReplaceRegexp matching text replaced with given string
SHA256Sums SHA-256 hashes of each listed file
Tee input copied to supplied writers

Note that filters run concurrently, rather than producing nothing until each stage has fully read its input. This is convenient for executing long-running commands, for example. If you do need to wait for the pipeline to complete, call Wait.

Sinks

Sinks are methods that return some data from a pipe, ending the pipeline and extracting its full contents in a specified way:

Sink Destination Results
AppendFile appended to file, creating if it doesn't exist bytes written, error
Bytes data as []byte, error
CountLines number of lines, error
Read given []byte bytes read, error
SHA256Sum SHA-256 hash, error
Slice data as []string, error
Stdout standard output bytes written, error
String data as string, error
Wait none
WriteFile specified file, truncating if it exists bytes written, error

What's new

Version New
v0.22.0 Tee, WithStderr
v0.21.0 HTTP support: Do, Get, Post
v0.20.0 JQ

Contributing

See the contributor's guide for some helpful tips if you'd like to contribute to the script project.

Links

Gopher image by MariaLetta

Contributors

antoin-m, bartdeboer, bitfield, frits-v, gty929, jessp01, joumanae, mainawycliffe, schow94, smilingnavern, stuartherbert, thiagonache, thomaspoignant, toffaletti, udithprabhu


Issues

FEATURE: plug-in different Regex engines

Thank you for publishing this useful tool @bitfield.

Would it be possible to add support for plugging different regex engines into script? Go's regexp package does not support PCRE, and without that there is no look-ahead or look-behind functionality, without which some problems cannot be solved.

It would be great if script allowed for 1 or 2. Ideally both, otherwise maybe only 1 due to superior performance.

Errors not working the way I thought

So if I have a command that has an obvious error:

func main() {
	p := script.Exec("cat /etc/shels")
	Str, err := p.String()
	if err != nil {
		fmt.Println(Str, err)
	} else {
		fmt.Println(Str)
	}
}

I get the following output:

exit status 1

If I change the first line to:

p := script.Exec("cat /etc/shells")

I get the output from cat.

I looked at the code, and when the exec is done it uses exec.CombinedOutput, so I am wondering why I don't see the error from cat?

Create subpackage `unix`

Based on the discussion in #7 I would like to suggest a subpackage unix. The suggestion is to have functions in this package with the same names as the respective Unix commands. This would allow the API of script to stay clean and high-level.
At the same time, this would provide an easy start for people who are familiar with the Unix commands.
Maybe some of the high-level commands in the script package would even decide to use the unix low-level commands for their implementation.

How to use the Pipes for a cli cmd confirmation

I'm new to this package, and I'm trying to use it for the underlying mechanics of a testing framework for a CLI I am building.

Currently I have this function.

func CreatePool(fromUser, sourceChain, symbol, ticker, nativeAmount, externalAmount string) {
	cmd := fmt.Sprintf("sifnodecli tx clp create-pool --from %v --sourceChain %v --symbol %v --ticker %v --nativeAmount %v --externalAmount %v", fromUser, sourceChain, symbol, ticker, nativeAmount, externalAmount)
	p := script.Exec(cmd)
	p.SetError(nil)
	output, err := p.String()
	if err != nil {
		Fail(err.Error())
	}
	fmt.Println(output)
}

However, the issue I am having with this cmd is that it then requires an input confirmation before executing, and the above cancels the cmd, i.e.

confirm transaction before signing and broadcasting [y/N]: 

Are you able to tell me how I would also automatically pass a Y to said cmd?

Thanks

calc md5hash of file

There is always a use case for calculating the MD5 hash of a file and comparing it with others. Having a one-liner for this would definitely help several use cases.

sinks.SHA256Sum puts the whole file in-memory

Hi 👋

It seems that sinks.SHA256Sum() reads the whole file into memory, which may be an issue when hashing large files.
For example when adding a 1GB file to the test:

root@ef78500374de:/workspaces/script# go test
signal: killed
FAIL    github.com/bitfield/script      16.404s

[Feature Request] stderr

Yo!
I'm gonna start with an easy one because my golang was never great to begin with, and now I'm rusty :-D

I think you really need stderr support.
There are a lot of use cases: differentiating between regular output and error output, sending logs to stderr vs data output on stdout, etc.
It's also a very easy addition (again, rusty Go).

Unless you have a strong opposition to this, I'll start working on a PR (still reading through your contributor guidelines).

Filters sort and uniq

First of all, this is a really interesting package, thanks for the effort.

In my scripts I often use the shell commands sort and uniq, so I feel these two would be good additions to this package.

File/directory operations

It's common to need to do things like make directories, make symlinks, rename files, and so on in shell scripts. Even though it doesn't quite fit in with the pipe metaphor, it might still be useful to have convenience versions of these in script.

(Rename is an interesting case, as a direct os.Rename isn't allowed in overlay filesystems like Docker containers. You have to do a copy followed by delete, which is by no means straightforward. A Docker-safe rename tool would be useful.)

Add a jq filter

Modern shell scripts interact often with a REST API or a command line program producing json (kubectl for example). Slicing and dicing the data returned by those APIs is often done with https://stedolan.github.io/jq/.

https://github.com/itchyny/gojq provides a go native implementation of jq.

I propose the following filter:

func (p *Pipe) JQ(query string) *Pipe

An example script:

script.Echo(`{"foo": 128}`).JQ(".foo").Stdout()

should return "128" on stdout.

Or to show the IP addresses of the local interface lo a shell user will write:

ip -j a show  | jq '.[] | select(.ifname=="lo") | .addr_info[].local'

With an jq filter in script we can write:

script.Exec("ip -j a show").JQ(`.[] | select(.ifname=="lo") | .addr_info[].local`).Stdout()

The same slicing and dicing of data can be done in pure go code, but the jq language provides a well documented and known DSL for this problem.

Sudo powers

It's very common to need sudo rights in scripts. We might consider a SudoExec method, for example, or perhaps just a Sudo filter which obtains sudo permissions, if possible, for all subsequent pipe operations.

Provide example how to extend script with own package

In #12 I suggested adding a separate subpackage called unix with an API similar to the Unix coreutils commands. This issue was closed with the suggestion to implement this in its own package/library, separate from this package.

So I tried to figure out what would be the best way to do this, and therefore I wanted to reach out to you (@bitfield and all other followers of this package) to discuss how this could be done.

My goal is to leverage this package, especially the Pipe and be able to combine (in an as nice as possible way) functions from this package (e.g. script.Freq()) with functions from an other package (e.g. unix.Head()).

I looked at the example in the README, but this does not work in a separate package, because you cannot add new methods to a type defined in another package.

So the question is, what is the best way to approach this?

Have the README link to godoc.org

I'd like the godoc to be the primary documentation, and rather than explain each method in the README, I'll combine the best of both sets into the doc comments, and have the README just link to the godoc.org page. I also want to make the examples executable examples, if possible.

awk filter (split with column outputs), maybe later output to a charts library

Hi, this project is great. I've got an idea:

Split input into column outputs, with an awk-like mechanism and regexp support?

It could let you set the FS input delimiter and the OFS output delimiter, and specify which columns to output.

This might later allow sending output directly to a charts library (to generate a graph, maybe), or to CSV.

Here are some chart libraries:

  1. https://github.com/vdobler/chart
  2. https://github.com/chenjiandongx/go-echarts
  3. https://github.com/gizak/termui

I used to write this https://github.com/chinglinwen/k8s/blob/master/testing/httpload/funcs, which I think represent some usecases.

Provide HTTP sink and source

In today's scripting we often have to interact with HTTP APIs. Many APIs provide native Go SDKs, but there are lesser-known APIs around without a proper SDK. We should provide curl-like HTTP capabilities in the Go scripting environment. Rolling our own HTTP client in Go is not that hard, but it requires a lot of repetitive error checking. Today I wrote such a small script with github.com/bitfield/script and I missed built-in HTTP capabilities a lot.

I have seen #3. That PR only implements a GET function. Interacting with even the simplest REST APIs requires other methods like POST or PUT. Many APIs require us to set custom headers.

Therefore I propose the following sources:

// HTTPClient is an interface that allows the user to plug alternative HTTP clients into the source.
// The HTTPClient interface is a subset of the methods provided by http.Client.
// We use our own interface with a minimal surface to make it easy to implement customized clients.
// Customized clients are required to implement features like OAuth2 or TLS client certificate authentication.
type HTTPClient interface {
	Do(r *http.Request) (*http.Response, error)
}

// HTTP executes the given HTTP request with the default HTTP client from the http package. The response
// is processed by `process`. If process is nil, the default process function copies the body from the response to the pipe.
func HTTP(req *http.Request, process func(*http.Response) (io.Reader, error)) *Pipe {
	return HTTPWithClient(http.DefaultClient, req, process)
}

// HTTPWithClient executes the given HTTP request with the given HTTPClient. The response
// is processed by `process`. If process is nil, the default process function copies the body from the response to the pipe.
func HTTPWithClient(client HTTPClient, req *http.Request, process func(*http.Response) (io.Reader, error)) *Pipe {
}

And the following sink:

// HTTPRequest creates a new HTTP request with the body set to the content of the pipe.
func (p *Pipe) HTTPRequest(method string, url string) (*http.Request, error) 

This allows us to simply post the content of files to remote URL like this:

func main() {
	req, err := script.Args().Concat().HTTPRequest(http.MethodPost, "https://httpbin.org/post")
	if err != nil {
		log.Fatal(err)
	}
	script.HTTP(req, nil).Stdout()
}

This is the equivalent of posting the contents of files via curl to a POST endpoint.

I explored this design in the PR #72.
I'm open for feedback.

Execute with an argument list

What is Exec doing under the hood? Is it invoking a shell, or is it parsing the string itself? And how can we use an explicit argument list and not a string?

Generally, I think a major shortcoming of the README is that it does not clarify the security considerations of using this library. Shell is very insecure, and one of the major reasons not to use it is to have better security.

Update: I now see this is kind of a duplicate: #32

Run 3 bat with error

I have a batch file called runall.bat; this file will call 3 batches:

Call 1.bat (this bat file will download FTP files)
Call 2.bat
Call 3.bat

I need to know how to prevent calling 2.bat if 1.bat gets an error, and stop calling 3.bat if 2.bat gets an error.

Finally, generate a log file showing the progress made.

EachLine only executed after Exec exits?

It appears that EachLine() is only called after a pipe is closed. This prevents using script to write daemon scripts.

If there's a way to perform the equivalent of:

dbus-monitor --system "interface=org.freedesktop.DBus.Properties" | \
    while read -r line; do
        printf "DBus event: %s\n" "$line"
    done

would it be possible to get an example? If not, can we change the behavior of EachLine() to process each line as it is received, or add an EachLineForever() function?

[Feature Request] Tee

It would be cool to have a way to write a buffer to multiple sinks, in cases where you want to intercept parts of the incoming data but still pipe the rest for other matches...

script.Stdin().Match("error").Tee(script.File("all_errors")).Match("foo").Stdout()
cat logs | grep error | tee all_errors | grep foo 

In this scenario Tee writes to the File whilst still piping the content to the next item on the pipe Match that eventually pipes to stdout or any other sink.

@bitfield WDYT?

Something like:

func (p *Pipe) Tee(w io.Writer) *Pipe { ... }
// or
func (p *Pipe) Tee(other *Pipe) *Pipe { ... }

Nice way to handle env variable?

Some method to set environment variables for a single command, instead of this:

os.Setenv("XB_AUTHOR_DATE", dateStr)
os.Setenv("XB_COMMITTER_DATE", dateStr)

Maybe in old shell way:

XB_AUTHOR_DATE="2021-04-05" XB_COMMITTER_DATE="2021-04-05" xb_push xxxx
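Until script grows such a method, one workaround (a sketch using only the standard library, not the script API) is plain os/exec, which supports a per-command environment without mutating the process's own:

```go
package main

import (
	"fmt"
	"log"
	"os"
	"os/exec"
)

// runWithEnv runs a command with extra environment variables set for
// that command only; the parent process's environment is untouched.
func runWithEnv(extra []string, name string, args ...string) (string, error) {
	cmd := exec.Command(name, args...)
	cmd.Env = append(os.Environ(), extra...)
	out, err := cmd.Output()
	return string(out), err
}

func main() {
	out, err := runWithEnv(
		[]string{"XB_AUTHOR_DATE=2021-04-05"},
		"sh", "-c", "echo $XB_AUTHOR_DATE",
	)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Print(out) // 2021-04-05
}
```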

Get user input

It's common when writing scripts that install or configure things to need input from the user (at a minimum, something like 'Press Enter to continue'; at a maximum, to be able to prompt the user for input, with an optional default value, and return that value).

Let's use this issue to design how that would look, and I invite suggestions!

ListFiles() does not detect file?

Case

I have two files in separate folders and one empty folder. Here's output of tree

.
├── a1
│   └── 2.txt
├── a\1
│   └── 1.txt
└── empty

3 directories, 2 files

Code

package main

import (
	"fmt"
	"github.com/bitfield/script"
	"os"
)

func main() {
	s, _ := script.ListFiles(os.Args[len(os.Args)-1]).String()
	fmt.Print(s)
}

Compiled binary list_files_test stored in $PATH

Test

  1. $ list_files_test a1
    Output: a1/2.txt
    Expected: a1/2.txt
  2. $ list_files_test a\\1
    Output: a1
    Expected: a\1/1.txt (or a\\1/1.txt)
    Maybe I misunderstood the expected behavior?

Source of the problem

ListFiles() detects this input as a glob via strings.ContainsAny(path, "[]^*?\\{}!"). However, that is not actually a glob.
I don't know whether that is expected or not.

Additional

This issue is related and important to #55.
Let's assume I have input for FindFiles such as /some/abs/path/a\\1/*.txt (assuming the same case as I provided at the beginning). Given this input, I would expect the library to interpret the folder a\\1 as the root for filepath.Walk. Then this walk should find all txt files under the folder a\\1. But with the same way of detecting globs as in ListFiles, the function will take the folder path as root and then also find all txt files under a folder like a1 from the provided test case. However, these files are not of interest to us. (Of course, assuming we want to check that all directories match the glob as well, not just the file.)
Yes, the program could produce the expected output if I checked the input string in main, but this input is just to illustrate the problem related to #55.

Workaround

Change way of glob detecting.

"Join" customizable separator

Hey, just came across this & thought of a use-case when dealing with converting certs/pems private keys/etc.:

Converting this:

-----BEGIN RSA PRIVATE KEY-----
MIICXQIBAAKBgQC+qUCd8K2bBq0l112W4jcjH0kmByIdpthnSCT/jf/XTQprS43B
dVa6H9wU8LUfZQ5w9Ecz+qRmcxtixMVCYSlAjZXLy74QhYHQdHZbGRhhZLIaN6UF
wti2V/CGsR5lJlRIi1GBDjxdUv3RVUH1ILfmVQT6ujpM52ofEsKnferZTwIDAQAB
AoGAGV5LyrgLYWUyBKbzPPA8hd/Ty8uHLorUoGlpAtfSAsOtbzlOUz9ZmspCbkbY
0qSPl1fpYXEoDrmiGzIzTPHAmytMp3kcw9G/hVfg49x1vWVYkGbrKlq0QvoO6jLp
LOvMOz7YD2DNmJ9Dqef8s/ziYDO3fUkQihdKJ0hj2gX9IIECQQDOISTJa0kRUcG6
/2czCYqLQpv65lkofd2h+tmUgWKVBo4+aQUlsa4ZSagxsFmvF28kYWXd98fVMfr3
ER8avbBDAkEA7MoPlqHIb9y1QnD4rHg87ITuhEXq524sXwFLWvzWvLgmAxB2ey9H
+mZjCvL1WXvpn7zqzcMhurnTtefzMh14BQJBAI8aH2neG5n0gmSKD2E1TIOluJgU
9uzPhOCBQDCDKqd/J51YV4R1uAJCSoxEe968jCJbo9bXwFnYGv0PW+K6sfUCQQDj
KRW7VImNhxb9HpPyIYeRABYyH0EztKYsnnlEWLtJYQBWgDyqALn0prTtlBd8OTvv
WrWHoGODVzKbmGHe+hZhAkAbobPF2robFYv4s40gzkFOP7kQb/xGTZiCSkIhugch
QrDwWNmwR3CYdrFCZWh3VJQhUCSQ3dE7PZCxzybf9kyH
-----END RSA PRIVATE KEY-----

to this:

-----BEGIN RSA PRIVATE KEY-----\nMIICXQIBAAKBgQC+qUCd8K2bBq0l112W4jcjH0kmByIdpthnSCT/jf/XTQprS43B\ndVa6H9wU8LUfZQ5w9Ecz+qRmcxtixMVCYSlAjZXLy74QhYHQdHZbGRhhZLIaN6UF\nwti2V/CGsR5lJlRIi1GBDjxdUv3RVUH1ILfmVQT6ujpM52ofEsKnferZTwIDAQAB\nAoGAGV5LyrgLYWUyBKbzPPA8hd/Ty8uHLorUoGlpAtfSAsOtbzlOUz9ZmspCbkbY\n0qSPl1fpYXEoDrmiGzIzTPHAmytMp3kcw9G/hVfg49x1vWVYkGbrKlq0QvoO6jLp\nLOvMOz7YD2DNmJ9Dqef8s/ziYDO3fUkQihdKJ0hj2gX9IIECQQDOISTJa0kRUcG6\n/2czCYqLQpv65lkofd2h+tmUgWKVBo4+aQUlsa4ZSagxsFmvF28kYWXd98fVMfr3\nER8avbBDAkEA7MoPlqHIb9y1QnD4rHg87ITuhEXq524sXwFLWvzWvLgmAxB2ey9H\n+mZjCvL1WXvpn7zqzcMhurnTtefzMh14BQJBAI8aH2neG5n0gmSKD2E1TIOluJgU\n9uzPhOCBQDCDKqd/J51YV4R1uAJCSoxEe968jCJbo9bXwFnYGv0PW+K6sfUCQQDj\nKRW7VImNhxb9HpPyIYeRABYyH0EztKYsnnlEWLtJYQBWgDyqALn0prTtlBd8OTvv\nWrWHoGODVzKbmGHe+hZhAkAbobPF2robFYv4s40gzkFOP7kQb/xGTZiCSkIhugch\nQrDwWNmwR3CYdrFCZWh3VJQhUCSQ3dE7PZCxzybf9kyH\n-----END RSA PRIVATE KEY-----

and back again.


Converting the "flattened" single-line key can be done like so:

script.File("example.flattened.pem").ReplaceRegexp(regexp.MustCompile(`\\n`), "\n").Stdout()

but converting the "unflattened" multi-line key into the single-line one is a little tricky (or I'm missing something clear, which is definitely possible)

  • We can't use ReplaceRegexp to convert \n to "", because ReplaceRegexp goes line-by-line while performing the regex and writes an \n at the end of each line again. So regex in general is effectively out
  • We can't use join because although it'll join together the lines, it converts them to spaces instead of a customizable separator key, e.g. \n:

for example:

script.File("example.flattenme.pem").Join().Stdout()

will produce

-----BEGIN RSA PRIVATE KEY----- MIICXQIBAAKBgQC+qUCd8K2bBq0l112W4jcjH0kmByIdpthnSCT/jf/XTQprS43B dVa6H9wU8LUfZQ5w9Ecz+qRmcxtixMVCYSlAjZXLy74QhYHQdHZbGRhhZLIaN6UF wti2V/CGsR5lJlRIi1GBDjxdUv3RVUH1ILfmVQT6ujpM52ofEsKnferZTwIDAQAB AoGAGV5LyrgLYWUyBKbzPPA8hd/Ty8uHLorUoGlpAtfSAsOtbzlOUz9ZmspCbkbY 0qSPl1fpYXEoDrmiGzIzTPHAmytMp3kcw9G/hVfg49x1vWVYkGbrKlq0QvoO6jLp LOvMOz7YD2DNmJ9Dqef8s/ziYDO3fUkQihdKJ0hj2gX9IIECQQDOISTJa0kRUcG6 /2czCYqLQpv65lkofd2h+tmUgWKVBo4+aQUlsa4ZSagxsFmvF28kYWXd98fVMfr3 ER8avbBDAkEA7MoPlqHIb9y1QnD4rHg87ITuhEXq524sXwFLWvzWvLgmAxB2ey9H +mZjCvL1WXvpn7zqzcMhurnTtefzMh14BQJBAI8aH2neG5n0gmSKD2E1TIOluJgU 9uzPhOCBQDCDKqd/J51YV4R1uAJCSoxEe968jCJbo9bXwFnYGv0PW+K6sfUCQQDj KRW7VImNhxb9HpPyIYeRABYyH0EztKYsnnlEWLtJYQBWgDyqALn0prTtlBd8OTvv WrWHoGODVzKbmGHe+hZhAkAbobPF2robFYv4s40gzkFOP7kQb/xGTZiCSkIhugch QrDwWNmwR3CYdrFCZWh3VJQhUCSQ3dE7PZCxzybf9kyH -----END RSA PRIVATE KEY-----% 

which is close! but now we have spaces to deal with. And regex replacing spaces with \n after the join pipe would catch the ---BEGIN RSA--- directives, too

I suppose EachLine could be used (though I haven't goofed with it yet) to write a custom join with separator...?

Alternatively, a JoinWithSeparator(string separator) might be beneficial to join lines together using a custom separator, where instead of only a space it joins on a literal \n, or "," etc

Thoughts / feedback welcome, but this is just a thought that came from goofing around, so if I'm missing something please let me know.

Thoughts?
And of course feel free to close whenever 👍 thanks!

EDIT: Just tried with EachLine

	q := p.EachLine(func(line string, out *strings.Builder) {
		out.WriteString(line + "\\n")
	})

and got

-----BEGIN RSA PRIVATE KEY-----\nMIICXQIBAAKBgQC+qUCd8K2bBq0l112W4jcjH0kmByIdpthnSCT/jf/XTQprS43B\ndVa6H9wU8LUfZQ5w9Ecz+qRmcxtixMVCYSlAjZXLy74QhYHQdHZbGRhhZLIaN6UF\nwti2V/CGsR5lJlRIi1GBDjxdUv3RVUH1ILfmVQT6ujpM52ofEsKnferZTwIDAQAB\nAoGAGV5LyrgLYWUyBKbzPPA8hd/Ty8uHLorUoGlpAtfSAsOtbzlOUz9ZmspCbkbY\n0qSPl1fpYXEoDrmiGzIzTPHAmytMp3kcw9G/hVfg49x1vWVYkGbrKlq0QvoO6jLp\nLOvMOz7YD2DNmJ9Dqef8s/ziYDO3fUkQihdKJ0hj2gX9IIECQQDOISTJa0kRUcG6\n/2czCYqLQpv65lkofd2h+tmUgWKVBo4+aQUlsa4ZSagxsFmvF28kYWXd98fVMfr3\nER8avbBDAkEA7MoPlqHIb9y1QnD4rHg87ITuhEXq524sXwFLWvzWvLgmAxB2ey9H\n+mZjCvL1WXvpn7zqzcMhurnTtefzMh14BQJBAI8aH2neG5n0gmSKD2E1TIOluJgU\n9uzPhOCBQDCDKqd/J51YV4R1uAJCSoxEe968jCJbo9bXwFnYGv0PW+K6sfUCQQDj\nKRW7VImNhxb9HpPyIYeRABYyH0EztKYsnnlEWLtJYQBWgDyqALn0prTtlBd8OTvv\nWrWHoGODVzKbmGHe+hZhAkAbobPF2robFYv4s40gzkFOP7kQb/xGTZiCSkIhugch\nQrDwWNmwR3CYdrFCZWh3VJQhUCSQ3dE7PZCxzybf9kyH\n-----END RSA PRIVATE KEY-----\n

Yeah, I suppose that'll work. A JoinWith("..") filter of some kind might be shorter, but EachLine works fine too.

WriteFile bug

The docs say that WriteFile() is equivalent to >, but when I use WriteFile() on a file which already contains content, it does not truncate the file; it writes at the beginning, overwriting only the bytes I write.
cat file.txt
result: ttttttttttt

script.Echo("zzz").WriteFile("file.txt")
result: zzztttttttt

I think we have to use O_TRUNC here

script/sinks.go

Line 112 in 32cf7a7

return p.writeOrAppendFile(fileName, os.O_RDWR|os.O_CREATE)

tested in windows 7 with golang v1.16.2

Directory Manipulations

As far as I can see, there are no directory-manipulation functions. I understand that the API is mostly about pipelines and one-liners, but I can assure you that directory handling comes in handy from time to time and would be useful for anyone who uses script.

I would like to suggest a PR that adds two essential Unix-equivalent functions:

  1. cd - change directory
  2. pwd - print working directory

List files inside a directory and its subdirectory

We should introduce a method that lists all files in a directory and its subdirectories and returns a pipe, just like the ListFiles method. The method should probably be able to filter hidden files using a showHidden argument. I have yet to come up with a decent name for the method, so I am open to suggestions.

func FindFiles(path string, showHidden bool) *Pipe {
  // content
}

Improve stdout/stdin testing

Mat Ryer's idea of testing main() by having main() call run(os.Stdin, os.Stdout, os.Args) seems cleaner than monkey-patching os.Stdin and os.Stdout.

FindFiles and ListFiles might want to trim newlines from the path

Hi @bitfield , thanks for this awesome package. I just ran into an issue that is surely an easy thing to improve.

Issue: the output of Pipe.String() may include a newline. FindFiles and ListFiles cannot find a path that ends in a newline. Hence these two methods cannot consume the output of Pipe.String() without prior removal of newlines.

Suggestion: Have FindFiles and ListFiles trim trailing newlines from the input path.

Example:

package main

import (
	"log"
	"github.com/bitfield/script"
)

func main() {
	r, _ := script.Args().String()
	log.Printf("args: [%s]\n", r)
	p := script.ListFiles(r)
	_, err := p.Stdout()
	log.Println(err)
}

Call:

go run main.go .

Output:

2020/09/08 13:57:58 args: [.
]
2020/09/08 13:57:58 stat .
: no such file or directory

Severity: minor annoyance but can cost time to track down.

Closing of the pipe on String or Bytes?

Hello,

First, thanks for this project. Really like the API and design!

Rather than report an issue, I'd like to ask a question: I am not sure I understand the closing strategy for a Pipe when the sink is String() or Bytes().

For String() (https://github.com/bitfield/script/blob/master/sinks.go#L65), the documentation says:

String returns the contents of the Pipe as a string, or an error, and closes the pipe after reading

However, I cannot see any reference to a close method in the code. It only calls Bytes(), which itself does not seem to close the reader. Am I missing something? Or is it a documentation mistake?

I can see in a previous commit that the close was deleted. Was that on purpose?

062e038#diff-1d716f4c8f87036767ecdd0412ad974fL30

Thanks for the clarification!
Loric

HTTP

There has been some valuable proposal work and prototyping on HTTP support for script, notably in #3, #73 , and #72. However, I haven't yet seen a design that looks obvious, and that's how I want it to be: so simple that when you see it, you think "Of course! That's the only way it could work."

Opening this yet-another-issue to track a new approach to HTTP support: not "what could we reasonably include?", but "what could we not reasonably leave out?"

Stream pipes

I really like the idea of this library, and it could make a lot of scripting tasks easy!

The current implementation reads all the output of each command into memory, and then passes it to the next command, and so forth.

It would be nice to avoid this when it is not necessary: let each reader read as much as needed from the previous stage, and output whatever is necessary to the next stage. This is also how pipes in bash work.

Real-time output to stdout in a long-running command

e.g. for command like

script.Exec("ping -c 100 www.google.com").Stdout()

I expect the output to appear on stdout line by line, just like in a real Bash script, instead of all at once after the run.

Do you have tricks to get this done? or any plan/feature to implement this?
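One workaround today is to drop down to os/exec and scan the command's stdout as it arrives. This bypasses script entirely, and streamLines is just a hypothetical helper name:

```go
package main

import (
	"bufio"
	"fmt"
	"os/exec"
)

// streamLines runs a command and returns its stdout lines,
// printing each one as soon as it arrives rather than after
// the command exits. Workaround sketch using os/exec directly;
// not part of the script package.
func streamLines(name string, args ...string) ([]string, error) {
	cmd := exec.Command(name, args...)
	out, err := cmd.StdoutPipe()
	if err != nil {
		return nil, err
	}
	if err := cmd.Start(); err != nil {
		return nil, err
	}
	var lines []string
	scanner := bufio.NewScanner(out)
	for scanner.Scan() {
		line := scanner.Text()
		fmt.Println(line) // appears in real time, like bash
		lines = append(lines, line)
	}
	return lines, cmd.Wait()
}

func main() {
	// With ping, this would print one line per reply as it arrives:
	//   streamLines("ping", "-c", "100", "www.google.com")
	streamLines("echo", "hello")
}
```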

script.Exec().String() does not return the command output when the execution fails

This possibly happens only in some cases, where the exit code is some special number, but I can reproduce it with a script like the following:

artificially lock the git repository

touch .git/index.lock

now try to run something like:

var out, _ = script.Exec("git checkout master").String()
fmt.Println(out) // will be empty

git checkout master would instead have printed something like:

fatal: Unable to create '<path>/go/.git/index.lock': File exists.

Another git process seems to be running in this repository, e.g.
an editor opened by 'git commit'. Please make sure all processes
are terminated then try again. If it still fails, a git process
may have crashed in this repository earlier:
remove the file manually to continue.

but that output is not easily accessible. A workaround I am using is to ignore the error status of the pipeline:

var p = script.Exec("git checkout master")
var data, err = io.ReadAll(p.Reader)
var out = string(data)
fmt.Println(out) // will contain the full output

I believe it should be enough to change https://github.com/bitfield/script/blob/master/sinks.go#L69 so that it returns whatever data it was able to read. I suspect the same (or similar) change should be made in Bytes() as well.

I'd be happy to try and provide a PR :-)

Add SSH example

Many sysadmin tasks involve executing commands via SSH. Add an example program that does this, and identify any useful methods that could be added to script to make this easier.
