Can a shell script set environment variables of the calling shell?
I'm trying to write a shell script that, when run, will set some environment variables that will stay set in the caller's shell.
setenv FOO foo
in csh/tcsh, or
export FOO=foo
in sh/bash only set it during the script's execution.
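A quick demonstration of the problem (FOO here is just an example name):

```shell
# A child shell's exports vanish when the child exits; FOO is only an example name.
sh -c 'export FOO=foo; echo "inside the script: $FOO"'   # prints "inside the script: foo"
echo "back in the caller: ${FOO:-<unset>}"               # prints "back in the caller: <unset>"
```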
I already know that
source myscript
will run the commands of the script rather than launching a new shell, and that can result in setting the "caller's" environment.
But here's the rub:
I want this script to be callable from either bash or csh. In other words, I want users of either shell to be able to run my script and have their shell's environment changed. So 'source' won't work for me, since a user running csh can't source a bash script, and a user running bash can't source a csh script.
Is there any reasonable solution that doesn't involve having to write and maintain TWO versions of the script?
Tags: bash, shell, csh, tcsh
@eusoubrasileiro that's not working (at least on osx), as 'export' is interpreted by bash as a file name.
– drevicko
Jan 12 '16 at 11:48
see @Humberto Romero's answer stackoverflow.com/a/28489593/881375 in this thread
– tomasb
Jul 14 '16 at 13:02
The title of this Q should be changed - the main differentiation is using two different shells, the title does not reflect that.
– yzorg
Jan 18 at 14:37
asked Jan 30 '09 at 18:50 by Larry Gritz; edited Aug 31 '18 at 16:20 by codeforester
21 Answers
Your shell process has a copy of the parent's environment and no access to the parent process's environment whatsoever. When your shell process terminates, any changes you've made to its environment are lost. Sourcing a script file is the most commonly used method for configuring a shell environment; you may just want to bite the bullet and maintain one for each of the two flavors of shell.
– converter42 (answered Jan 30 '09 at 19:06; edited Jun 5 '14 at 14:43)
This answer is not correct, or at least very misleading - this can be done by using the dot space script notation described in @Humberto's answer
– Kris Randall
Jul 16 '18 at 6:44
@KrisRandall oh, the "dot space" script notation. You mean the dot operator that is synonymous with the source function I mentioned?
– converter42
Jul 17 '18 at 15:33
Thank you for the correction. I am appropriately embarrassed. I was looking for a quick answer for getting env vars to stay in my shell - not the same as the OP, which you have answered very well.
– Kris Randall
Jul 18 '18 at 19:39
Use the "dot space script" calling syntax. For example, here's how to do it using the full path to a script:
. /path/to/set_env_vars.sh
And here's how to do it if you're in the same directory as the script:
. set_env_vars.sh
These execute the script under the current shell instead of loading another one (which is what would happen if you did ./set_env_vars.sh
). Because the script runs in the same shell, the environment variables you set will still be available when it exits.
This is the same thing as calling source set_env_vars.sh
, but it's shorter to type and might work in some places where source
doesn't.
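A minimal sketch of what a sourceable set_env_vars.sh might contain (the variable names and path are invented for illustration):

```shell
# Create an illustrative set_env_vars.sh (MY_VAR and MY_PATH are made-up names):
cat > /tmp/set_env_vars.sh <<'EOF'
export MY_VAR=hello
export MY_PATH=/opt/example/bin
EOF

# Dot-space runs it in the current shell, so the exports survive:
. /tmp/set_env_vars.sh
echo "$MY_VAR"    # prints "hello"
```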
In other words, dot space is a replacement for bash's source in other shells.
– stevesliva
Feb 12 '15 at 23:29
I've noticed that this will not work if one pipes the output, e.g. ". ./script.sh | tee out.log"
– ozma
Apr 7 '15 at 11:52
I have no idea how or why this works but it works perfectly.
– ArtOfWarfare
Sep 10 '15 at 23:20
This answer should be at the top
– tomasb
Jul 14 '16 at 13:01
Yep, should be at the top. Just stating the obvious: if the script is in your PWD then it has the form of dot space dot, e.g. . ./localscript.sh
– Max Robbertze
Jul 27 '16 at 9:02
You're not going to be able to modify the caller's shell because it's in a different process context. When child processes inherit your shell's variables, they're inheriting copies themselves.
One thing you can do is to write a script that emits the correct commands for tcsh or sh based on how it's invoked. If your script is "setit" then do:
ln -s setit setit-sh
and
ln -s setit setit-csh
Now either directly or in an alias, you do this from sh
eval `setit-sh`
or this from csh
eval `setit-csh`
setit uses $0 to determine its output style.
This is reminiscent of how people used to get the TERM environment variable set.
The advantage here is that setit is just written in whichever shell you like as in:
#!/bin/bash
arg0=$0
arg0=${arg0##*/}
for nv in \
    NAME1=VALUE1 \
    NAME2=VALUE2
do
if [ x$arg0 = xsetit-sh ]; then
echo 'export '$nv' ;'
elif [ x$arg0 = xsetit-csh ]; then
echo 'setenv '${nv%%=*}' '${nv##*=}' ;'
fi
done
With the symbolic links given above, and the eval of the backquoted expression, this has the desired result.
To simplify invocation for csh, tcsh, or similar shells:
alias dosetit 'eval `setit-csh`'
or for sh, bash, and the like:
alias dosetit='eval `setit-sh`'
One nice thing about this is that you only have to maintain the list in one place.
In theory you could even stick the list in a file and put cat nvpairfilename
between "in" and "do".
This is pretty much how login shell terminal settings used to be done: a script would output statements to be executed in the login shell. An alias would generally be used to make invocation simple, as in "tset vt100". As mentioned in another answer, there is also similar functionality in the INN UseNet news server.
I think this might be on the right track. But I don't quite know what should be in 'setit' that will allow it to run correctly from either shell. Can you spell out a little more what you had in mind?
– Larry Gritz
Jan 30 '09 at 19:19
Basically, it would check $0 and move into the appropriate part of the script based on what name it was called with.
– phresus
Jan 30 '09 at 20:30
I think what Thomas is saying is: you write the setit script in one language, but it then outputs a language-specific set of instructions that must be eval'd by the calling process.
– matpie
Jan 30 '09 at 22:02
Aha, I see what you are doing now. Ugh, that's clever but awkward. Thanks for clarifying.
– Larry Gritz
Jan 31 '09 at 0:10
The SHELL variable isn't perfectly reliable. Example: on my ArchLinux system I run tcsh and SHELL is set to /bin/tcsh. Starting a bash and echoing SHELL still gives /bin/tcsh and ditto invoking bash as sh. SHELL only works in shells that bother to set it or on systems with rc files that set it, and not all do.
– Thomas Kammeyer
Apr 15 '15 at 21:32
In my .bash_profile I have :
# No Proxy
function noproxy
{
/usr/local/sbin/noproxy #turn off proxy server
unset http_proxy HTTP_PROXY https_proxy HTTPS_PROXY
}
# Proxy
function setproxy
{
sh /usr/local/sbin/proxyon #turn on proxy server
http_proxy=http://127.0.0.1:8118/
HTTP_PROXY=$http_proxy
https_proxy=$http_proxy
HTTPS_PROXY=$https_proxy
export http_proxy https_proxy HTTP_PROXY HTTPS_PROXY
}
So when I want to disable the proxy, I call noproxy; the function runs in the login shell and sets the variables as expected and wanted.
This is exactly what I needed (well, I had to change the port number ;).
– Agos
Nov 30 '11 at 21:29
It's "kind of" possible through using gdb and setenv(3), although I have a hard time recommending actually doing this. (Additionally, e.g. the most recent Ubuntu won't actually let you do this without telling the kernel to be more permissive about ptrace, and the same may go for other distros as well.)
$ cat setfoo
#! /bin/bash
gdb /proc/${PPID}/exe ${PPID} <<END >/dev/null
call setenv("foo", "bar", 0)
END
$ echo $foo
$ ./setfoo
$ echo $foo
bar
Kjetil, dude, this is fantastic. I am really enjoying your script right now.
– Heath Hunnicutt
Jul 26 '15 at 20:42
This is awesome! But how to do it in Mac?
– Li Dong
Oct 5 '15 at 1:41
thanks, as a 1-liner it's: gdb -nx -p $$ --batch -ex 'call setenv("foo", "bar")' >& /dev/null
– Yinon Ehrlich
Apr 3 '16 at 6:58
Interesting approach. When I have the time I'll look into how do to it from OS X and update.
– Robert Brisita
May 4 '16 at 18:39
This works — it isn't what I'd use, but it 'works'. Let's create a script teredo
to set the environment variable TEREDO_WORMS
:
#!/bin/ksh
export TEREDO_WORMS=ukelele
exec $SHELL -i
It will be interpreted by the Korn shell, exports the environment variable, and then replaces itself with a new interactive shell.
Before running this script, we have SHELL
set in the environment to the C shell, and the environment variable TEREDO_WORMS
is not set:
% env | grep SHELL
SHELL=/bin/csh
% env | grep TEREDO
%
When the script is run, you are in a new shell, another interactive C shell, but the environment variable is set:
% teredo
% env | grep TEREDO
TEREDO_WORMS=ukelele
%
When you exit from this shell, the original shell takes over:
% exit
% env | grep TEREDO
%
The environment variable is not set in the original shell's environment. If you use exec teredo
to run the command, then the original interactive shell is replaced by the Korn shell that sets the environment, and then that in turn is replaced by a new interactive C shell:
% exec teredo
% env | grep TEREDO
TEREDO_WORMS=ukelele
%
If you type exit
(or Control-D), then your shell exits, probably logging you out of that window, or taking you back to the previous level of shell from where the experiments started.
The same mechanism works for Bash or Korn shell. You may find that the prompt after the exit commands appears in funny places.
Note the discussion in the comments. This is not a solution I would recommend, but it does achieve the stated purpose of a single script to set the environment that works with all shells (that accept the -i
option to make an interactive shell). You could also add "$@"
after the option to relay any other arguments, which might then make the shell usable as a general 'set environment and execute command' tool. You might want to omit the -i
if there are other arguments, leading to:
#!/bin/ksh
export TEREDO_WORMS=ukelele
exec $SHELL "${@-'-i'}"
The "${@-'-i'}"
bit means 'if the argument list contains at least one argument, use the original argument list; otherwise, substitute -i
for the non-existent arguments'.
Um, this is kind of drastic: you're replacing the login shell. If you're going to do this... you should check into how this impacts session and process group and other things. For example: what do you think happens to managed child processes?
– Thomas Kammeyer
Jan 30 '09 at 22:21
Undoubtedly - that's why I said I would not use it. If you exec twice, you've not lost session or process group information; that is based on PID and PID doesn't change. In a profile or login file, it gets you through a common language environment setting script. But, as I said, I would not use it.
– Jonathan Leffler
Jan 30 '09 at 22:32
This is exactly what I tried to do for my specific case ! This technique seems to be used by clearcase when doing "cleartool setview", which is what I try to emulate. Thanks a lot !
– Offirmo
Jun 6 '12 at 15:26
You could simply invoke a new shell, rather than replace the existing shell.
– Jonathon Hill
Apr 8 '13 at 14:50
@JonathonHill: You could (run a new shell as an ordinary command instead of doing exec). The main reason not to do so is that you have a stray level of shell, so you'd have to do an extra Control-D to log out in that window.
– Jonathan Leffler
Apr 8 '13 at 14:53
You should use modules, see http://modules.sourceforge.net/
EDIT: The modules package has not been updated since 2012 but still works OK for the basics. All the new features, bells and whistles happen in lmod these days (which I like more): https://www.tacc.utexas.edu/research-development/tacc-projects/lmod
We use modulefiles extensively here, and csh/bourne-ish support is one reason. We have legacy csh scripts, bash scripts and python scripts, and they all get environment variable settings from the same modulefiles, rather than having an env.csh, env.sh, env.py set of scripts with the extra maintenance that entails. Additionally, modulefiles allow your environment to reflect version dependencies: if you need to change to version 3 from version 4 of a tool, instead of resetting all your env vars manually, you can just module swap and everything changes over.
– Andrej Panjkov
May 15 '09 at 2:12
I couldn't find examples on how to use it; every attempt I made was unsuccessful. Any tips?
– Aquarius Power
Jun 8 '14 at 5:38
@AquariusPower after so many years I don't recommend modules anymore, but its moral successor, which is lmod see tacc.utexas.edu/tacc-projects/lmod -- I think its docs are also better than the older modules, see if trying it is better for you
– Davide
Jun 9 '14 at 20:09
looks interesting! as soon I can gonna give a try, thx!
– Aquarius Power
Jun 10 '14 at 23:17
@LiDong - yes, it has not been updated since 2012 but still works OK for the basics. All the new features, bells and whistles happen in lmod these days (which I like more): tacc.utexas.edu/research-development/tacc-projects/lmod
– Davide
Oct 6 '15 at 14:28
Another workaround that I don't see mentioned is to write the variable value to a file.
I ran into a very similar issue where I wanted to be able to run the last-set test (instead of all my tests). My first plan was to write one command for setting the env variable TESTCASE, and then have another command that would use this to run the test. Needless to say, I had the exact same issue as you did.
But then I came up with this simple hack:
First command ( testset
):
#!/bin/bash
if [ $# -eq 1 ]
then
echo $1 > ~/.TESTCASE
echo "TESTCASE has been set to: $1"
else
echo "Come again?"
fi
Second command (testrun
):
#!/bin/bash
TESTCASE=$(cat ~/.TESTCASE)
drush test-run $TESTCASE
Add the -l flag at the top of your bash script, i.e.
#!/usr/bin/env bash -l
...
export NAME1="VALUE1"
export NAME2="VALUE2"
The variables NAME1 and NAME2 will now have been exported to your current environment; however, these changes are not permanent. If you want them to be permanent you need to add them to your .bashrc file or other init file.
From the man pages:
-l Make bash act as if it had been invoked as a login shell (see INVOCATION below).
Nope, doesn't actually work. All that happens is your script thinks it's running in a login shell. Still doesn't expose the variables to the calling shell.
– Endareth
Feb 13 '18 at 3:23
You can instruct the child process to print its environment variables (by calling "env"), then loop over the printed environment variables in the parent process and call "export" on those variables.
The following code is based on Capturing output of find . -print0 into a bash array
If the parent shell is the bash, you can use
while IFS= read -r -d $'' line; do
export "$line"
done < <(bash -s <<< 'export VARNAME=something; env -0')
echo $VARNAME
If the parent shell is the dash, then read
does not provide the -d flag and the code gets more complicated
TMPDIR=$(mktemp -d)
mkfifo $TMPDIR/fifo
(bash -s << "EOF"
export VARNAME=something
while IFS= read -r -d $'' line; do
echo $(printf '%q' "$line")
done < <(env -0)
EOF
) > $TMPDIR/fifo &
while read -r line; do export "$(eval echo $line)"; done < $TMPDIR/fifo
rm -r $TMPDIR
echo $VARNAME
You can invoke another Bash with a different bash_profile.
Also, you can create a special bash_profile for use in a multi-bash_profile environment.
Remember that you can use functions inside of bash_profile, and those functions will be available globally.
For example, "function user { export USER_NAME=$1; }" can set a variable at runtime, for example: user olegchir && env | grep olegchir
None of this will affect the calling shell.
– Ignacio Vazquez-Abrams
Oct 29 '10 at 6:20
@Ignacio, in this case you don't need to call scripts for setting environment variables. "Calling" shell will set the variable itself. But if we still need to separate setters from the main bashrc code, we can split all this functions into the separate file, and include it as a library (eg "source ru.olegchir.myproject.environment.setters.sh" in the .bashrc).
– Oleg Chirukhin
Nov 3 '10 at 13:20
Technically, that is correct -- only 'eval' doesn't fork another shell. However, from the point of view of the application you're trying to run in the modified environment, the difference is nil: the child inherits the environment of its parent, so the (modified) environment is conveyed to all descending processes.
Ipso facto, the changed environment variable 'sticks' -- as long as you are running under the parent program/shell.
If it is absolutely necessary for the environment variable to remain after the parent (Perl or shell) has exited, it is necessary for the parent shell to do the heavy lifting. One method I've seen in the documentation is for the current script to spawn an executable file with the necessary 'export' language, and then trick the parent shell into executing it -- always being cognizant of the fact that you need to preface the command with 'source' if you're trying to leave a non-volatile version of the modified environment behind. A kludge at best.
The second method is to modify the script that initiates the shell environment (.bashrc or whatever) to contain the modified parameter. This can be dangerous -- if you hose up the initialization script it may make your shell unavailable the next time it tries to launch. There are plenty of tools for modifying the current shell; by affixing the necessary tweaks to the 'launcher' you effectively push those changes forward as well.
Generally not a good idea; if you only need the environment changes for a particular application suite, you'll have to go back and return the shell launch script to its pristine state (using vi or whatever) afterwards.
In short, there are no good (and easy) methods. Presumably this was made difficult to ensure the security of the system was not irrevocably compromised.
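As a sketch of the "spawn a file of export statements and have the parent evaluate it" idea described above (the file name and variable are entirely invented for illustration):

```shell
# A hypothetical helper that writes Bourne-style export statements to a file;
# the parent shell then evals its output in its own context, so the export sticks.
cat > /tmp/emit_env <<'EOF'
#!/bin/sh
echo 'export MY_SETTING=enabled'
EOF
chmod +x /tmp/emit_env

eval "$(/tmp/emit_env)"     # runs in the *calling* shell
echo "$MY_SETTING"          # prints "enabled"
```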
The short answer is no, you cannot alter the environment of the parent process, but it seems like what you want is an environment with custom environment variables and the shell that the user has chosen.
So why not simply something like
#!/usr/bin/env bash
FOO=foo $SHELL
Then when you are done with the environment, just exit
.
You could always use aliases
alias your_env='source ~/scripts/your_env.sh'
Another option is to use "Environment Modules" (http://modules.sourceforge.net/). This unfortunately introduces a third language into the mix. You define the environment with the language of Tcl, but there are a few handy commands for typical modifications (prepend vs. append vs set). You will also need to have environment modules installed. You can then use module load *XXX*
to name the environment you want. The module command is basically a fancy alias for the eval
mechanism described above by Thomas Kammeyer. The main advantage here is that you can maintain the environment in one language and rely on "Environment Modules" to translate it to sh, ksh, bash, csh, tcsh, zsh, python (?!?!!), etc.
I did this many years ago. If I remember correctly, I included an alias in each of .bashrc and .cshrc, with parameters, aliasing the respective forms of setting the environment to a common form.
Then the script that you will source in any of the two shells has a command with that last form, that is suitable aliased in each shell.
If I find the concrete aliases, I will post them.
I created a solution using pipes, eval and signal.
parent() {
if [ -z "$G_EVAL_FD" ]; then
die 1 "Run parent_setup in the parent process first"
fi
if [ "$(ppid)" = "$$" ]; then   # "ppid" is an assumed helper (not shown) that prints the parent PID
"$@"
else
kill -SIGUSR1 $$
echo "$@">&$G_EVAL_FD
fi
}
parent_setup() {
G_EVAL_FD=99
tempfile=$(mktemp -u)
mkfifo "$tempfile"
eval "exec $G_EVAL_FD<>'$tempfile'"
rm -f "$tempfile"
trap "read CMD <&$G_EVAL_FD; eval \"\$CMD\"" USR1
}
parent_setup #on parent shell context
( A=1 ); echo $A # prints nothing
( parent A=1 ); echo $A # prints 1
It might work with any command.
Under OS X bash you can do the following:
Create the bash script file to unset the variable
#!/bin/bash
unset http_proxy
Make the file executable
sudo chmod 744 unsetvar
Create alias
alias unsetvar='source /your/path/to/the/script/unsetvar'
It should be ready to use as long as you have the folder containing your script file appended to the path.
Any reason you don't simply use alias unsetvar='unset http_proxy'? Or better yet create a function unsetvar () { unset http_proxy; }
– tripleee
Nov 23 '17 at 9:16
This is not only for OS X. This can work for Linux too. This answer would also be better if you wrote what files you are working in.
– Andreas Storvik Strauman
Apr 2 '18 at 10:23
I don't see any answer documenting how to work around this problem with cooperating processes. A common pattern with things like ssh-agent
is to have the child process print an expression which the parent can eval
.
bash$ eval $(ssh-agent)
For example, ssh-agent
has options to select Csh or Bourne-compatible output syntax.
bash$ ssh-agent
SSH2_AUTH_SOCK=/tmp/ssh-era/ssh2-10690-agent; export SSH2_AUTH_SOCK;
SSH2_AGENT_PID=10691; export SSH2_AGENT_PID;
echo Agent pid 10691;
(This causes the agent to start running, but doesn't allow you to actually use it, unless you now copy-paste this output to your shell prompt.) Compare:
bash$ ssh-agent -c
setenv SSH2_AUTH_SOCK /tmp/ssh-era/ssh2-10751-agent;
setenv SSH2_AGENT_PID 10752;
echo Agent pid 10752;
(As you can see, csh and tcsh use setenv to set variables.)
Your own program can do this, too.
bash$ foo=$(makefoo)
Your makefoo
script would simply calculate and print the value, and let the caller do whatever they want with it -- assigning it to a variable is a common use case, but probably not something you want to hard-code into the tool which produces the value.
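Such a makefoo helper could be sketched like this (the script and its value are entirely illustrative; the point is that it only prints, and the caller decides what to assign):

```shell
# Hypothetical "makefoo": computes and prints a value, nothing more.
cat > /tmp/makefoo <<'EOF'
#!/bin/sh
printf 'foo-value\n'
EOF
chmod +x /tmp/makefoo

# The caller, not the tool, chooses to capture the output into a variable:
foo=$(/tmp/makefoo)
echo "$foo"    # prints "foo-value"
```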
It's not what I would call outstanding, but it also works if you need to call the script from the shell anyway. Not a good solution, but for a single static environment variable it works well enough.
1.) Create a script with a condition that exits either 0 (Successful) or 1 (Not successful)
if [[ $foo == "True" ]]; then
    exit 0
else
    exit 1
fi
2.) Create an alias that is dependent on the exit code.
alias setmyvar='myscript.sh && export MyVariable'
You call the alias, which calls the script, which evaluates the condition, which is required to exit zero via the '&&' in order to set the environment variable in the parent shell.
This is flotsam, but it can be useful in a pinch.
Other than writing conditionals depending on what $SHELL/$TERM is set to, no. What's wrong with using Perl? It's pretty ubiquitous (I can't think of a single UNIX variant that doesn't have it), and it'll spare you the trouble.
How does Perl solve the problem? The Perl program still can't set the environment variables of the calling shell, can it?
– Larry Gritz
Jan 30 '09 at 19:16
No. It can, however, set it through Local::Env, then call your shell script with system() or backticks.
– phresus
Feb 2 '09 at 13:13
I'm pretty sure that system() or backticks would be making a new child shell, not calling to the shell that launched the Perl script.
– Larry Gritz
Feb 5 '09 at 20:30
21 Answers
21
active
oldest
votes
21 Answers
21
active
oldest
votes
active
oldest
votes
active
oldest
votes
Your shell process has a copy of the parent's environment and no access to the parent process's environment whatsoever. When your shell process terminates any changes you've made to its environment are lost. Sourcing a script file is the most commonly used method for configuring a shell environment, you may just want to bite the bullet and maintain one for each of the two flavors of shell.
2
This answer is not correct, or at least very misleading - this can be done by using the dot space script notation described in @Humberto 's answers
– Kris Randall
Jul 16 '18 at 6:44
5
@KrisRandall oh, the "dot space" script notation. You mean the dot operator that is synonymous with the source function I mentioned?
– converter42
Jul 17 '18 at 15:33
4
Thank you for the correction. I am appropriately embarrassed. I was looking for a quick answer for getting env vars to stay in my shell - not the same as the OP, which you have answered very well.
– Kris Randall
Jul 18 '18 at 19:39
add a comment |
Your shell process has a copy of the parent's environment and no access to the parent process's environment whatsoever. When your shell process terminates any changes you've made to its environment are lost. Sourcing a script file is the most commonly used method for configuring a shell environment, you may just want to bite the bullet and maintain one for each of the two flavors of shell.
2
This answer is not correct, or at least very misleading - this can be done by using the dot space script notation described in @Humberto 's answers
– Kris Randall
Jul 16 '18 at 6:44
5
@KrisRandall oh, the "dot space" script notation. You mean the dot operator that is synonymous with the source function I mentioned?
– converter42
Jul 17 '18 at 15:33
4
Thank you for the correction. I am appropriately embarrassed. I was looking for a quick answer for getting env vars to stay in my shell - not the same as the OP, which you have answered very well.
– Kris Randall
Jul 18 '18 at 19:39
add a comment |
Your shell process has a copy of the parent's environment and no access to the parent process's environment whatsoever. When your shell process terminates any changes you've made to its environment are lost. Sourcing a script file is the most commonly used method for configuring a shell environment, you may just want to bite the bullet and maintain one for each of the two flavors of shell.
Your shell process has a copy of the parent's environment and no access to the parent process's environment whatsoever. When your shell process terminates any changes you've made to its environment are lost. Sourcing a script file is the most commonly used method for configuring a shell environment, you may just want to bite the bullet and maintain one for each of the two flavors of shell.
edited Jun 5 '14 at 14:43
answered Jan 30 '09 at 19:06
converter42converter42
5,97812422
5,97812422
2
This answer is not correct, or at least very misleading - this can be done by using the dot space script notation described in @Humberto 's answers
– Kris Randall
Jul 16 '18 at 6:44
5
@KrisRandall oh, the "dot space" script notation. You mean the dot operator that is synonymous with the source function I mentioned?
– converter42
Jul 17 '18 at 15:33
4
Thank you for the correction. I am appropriately embarrassed. I was looking for a quick answer for getting env vars to stay in my shell - not the same as the OP, which you have answered very well.
– Kris Randall
Jul 18 '18 at 19:39
Use the "dot space script" calling syntax. For example, here's how to do it using the full path to a script:
. /path/to/set_env_vars.sh
And here's how to do it if you're in the same directory as the script:
. set_env_vars.sh
These execute the script under the current shell instead of launching a new one (which is what would happen if you ran ./set_env_vars.sh). Because it runs in the same shell, the environment variables you set will still be set when it finishes.
This is the same thing as calling source set_env_vars.sh, but it's shorter to type and works in POSIX shells where source isn't available.
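The difference is easy to observe. A minimal sketch (the temp file stands in for set_env_vars.sh; FOO is illustrative):

```shell
#!/bin/sh
unset FOO
tmp=$(mktemp)
printf 'FOO=foo\nexport FOO\n' > "$tmp"

sh "$tmp"                              # child process: FOO is set there, then lost
echo "after running:  FOO='${FOO-}'"   # prints FOO=''

. "$tmp"                               # dot operator: runs in the current shell
echo "after sourcing: FOO='${FOO-}'"   # prints FOO='foo'

rm -f "$tmp"
```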
13
In other words, dot space is a replacement for bash's source in other shells.
– stevesliva
Feb 12 '15 at 23:29
2
I've noticed that this will not work if one will pipe the output e.g ". ./script.sh | tee out.log"
– ozma
Apr 7 '15 at 11:52
4
I have no idea how or why this works but it works perfectly.
– ArtOfWarfare
Sep 10 '15 at 23:20
9
This answer should be at the Top
– tomasb
Jul 14 '16 at 13:01
4
Jip Should be at the top. Just stating the obvious.. if the script is in your PWD then it has the form of dot space dot eg . ./localscript.sh
– Max Robbertze
Jul 27 '16 at 9:02
edited Oct 17 '17 at 12:41
Alan W. Smith
answered Feb 12 '15 at 23:04
Humberto Romero
You're not going to be able to modify the caller's shell because it's in a different process context. When child processes inherit your shell's variables, they inherit copies.
One thing you can do is write a script that emits the correct commands for tcsh or sh based on how it's invoked. If your script is "setit" then do:
ln -s setit setit-sh
and
ln -s setit setit-csh
Now either directly or in an alias, you do this from sh
eval `setit-sh`
or this from csh
eval `setit-csh`
setit uses $0 to determine its output style.
This is reminiscent of how people used to get the TERM environment variable set.
The advantage here is that setit is just written in whichever shell you like, as in:
#!/bin/bash
arg0=$0
arg0=${arg0##*/}      # strip the directory: leaves setit-sh or setit-csh
for nv in \
    NAME1=VALUE1 \
    NAME2=VALUE2
do
    if [ x$arg0 = xsetit-sh ]; then
        echo 'export '$nv' ;'
    elif [ x$arg0 = xsetit-csh ]; then
        echo 'setenv '${nv%%=*}' '${nv##*=}' ;'
    fi
done
with the symbolic links given above, and the eval of the backquoted expression, this has the desired result.
To simplify invocation for csh, tcsh, or similar shells:
alias dosetit 'eval `setit-csh`'
or for sh, bash, and the like:
alias dosetit='eval `setit-sh`'
One nice thing about this is that you only have to maintain the list in one place. In theory you could even stick the list in a file and put cat nvpairfilename between "in" and "do".
This is pretty much how login shell terminal settings used to be done: a script would output statements to be executed in the login shell. An alias would generally be used to make invocation simple, as in "tset vt100". As mentioned in another answer, there is also similar functionality in the INN UseNet news server.
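To see what each caller would eval, here is a sketch of the same dispatch with the output style passed as an argument instead of via symlinks (NAME1/NAME2 are the illustrative names from the answer above):

```shell
#!/bin/sh
# Same idea as setit, but the style is an argument rather than $0.
emit() {
    for nv in NAME1=VALUE1 NAME2=VALUE2
    do
        case $1 in
            sh)  echo "export $nv ;" ;;                  # for eval in sh/bash
            csh) echo "setenv ${nv%%=*} ${nv##*=} ;" ;;  # for eval in csh/tcsh
        esac
    done
}
emit sh       # what an sh/bash user's alias would capture
emit csh      # what a csh/tcsh user's alias would capture
eval "$(emit sh)"   # a POSIX shell evaluating the sh-style output
echo "NAME1=$NAME1"
```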
I think this might be on the right track. But I don't quite know what should be in 'setit' that will allow it to run correctly from either shell. Can you spell out a little more what you had in mind?
– Larry Gritz
Jan 30 '09 at 19:19
1
Basically, it would check $0 and move into the appropriate part of the script based on what name it was called with.
– phresus
Jan 30 '09 at 20:30
1
I think what Thomas is saying, you write the setit script in one language, but it then outputs a language specific set of instructions that must be eval'd by the calling process.
– matpie
Jan 30 '09 at 22:02
Aha, I see what you are doing now. Ugh, that's clever but awkward. Thanks for clarifying.
– Larry Gritz
Jan 31 '09 at 0:10
2
The SHELL variable isn't perfectly reliable. Example: on my ArchLinux system I run tcsh and SHELL is set to /bin/tcsh. Starting a bash and echoing SHELL still gives /bin/tcsh and ditto invoking bash as sh. SHELL only works in shells that bother to set it or on systems with rc files that set it, and not all do.
– Thomas Kammeyer
Apr 15 '15 at 21:32
edited May 20 '15 at 14:51
answered Jan 30 '09 at 19:06
Thomas Kammeyer
In my .bash_profile I have:
# No Proxy
function noproxy
{
    /usr/local/sbin/noproxy  # turn off proxy server
    unset http_proxy HTTP_PROXY https_proxy HTTPS_PROXY
}
# Proxy
function setproxy
{
    sh /usr/local/sbin/proxyon  # turn on proxy server
    http_proxy=http://127.0.0.1:8118/
    HTTP_PROXY=$http_proxy
    https_proxy=$http_proxy
    HTTPS_PROXY=$https_proxy
    export http_proxy https_proxy HTTP_PROXY HTTPS_PROXY
}
So when I want to disable the proxy, the functions run in the login shell and set the variables as expected and wanted.
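This works because shell functions execute in the current shell process, not in a child, so their assignments persist after they return. A minimal sketch with an illustrative variable:

```shell
# Functions run in the current shell, so exports made inside them stick.
setfoo() {
    FOO=foo
    export FOO
}
setfoo
echo "FOO=$FOO"   # FOO is still set after the function returns
```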
This is exactly what I needed (well, I had to change the port number ;).
– Agos
Nov 30 '11 at 21:29
edited Sep 17 '16 at 16:22
GKFX
answered Nov 19 '11 at 23:46
chris
It's "kind of" possible using gdb and setenv(3), although I have a hard time recommending actually doing this. (Additionally, recent Ubuntu releases won't let you do this without telling the kernel to be more permissive about ptrace, and the same may go for other distros as well.)
$ cat setfoo
#! /bin/bash
gdb /proc/${PPID}/exe ${PPID} <<END >/dev/null
call setenv("foo", "bar", 0)
END
$ echo $foo
$ ./setfoo
$ echo $foo
bar
Kjetil, dude, this is fantastic. I am really enjoying your script right now.
– Heath Hunnicutt
Jul 26 '15 at 20:42
This is awesome! But how to do it in Mac?
– Li Dong
Oct 5 '15 at 1:41
1
thanks, as 1-liner it's: gdb -nx -p $$ --batch -ex 'call setenv("foo", "bar")' > & /dev/null
– Yinon Ehrlich
Apr 3 '16 at 6:58
Interesting approach. When I have the time I'll look into how to do it from OS X and update.
– Robert Brisita
May 4 '16 at 18:39
answered Jul 8 '11 at 22:03
Kjetil Joergensen
This works — it isn't what I'd use, but it 'works'. Let's create a script teredo to set the environment variable TEREDO_WORMS:
#!/bin/ksh
export TEREDO_WORMS=ukelele
exec $SHELL -i
It is interpreted by the Korn shell, exports the environment variable, and then replaces itself with a new interactive shell.
Before running this script, we have SHELL set in the environment to the C shell, and the environment variable TEREDO_WORMS is not set:
% env | grep SHELL
SHELL=/bin/csh
% env | grep TEREDO
%
When the script is run, you are in a new shell, another interactive C shell, but the environment variable is set:
% teredo
% env | grep TEREDO
TEREDO_WORMS=ukelele
%
When you exit from this shell, the original shell takes over:
% exit
% env | grep TEREDO
%
The environment variable is not set in the original shell's environment. If you use exec teredo to run the command, then the original interactive shell is replaced by the Korn shell that sets the environment, and that in turn is replaced by a new interactive C shell:
% exec teredo
% env | grep TEREDO
TEREDO_WORMS=ukelele
%
If you type exit (or Control-D), then your shell exits, probably logging you out of that window, or taking you back to the previous level of shell from where the experiments started.
The same mechanism works for Bash or Korn shell. You may find that the prompt after the exit commands appears in funny places.
Note the discussion in the comments. This is not a solution I would recommend, but it does achieve the stated purpose of a single script to set the environment that works with all shells (that accept the -i option to make an interactive shell). You could also add "$@" after the option to relay any other arguments, which might then make the shell usable as a general 'set environment and execute command' tool. You might want to omit the -i if there are other arguments, leading to:
#!/bin/ksh
export TEREDO_WORMS=ukelele
exec $SHELL "${@-'-i'}"
The "${@-'-i'}" bit means 'if the argument list contains at least one argument, use the original argument list; otherwise, substitute -i for the non-existent arguments'.
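A non-interactive sketch of the same mechanism, using a child shell instead of exec so the effect is easy to observe (TEREDO_WORMS as in the answer):

```shell
#!/bin/sh
# Export first, then start another shell: the child inherits the variable.
TEREDO_WORMS=ukelele
export TEREDO_WORMS
# The answer uses `exec $SHELL -i`, which would replace this script with an
# interactive shell; a plain child shell shows the inheritance just as well:
/bin/sh -c 'echo "child sees: TEREDO_WORMS=$TEREDO_WORMS"'
```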
1
Um, this is kind of drastic: you're replacing the login shell. If you're going to do this... you should check into how this impacts session and process group and other things. For example: what do you think happens to managed child processes?
– Thomas Kammeyer
Jan 30 '09 at 22:21
2
Undoubtedly - that's why I said I would not use it. If you exec twice, you've not lost session or process group information; that is based on PID and PID doesn't change. In a profile or login file, it gets you through a common language environment setting script. But, as I said, I would not use it.
– Jonathan Leffler
Jan 30 '09 at 22:32
1
This is exactly what I tried to do for my specific case ! This technique seems to be used by clearcase when doing "cleartool setview", which is what I try to emulate. Thanks a lot !
– Offirmo
Jun 6 '12 at 15:26
You could simply invoke a new shell, rather than replace the existing shell.
– Jonathon Hill
Apr 8 '13 at 14:50
1
@JonathonHill: You could (run a new shell as an ordinary command instead of doing exec). The main reason not to do so is that you have a stray level of shell, so you'd have to do an extra control-D to logout in that window.
– Jonathan Leffler
Apr 8 '13 at 14:53
|
show 1 more comment
edited Jun 7 '12 at 16:18
answered Jan 30 '09 at 22:17
Jonathan Leffler
You should use modules; see http://modules.sourceforge.net/
EDIT: The modules package has not been updated since 2012, but it still works OK for the basics. All the new features, bells and whistles happen in lmod these days (which I like more): https://www.tacc.utexas.edu/research-development/tacc-projects/lmod
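For the unfamiliar: a modulefile is a small Tcl fragment describing environment changes, and the `module` command translates it into whatever syntax the caller's shell needs, which is exactly why it sidesteps the csh-vs-bash problem. A minimal, hypothetical modulefile (the tool name and paths are made up for illustration):

```tcl
#%Module1.0
## Hypothetical modulefile "mytool/1.0": sets FOO and extends PATH.
setenv        FOO   foo
prepend-path  PATH  /opt/mytool/1.0/bin
```

A user of either bash or csh then runs `module load mytool/1.0` and the variables land in their current shell.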
1
We use modulefiles extensively here, and csh/bourne-ish support is one reason. We have legacy csh scripts, bash scripts and python scripts, and they all get environment variable settings from the same modulefiles, rather than having an env.csh, env.sh, env.py set of scripts with the extra maintenance that entails. Additionally, modulefiles allow your environment to reflect version dependencies: if you need to change to version 3 from version 4 of a tool, instead of resetting all your env vars manually, you can just module swap and everything changes over.
– Andrej Panjkov
May 15 '09 at 2:12
I couldn't find examples of how to use it; every attempt I made was unsuccessful. Any tips?
– Aquarius Power
Jun 8 '14 at 5:38
1
@AquariusPower after so many years I don't recommend modules anymore, but rather its moral successor, lmod: see tacc.utexas.edu/tacc-projects/lmod -- I think its docs are also better than the older modules'; see if trying it is better for you
– Davide
Jun 9 '14 at 20:09
Looks interesting! As soon as I can I'm going to give it a try, thanks!
– Aquarius Power
Jun 10 '14 at 23:17
@LiDong - yes, it has not been updated since 2012 but still works OK for the basics. All the new features, bells and whistles happen in lmod these days (which I like more): tacc.utexas.edu/research-development/tacc-projects/lmod
– Davide
Oct 6 '15 at 14:28
edited Oct 6 '15 at 14:29
answered Feb 4 '09 at 18:52
Davide
Another workaround that I don't see mentioned is to write the variable value to a file.
I ran into a very similar issue where I wanted to be able to run the last-set test (instead of all my tests). My first plan was to write one command to set the env variable TESTCASE, and then have another command that would use it to run the test. Needless to say, I had the same exact issue as you did.
But then I came up with this simple hack:
First command ( testset
):
#!/bin/bash
if [ $# -eq 1 ]
then
    echo "$1" > ~/.TESTCASE    # quote $1 so values with spaces survive intact
    echo "TESTCASE has been set to: $1"
else
    echo "Come again?"
fi
Second command (testrun
):
#!/bin/bash
TESTCASE=$(cat ~/.TESTCASE)
drush test-run "$TESTCASE"
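The same file-based handoff can be condensed into a pair of functions; a sketch with hypothetical names (`STATE_FILE`, the test name, and `testrun_value` are made up, and the `drush` call is only echoed here):

```shell
# File-based handoff: one command persists a value, another reads it back
# later, in a different process, without touching the caller's environment.
STATE_FILE=./.testcase_demo

testset() { printf '%s\n' "$1" > "$STATE_FILE"; }   # persist the value
testrun_value() { cat "$STATE_FILE"; }              # read it back later

testset "UserLoginTest"
result=$(testrun_value)
echo "would run: drush test-run $result"
rm -f "$STATE_FILE"
```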
answered Jul 26 '13 at 19:21
dkinzer
Add the -l flag at the top of your bash script, i.e.
#!/usr/bin/env bash -l
...
export NAME1="VALUE1"
export NAME2="VALUE2"
The values of NAME1
and NAME2
will now be exported to your current environment; however, these changes are not permanent. If you want them to be permanent, you need to add them to your .bashrc
file or another init file.
From the man pages:
-l Make bash act as if it had been invoked as a login shell (see INVOCATION below).
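Note that environment changes never propagate from a child process to its parent, regardless of which flags the child shell is started with; this is worth verifying directly:

```shell
# A child shell can set and see a variable, but the parent is unaffected.
unset FOO
bash -c 'export FOO=bar; echo "child sees: $FOO"'   # child sees: bar
echo "parent sees: ${FOO-unset}"                     # parent sees: unset
```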
Nope, doesn't actually work. All that happens is your script thinks it's running in a login shell. Still doesn't expose the variables to the calling shell.
– Endareth
Feb 13 '18 at 3:23
answered Jun 16 '14 at 12:43
cristobal
You can instruct the child process to print its environment variables (by calling "env"), then loop over the printed environment variables in the parent process and call "export" on those variables.
The following code is based on Capturing output of find . -print0 into a bash array
If the parent shell is bash, you can use
while IFS= read -r -d '' line; do
export "$line"
done < <(bash -s <<< 'export VARNAME=something; env -0')
echo $VARNAME
If the parent shell is dash, then read
does not provide the -d flag, and the code gets more complicated:
TMPDIR=$(mktemp -d)
mkfifo $TMPDIR/fifo
(bash -s << "EOF"
export VARNAME=something
while IFS= read -r -d '' line; do
echo $(printf '%q' "$line")
done < <(env -0)
EOF
) > $TMPDIR/fifo &
while read -r line; do export "$(eval echo $line)"; done < $TMPDIR/fifo
rm -r $TMPDIR
echo $VARNAME
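A self-contained variant of the same technique, assuming bash (DEMO_VAR is a made-up name; the case filter imports only the variables we expect rather than the child's whole environment):

```shell
# Capture a child shell's NUL-delimited environment into a file, then
# import selected variables into the current shell.
tmp=$(mktemp)
bash -c 'export DEMO_VAR=hello; env -0' > "$tmp"
while IFS= read -r -d '' entry; do
  case $entry in
    DEMO_*) export "$entry" ;;    # import only the variables we expect
  esac
done < "$tmp"
rm -f "$tmp"
echo "$DEMO_VAR"
```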
edited May 23 '17 at 12:34
answered Sep 25 '14 at 14:23
klaus se
You can invoke another Bash with a different bash_profile.
Also, you can create a special bash_profile for use in a multi-bash_profile environment.
Remember that you can use functions inside of bash_profile, and those functions will be available globally.
For example, "function user { export USER_NAME=$1; }" can set a variable at runtime, for example: user olegchir && env | grep olegchir
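The function approach works because a function invoked interactively runs in the current shell process, so its exports stick after it returns (unlike a script, which runs in a child). A minimal sketch, reusing the names from the example above:

```shell
# A function runs in the current shell, so its exports persist after
# it returns; a script run as a command cannot do this.
user() { export USER_NAME=$1; }

user olegchir
echo "$USER_NAME"
```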
1
None of this will affect the calling shell.
– Ignacio Vazquez-Abrams
Oct 29 '10 at 6:20
@Ignacio, in this case you don't need to call scripts to set environment variables. The "calling" shell will set the variable itself. But if we still need to separate setters from the main bashrc code, we can split all these functions into a separate file and include it as a library (e.g. "source ru.olegchir.myproject.environment.setters.sh" in the .bashrc).
– Oleg Chirukhin
Nov 3 '10 at 13:20
answered Oct 29 '10 at 6:15
Oleg ChirukhinOleg Chirukhin
4902417
4902417
1
None of this will affect the calling shell.
– Ignacio Vazquez-Abrams
Oct 29 '10 at 6:20
@Ignacio, in this case you don't need to call scripts for setting environment variables. "Calling" shell will set the variable itself. But if we still need to separate setters from the main bashrc code, we can split all this functions into the separate file, and include it as a library (eg "source ru.olegchir.myproject.environment.setters.sh" in the .bashrc).
– Oleg Chirukhin
Nov 3 '10 at 13:20
add a comment |
1
None of this will affect the calling shell.
– Ignacio Vazquez-Abrams
Oct 29 '10 at 6:20
@Ignacio, in this case you don't need to call scripts for setting environment variables. "Calling" shell will set the variable itself. But if we still need to separate setters from the main bashrc code, we can split all this functions into the separate file, and include it as a library (eg "source ru.olegchir.myproject.environment.setters.sh" in the .bashrc).
– Oleg Chirukhin
Nov 3 '10 at 13:20
1
1
None of this will affect the calling shell.
– Ignacio Vazquez-Abrams
Oct 29 '10 at 6:20
None of this will affect the calling shell.
– Ignacio Vazquez-Abrams
Oct 29 '10 at 6:20
@Ignacio, in this case you don't need to call scripts for setting environment variables. "Calling" shell will set the variable itself. But if we still need to separate setters from the main bashrc code, we can split all this functions into the separate file, and include it as a library (eg "source ru.olegchir.myproject.environment.setters.sh" in the .bashrc).
– Oleg Chirukhin
Nov 3 '10 at 13:20
@Ignacio, in this case you don't need to call scripts for setting environment variables. "Calling" shell will set the variable itself. But if we still need to separate setters from the main bashrc code, we can split all this functions into the separate file, and include it as a library (eg "source ru.olegchir.myproject.environment.setters.sh" in the .bashrc).
– Oleg Chirukhin
Nov 3 '10 at 13:20
add a comment |
Technically, that is correct -- only 'eval' doesn't fork another shell. However, from the point of view of the application you're trying to run in the modified environment, the difference is nil: the child inherits the environment of its parent, so the (modified) environment is conveyed to all descending processes.
Ipso facto, the changed environment variable 'sticks' -- as long as you are running under the parent program/shell.
If it is absolutely necessary for the environment variable to remain after the parent (Perl or shell) has exited, it is necessary for the parent shell to do the heavy lifting. One method I've seen in the documentation is for the current script to spawn an executable file with the necessary 'export' language, and then trick the parent shell into executing it -- always being cognizant of the fact that you need to preface the command with 'source' if you're trying to leave a non-volatile version of the modified environment behind. A kludge at best.
The second method is to modify the script that initiates the shell environment (.bashrc or whatever) to contain the modified parameter. This can be dangerous -- if you hose up the initialization script it may make your shell unavailable the next time it tries to launch. There are plenty of tools for modifying the current shell; by affixing the necessary tweaks to the 'launcher' you effectively push those changes forward as well.
Generally not a good idea; if you only need the environment changes for a particular application suite, you'll have to go back and return the shell launch script to its pristine state (using vi or whatever) afterwards.
In short, there are no good (and easy) methods. Presumably this was made difficult to ensure the security of the system was not irrevocably compromised.
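The "spawn a sourceable file" kludge described above looks roughly like this (the path and variable name are illustrative assumptions, not from any particular tool):

```shell
# The script writes out a file containing the needed 'export' language...
cat > /tmp/env_patch.sh <<'EOF'
export FOO=foo
EOF

# ...but the calling shell (i.e. the user) must still do the final step:
#   source /tmp/env_patch.sh
# Without that manual (or aliased) 'source', nothing sticks in the caller.
```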
answered May 18 '11 at 13:23
David Lovering
The short answer is no, you cannot alter the environment of the parent process, but it seems like what you want is an environment with custom environment variables and the shell that the user has chosen.
So why not simply something like
#!/usr/bin/env bash
FOO=foo $SHELL
Then when you are done with the environment, just exit.
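A quick non-interactive check that the spawned shell really sees the variable (FOO is the question's example name; in real use you would omit -c, work in the interactive shell, and exit when done):

```shell
#!/usr/bin/env bash
# Launch the user's preferred shell with FOO preset; -c keeps the
# demonstration non-interactive.
FOO=foo "${SHELL:-/bin/sh}" -c 'echo "$FOO"'
```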
answered Feb 28 '13 at 6:41
Andrew
You could always use aliases
alias your_env='source ~/scripts/your_env.sh'
edited Apr 10 '14 at 23:39
Garrett Hyde
answered Apr 10 '14 at 23:14
user1667208
Another option is to use "Environment Modules" (http://modules.sourceforge.net/). This unfortunately introduces a third language into the mix: you define the environment in Tcl, but there are a few handy commands for typical modifications (prepend vs. append vs. set). You will also need to have Environment Modules installed. You can then use module load XXX
to name the environment you want. The module command is basically a fancy alias for the eval
mechanism described above by Thomas Kammeyer. The main advantage here is that you can maintain the environment in one language and rely on Environment Modules to translate it to sh, ksh, bash, csh, tcsh, zsh, python (?!?!!), etc.
answered Oct 23 '14 at 20:02
Howard Hobbes
I did this many years ago. If I remember correctly, I included an alias in each of .bashrc and .cshrc, with parameters, aliasing the respective forms of setting the environment to a common form.
Then the script that you source in either of the two shells has a command in that common form, which is suitably aliased in each shell.
If I find the concrete aliases, I will post them.
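One concrete way to realize this (a sketch; the answer's exact aliases aren't shown, so a function is used here and the file names are assumptions): give bash a setenv that mimics csh's builtin, so a single file written in csh syntax can be sourced from either shell.

```shell
# In ~/.bashrc: emulate csh's builtin so csh-syntax lines work in bash.
setenv() { export "$1"="$2"; }

# The shared file, e.g. ~/scripts/common.env, then uses only csh syntax:
#   setenv FOO foo
# csh/tcsh users 'source' it and get the builtin setenv;
# bash users 'source' it and the function above handles the calls.
```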
answered Oct 30 '15 at 15:14
sancho.s
I created a solution using pipes, eval, and signals.
# Note: die (print a message and exit) and ppid (print the PID of the
# current subshell's parent) are helper functions assumed to exist.
parent() {
    if [ -z "$G_EVAL_FD" ]; then
        die 1 "Run parent_setup first in the parent process"
    fi
    if [ "$(ppid)" = "$$" ]; then
        "$@"
    else
        kill -SIGUSR1 $$
        echo "$@" >&$G_EVAL_FD
    fi
}
parent_setup() {
    G_EVAL_FD=99
    tempfile=$(mktemp -u)
    mkfifo "$tempfile"
    eval "exec $G_EVAL_FD<>'$tempfile'"
    rm -f "$tempfile"
    trap "read CMD <&$G_EVAL_FD; eval \"\$CMD\"" USR1
}
parent_setup  # in the parent shell context
( A=1 ); echo $A         # prints nothing
( parent A=1 ); echo $A  # prints 1
It might work with any command.
answered Sep 21 '16 at 21:43
Luiz Angelo Daros de Luca
Under OS X bash you can do the following:
Create the bash script file to unset the variable:
#!/bin/bash
unset http_proxy
Make the file executable:
sudo chmod 744 unsetvar
Create an alias:
alias unsetvar='source /your/path/to/the/script/unsetvar'
It should be ready to use as long as you have the folder containing your script file appended to the path.
answered Jan 19 '17 at 9:26
Marton Tatai
Any reason you don't simply use alias unsetvar='unset http_proxy'? Or better yet create a function unsetvar () { unset http_proxy; }
– tripleee
Nov 23 '17 at 9:16
This is not only for OS X. This can work for Linux too. This answer would also be better if you wrote what files you are working in.
– Andreas Storvik Strauman
Apr 2 '18 at 10:23
I don't see any answer documenting how to work around this problem with cooperating processes. A common pattern with things like ssh-agent
is to have the child process print an expression which the parent can eval
.
bash$ eval $(ssh-agent)
For example, ssh-agent
has options to select csh- or Bourne-compatible output syntax.
bash$ ssh-agent
SSH2_AUTH_SOCK=/tmp/ssh-era/ssh2-10690-agent; export SSH2_AUTH_SOCK;
SSH2_AGENT_PID=10691; export SSH2_AGENT_PID;
echo Agent pid 10691;
(This causes the agent to start running, but doesn't allow you to actually use it, unless you now copy-paste this output to your shell prompt.) Compare:
bash$ ssh-agent -c
setenv SSH2_AUTH_SOCK /tmp/ssh-era/ssh2-10751-agent;
setenv SSH2_AGENT_PID 10752;
echo Agent pid 10752;
(As you can see, csh
and tcsh
use setenv
to set variables.)
Your own program can do this, too.
bash$ foo=$(makefoo)
Your makefoo
script would simply calculate and print the value, and let the caller do whatever they want with it -- assigning it to a variable is a common use case, but probably not something you want to hard-code into the tool which produces the value.
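A minimal sketch of such a cooperating helper (emit_env and FOO are assumed names, not from the question): it prints Bourne-style assignments by default and csh-style with -c, mirroring ssh-agent's convention, and the caller evals whichever it needs.

```shell
# emit_env prints environment-setting commands for the caller to eval.
emit_env() {
    if [ "$1" = "-c" ]; then
        echo 'setenv FOO foo;'          # csh/tcsh syntax
    else
        echo 'FOO=foo; export FOO;'     # sh/bash syntax
    fi
}

# bash/sh caller:  eval "$(emit_env)"
# csh caller:      eval `emit_env -c`
```

This is the one pattern here that genuinely works for both shells from a single maintained script.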
answered Nov 23 '17 at 9:25
tripleee
It's not what I would call outstanding, but this also works if you need to call the script from the shell anyway. It's not a good solution, but for a single static environment variable it works well enough.
1.) Create a script with a condition that exits either 0 (success) or 1 (failure):
if [[ $foo == "True" ]]; then
    exit 0
else
    exit 1
fi
2.) Create an alias that is dependent on the exit code:
alias myalias='./myscript.sh && export MyVariable'
You call the alias, which calls the script, which evaluates the condition, which is required to exit zero via the '&&' in order to set the environment variable in the parent shell.
This is flotsam, but it can be useful in a pinch.
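Spelled out with a function standing in for the script (foo and MyVariable come from the answer; the check function is an assumption), the mechanism is:

```shell
# A stand-in for myscript.sh: succeed only when $foo is "True".
check() { [ "$foo" = "True" ]; }

foo=True
# The '&&' chain runs in the *current* shell, so on success the export sticks:
check && export MyVariable=set
echo "$MyVariable"   # prints: set
```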
answered Aug 31 '18 at 15:50
user1802263
Other than writing conditionals depending on what $SHELL/$TERM is set to, no. What's wrong with using Perl? It's pretty ubiquitous (I can't think of a single UNIX variant that doesn't have it), and it'll spare you the trouble.
How does Perl solve the problem? The Perl program still can't set the environment variables of the calling shell, can it?
– Larry Gritz
Jan 30 '09 at 19:16
No. It can, however, set it through Local::Env, then call your shell script with system() or backticks.
– phresus
Feb 2 '09 at 13:13
I'm pretty sure that system() or backticks would be making a new child shell, not calling to the shell that launched the Perl script.
– Larry Gritz
Feb 5 '09 at 20:30
answered Jan 30 '09 at 19:08
phresus
protected by codeforester Aug 31 '18 at 16:19
The title of this Q should be changed - the main differentiation is using two different shells, the title does not reflect that.
– yzorg
Jan 18 at 14:37