I noticed that the package.json has a script defined for sass, so I am testing “npm run sass” in place of the node-sass command
EDIT: I am getting a command not found error when attempting to run the sass script defined in the package.json as well. Any ideas what may be happening?
oh I bet node_modules/.bin
isn’t on your path at that point. You can either call out to node_modules/.bin/node-sass
explicitly or add it to your path with PATH=$PATH:./node_modules/.bin
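If it helps, here is a minimal sketch of both options. Note the node_modules/.bin stub below is fabricated purely for illustration (a real install would put the actual node-sass binary there via npm install):

```shell
#!/bin/sh
# Stand-in for a project-local install: create a fake node_modules/.bin
# with a stub node-sass so the example is self-contained.
mkdir -p node_modules/.bin
printf '#!/bin/sh\necho stub-node-sass\n' > node_modules/.bin/node-sass
chmod +x node_modules/.bin/node-sass

# Option 1: call the binary by its explicit relative path.
./node_modules/.bin/node-sass

# Option 2: add the directory to PATH so a plain `node-sass` resolves.
PATH="$PATH:./node_modules/.bin"
node-sass
```

Both invocations print the stub's output; with a real install, option 2 is also what makes `npm run sass` work from a shell that lacks npm's automatic PATH handling.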
I am noticing some issues: npm install is not installing node-sass into src/app/node_modules, so when I add that directory to my path, node-sass still isn't found. I do, however, see node-sass in src/node_modules. I'm not sure whether that is related, but the bottom line is that node-sass does not appear to be installed even though it is listed in the package.json.
Good news! I was able to resolve the node-sass issue by setting the PATH. However, when I export my .hart files to Docker and run docker-compose up, I am getting errors. When loading my app image the error is “The subcommand ‘start’ wasn’t recognized”, and for the api image it says “found argument ‘./wait-for-it.sh’ which wasn’t expected, or isn’t valid in this context”. I am looking into this now, but if anyone has any suggestions, please share!
@nealajpatel I’m going to need some more info - what does your run hook look like (or pkg_svc_run)?
#For the api image:
#!/bin/sh
exec 2>&1
exec node {{pkg.path}}/api/api-{{pkg.version}}/app.js
#For the app image:
#!/bin/sh
exec 2>&1
exec node {{pkg.path}}/app/app-{{pkg.version}}/index.js
well that looks correct. I’m not sure where wait-for-it.sh would be coming from
Here is the wait-for-it.sh. As a note, this application was written by a TopCoder team and has since been taken internally, so I apologize for my limited knowledge of it (I am still working through the implementation).
#!/usr/bin/env bash
# Use this script to test if a given TCP host/port are available
cmdname=$(basename $0)
echoerr() { if [[ $QUIET -ne 1 ]]; then echo "$@" 1>&2; fi }
usage()
{
    cat << USAGE >&2
Usage:
    $cmdname host:port [-s] [-t timeout] [-- command args]
    -h HOST | --host=HOST       Host or IP under test
    -p PORT | --port=PORT       TCP port under test
                                Alternatively, you specify the host and port as host:port
    -s | --strict               Only execute subcommand if the test succeeds
    -q | --quiet                Don't output any status messages
    -t TIMEOUT | --timeout=TIMEOUT
                                Timeout in seconds, zero for no timeout
    -- COMMAND ARGS             Execute command with args after the test finishes
USAGE
    exit 1
}
wait_for()
{
    if [[ $TIMEOUT -gt 0 ]]; then
        echoerr "$cmdname: waiting $TIMEOUT seconds for $HOST:$PORT"
    else
        echoerr "$cmdname: waiting for $HOST:$PORT without a timeout"
    fi
    start_ts=$(date +%s)
    while :
    do
        if [[ $ISBUSY -eq 1 ]]; then
            nc -z $HOST $PORT
            result=$?
        else
            (echo > /dev/tcp/$HOST/$PORT) >/dev/null 2>&1
            result=$?
        fi
        if [[ $result -eq 0 ]]; then
            end_ts=$(date +%s)
            echoerr "$cmdname: $HOST:$PORT is available after $((end_ts - start_ts)) seconds"
            break
        fi
        sleep 1
    done
    return $result
}
wait_for_wrapper()
{
    # In order to support SIGINT during timeout: http://unix.stackexchange.com/a/57692
    if [[ $QUIET -eq 1 ]]; then
        timeout $BUSYTIMEFLAG $TIMEOUT $0 --quiet --child --host=$HOST --port=$PORT --timeout=$TIMEOUT &
    else
        timeout $BUSYTIMEFLAG $TIMEOUT $0 --child --host=$HOST --port=$PORT --timeout=$TIMEOUT &
    fi
    PID=$!
    trap "kill -INT -$PID" INT
    wait $PID
    RESULT=$?
    if [[ $RESULT -ne 0 ]]; then
        echoerr "$cmdname: timeout occurred after waiting $TIMEOUT seconds for $HOST:$PORT"
    fi
    return $RESULT
}
# process arguments
while [[ $# -gt 0 ]]
do
    case "$1" in
        *:* )
        hostport=(${1//:/ })
        HOST=${hostport[0]}
        PORT=${hostport[1]}
        shift 1
        ;;
        --child)
        CHILD=1
        shift 1
        ;;
        -q | --quiet)
        QUIET=1
        shift 1
        ;;
        -s | --strict)
        STRICT=1
        shift 1
        ;;
        -h)
        HOST="$2"
        if [[ $HOST == "" ]]; then break; fi
        shift 2
        ;;
        --host=*)
        HOST="${1#*=}"
        shift 1
        ;;
        -p)
        PORT="$2"
        if [[ $PORT == "" ]]; then break; fi
        shift 2
        ;;
        --port=*)
        PORT="${1#*=}"
        shift 1
        ;;
        -t)
        TIMEOUT="$2"
        if [[ $TIMEOUT == "" ]]; then break; fi
        shift 2
        ;;
        --timeout=*)
        TIMEOUT="${1#*=}"
        shift 1
        ;;
        --)
        shift
        CLI=("$@")
        break
        ;;
        --help)
        usage
        ;;
        *)
        echoerr "Unknown argument: $1"
        usage
        ;;
    esac
done
if [[ "$HOST" == "" || "$PORT" == "" ]]; then
    echoerr "Error: you need to provide a host and port to test."
    usage
fi
TIMEOUT=${TIMEOUT:-15}
STRICT=${STRICT:-0}
CHILD=${CHILD:-0}
QUIET=${QUIET:-0}
# check to see if timeout is from busybox?
TIMEOUT_PATH=$(realpath $(which timeout))
if [[ $TIMEOUT_PATH =~ "busybox" ]]; then
    ISBUSY=1
    BUSYTIMEFLAG="-t"
else
    ISBUSY=0
    BUSYTIMEFLAG=""
fi
if [[ $CHILD -gt 0 ]]; then
    wait_for
    RESULT=$?
    exit $RESULT
else
    if [[ $TIMEOUT -gt 0 ]]; then
        wait_for_wrapper
        RESULT=$?
    else
        wait_for
        RESULT=$?
    fi
fi
if [[ $CLI != "" ]]; then
    if [[ $RESULT -ne 0 && $STRICT -eq 1 ]]; then
        echoerr "$cmdname: strict mode, refusing to execute subprocess"
        exit $RESULT
    fi
    exec "${CLI[@]}"
else
    exit $RESULT
fi
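As an aside, the hostport=(${1//:/ }) line in the argument loop above splits a combined host:port argument with a bash parameter-expansion trick (replace every ":" with a space, then let word-splitting populate an array). A standalone illustration, with db:5432 as an example value:

```shell
#!/usr/bin/env bash
# Same idiom wait-for-it.sh uses: ${var//:/ } replaces all ":" with spaces;
# the unquoted expansion is word-split into array elements.
arg="db:5432"
hostport=(${arg//:/ })
HOST=${hostport[0]}
PORT=${hostport[1]}
echo "$HOST $PORT"
```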
oh sorry, I think I mis-typed. The wait-for-it.sh script is pretty commonly cargo-culted around in the docker community but I don’t see where you are calling it in your plan code or hooks.
Okay, the wait-for-it.sh appears in the plan.sh for the api image.
pkg_name=api
pkg_origin=npatel
pkg_version="0.1.0"
pkg_svc_user=app
pkg_deps=(core/node8 core/coreutils)
pkg_build_deps=(
  core/node8
  core/gcc
  core/rsync
)
pkg_exports=(
  [port]=http.listen_port
)
pkg_expose=(port)

do_begin() {
  export SSL_CERT_FILE=/src/tls-ca-bundle.pem
}

# build step clears the previous dist dir, installs npm packages, and builds the angular app
do_build() {
  pushd $HAB_CACHE_SRC_PATH/${pkg_dirname} > /dev/null
  rsync -a $PLAN_CONTEXT/../ --exclude='/node_modules' --exclude='*.env' ./
  pushd ./ > /dev/null
  npm install --production
  fix_node_module_bins
  popd > /dev/null && popd > /dev/null
}

# install copies generated files / packages to the prefix directory
do_install() {
  mkdir -p "${pkg_prefix}/api"
  cp -vr "$HAB_CACHE_SRC_PATH/${pkg_dirname}" "${pkg_prefix}/api/"
  chmod +x wait-for-it.sh
}

do_setup_environment() {
  set_buildtime_env -f http_proxy "http://10.127.40.152:8000"
  set_buildtime_env -f https_proxy "http://10.127.40.152:8000"
  echo "finished setting up env"
}

fix_node_module_bins() {
  echo "fixing node modules"
  for b in node_modules/.bin/*; do
    fix_interpreter $(readlink -f -n $b) core/coreutils bin/env
  done
}
right, but where or how is it getting called?
It is executed when running docker-compose. It appears in the docker-compose.yml
Gotcha, well, you shouldn’t need that script anymore because Habitat will do all the waiting for you with service bindings. Can you share your docker-compose file?
version: '2'
services:
  api:
    image: npatel/api
    depends_on:
      - postgres
    environment:
      DATABASE_URL: postgres://$POSTGRES_USER:$POSTGRES_PASSWORD@postgres/$POSTGRES_DB
      DATABASE_SSL: 0
      SMTP_HOST: $SMTP_HOST
      SMTP_USERNAME: $SMTP_USERNAME
      SMTP_PASSWORD: $SMTP_PASSWORD
      AUTOMATE_YES_EXE: $AUTOMATE_YES_EXE
      AUTOMATE_NO_EXE: $AUTOMATE_NO_EXE
      SEPARATOR_SPACES: $SEPARATOR_SPACES
    ports:
      - 3000:$API_PORT
    volumes:
      - ./config:/opt/app-root/src/data/
      - ./exe:/api/exe/
    command: ["./wait-for-it.sh", "db:5432", "--", "node", "./app.js"]
  app:
    image: npatel/app
    ports:
      - 8080:$APP_PORT
    environment:
      API_HOSTNAME:
      API_PORT: $API_PORT
  postgres:
    image: postgres
    expose:
      - 5432
    environment:
      POSTGRES_USER: $POSTGRES_USER
      POSTGRES_PASSWORD: $POSTGRES_PASSWORD
      POSTGRES_DB: $POSTGRES_DB
Ah okay. So you won’t be able to use the old docker-compose file.
Admittedly I don’t know anything about your app but you might want your compose to look more like:
version: '2'
services:
  api:
    image: npatel/api
    depends_on:
      - postgres
    ports:
      - 3000:$API_PORT
    volumes:
      - ./config:/opt/app-root/src/data/
      - ./exe:/api/exe/
    command: ["load", "--bind", "db:postgresql.default"]
  app:
    image: npatel/app
    ports:
      - 8080:$APP_PORT
  postgres:
    image: habitat/postgresql
    expose:
      - 5432
you’ll also want your plan for api to look like:
pkg_name=api
pkg_origin=npatel
pkg_version="0.1.0"
pkg_svc_user=app
pkg_deps=(core/node8 core/coreutils)
pkg_build_deps=(
  core/node8
  core/gcc
  core/rsync
)
pkg_exports=(
  [port]=http.listen_port
)
pkg_expose=(port)
pkg_binds=(
  [db]="port"
)

do_begin() {
  export SSL_CERT_FILE=/src/tls-ca-bundle.pem
}

# build step clears the previous dist dir, installs npm packages, and builds the angular app
do_build() {
  pushd $HAB_CACHE_SRC_PATH/${pkg_dirname} > /dev/null
  rsync -a $PLAN_CONTEXT/../ --exclude='/node_modules' --exclude='*.env' ./
  pushd ./ > /dev/null
  npm install --production
  fix_node_module_bins
  popd > /dev/null && popd > /dev/null
}

# install copies generated files / packages to the prefix directory
do_install() {
  mkdir -p "${pkg_prefix}/api"
  cp -vr "$HAB_CACHE_SRC_PATH/${pkg_dirname}" "${pkg_prefix}/api/"
}

do_setup_environment() {
  set_buildtime_env -f http_proxy "http://10.127.40.152:8000"
  set_buildtime_env -f https_proxy "http://10.127.40.152:8000"
  echo "finished setting up env"
}

fix_node_module_bins() {
  echo "fixing node modules"
  for b in node_modules/.bin/*; do
    fix_interpreter $(readlink -f -n $b) core/coreutils bin/env
  done
}
and a config file that references the db.
You can read more about bindings here: https://www.habitat.sh/docs/developing-packages/#runtime-binding
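For example, a hypothetical sketch of such a templated config (not your actual config file: the file name, JSON keys, and settings are assumptions, and `db` is whatever bind name your plan declares; `bind.db.first.sys.ip` and `bind.db.first.cfg.port` are Habitat's runtime-binding template accessors):

```
{{#if bind.db}}
{
  "database": {
    "host": "{{bind.db.first.sys.ip}}",
    "port": {{bind.db.first.cfg.port}}
  }
}
{{/if}}
```

At service start, the Supervisor renders this with the bound postgres service's address and exported port, which is what replaces the wait-for-it.sh polling.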
I have modified the plan.sh and docker-compose. The docker compose is below:
I am unable to test yet due to an issue with installing the pkg_deps that is being looked into.
version: '2'
services:
  api:
    image: npatel/api
    depends_on:
      - postgres
    environment:
      DATABASE_URL: postgres://$POSTGRES_USER:$POSTGRES_PASSWORD@postgres/$POSTGRES_DB
      DATABASE_SSL: 0
      SMTP_HOST: $SMTP_HOST
      SMTP_USERNAME: $SMTP_USERNAME
      SMTP_PASSWORD: $SMTP_PASSWORD
      AUTOMATE_YES_EXE: $AUTOMATE_YES_EXE
      AUTOMATE_NO_EXE: $AUTOMATE_NO_EXE
      SEPARATOR_SPACES: $SEPARATOR_SPACES
    ports:
      - 3000:$API_PORT
    volumes:
      - ./config:/opt/app-root/src/data/
      - ./exe:/api/exe/
    command: ["load", "--bind", "db:postgresql.default", "--", "node", "./app.js"]
  app:
    image: npatel/app
    ports:
      - 8080:$APP_PORT
    environment:
      API_HOSTNAME:
      API_PORT: $API_PORT
  postgres:
    image: habitat/postgresql
    expose:
      - 5432
    environment:
      POSTGRES_USER: $POSTGRES_USER
      POSTGRES_PASSWORD: $POSTGRES_PASSWORD
      POSTGRES_DB: $POSTGRES_DB
@nealajpatel we uploaded a “rebuild of the world” yesterday, which is why you’re seeing the dep conflicts. You’ll need to rebuild your packages. Additionally, you may need to start with a clean studio (on Linux you’ll want to hab studio rm).
I might also suggest trying some of the Learn Chef tutorials. This one in particular has some info on setting up a docker-compose file: https://learn.chef.io/modules/try-habitat#/habitat-deploy