
Because Jupyter uses WebSocket, the additional settings shown above are required when putting it behind a reverse proxy.
I use Splunk frequently at work, and the following is excerpted from a site I bookmarked early on to help me understand it while learning.
Source: http://www.innovato.com/splunk/SQLSplunk.html
Splunk for SQL Users
| SQL term | Splunk term | Notes |
|---|---|---|
| SQL query | Splunk search | A Splunk search retrieves indexed data and can perform transforming and reporting operations. Results from one search can be "piped" (i.e., transferred) from command to command, to filter, modify, reorder, and group your results. |
| table / view | search results | Search results can be thought of as a database view, a dynamically generated table of rows, with columns. |
| index | index | All values and fields are indexed in Splunk, so there is no need to manually add, update, drop, or even think about indexing columns. Everything can be quickly retrieved automatically. |
| row | result / event | A result in Splunk is a list of field (i.e., column) values, corresponding to a table row. An event is a result that has a timestamp and raw text. Typically an event is a record from a log file, such as: 173.26.34.223 - - [01/Jul/2009:12:05:27 -0700] "GET /trade/app?action=logout HTTP/1.1" 200 2953 |
| column | field | Fields in Splunk are dynamically returned from a search, meaning that one search might return a set of fields, while another search might return another set. After teaching Splunk how to extract more fields from the raw underlying data, the same search will return more fields than it previously did. Fields in Splunk are not tied to a datatype. |
| database / schema | index / app | In Splunk, an index is a collection of data, somewhat like a database has a collection of tables. Domain knowledge of that data, how to extract it, what reports to run, etc., are stored in a Splunk app. |
| SQL Feature | SQL Example | Splunk Example |
|---|---|---|
| SELECT * | SELECT * FROM mytable | source=mytable |
| WHERE | SELECT * FROM mytable WHERE mycolumn=5 | source=mytable mycolumn=5 |
| SELECT | SELECT mycolumn1, mycolumn2 FROM mytable | source=mytable \| FIELDS mycolumn1, mycolumn2 |
| AND / OR | SELECT * FROM mytable WHERE (mycolumn1="true" OR mycolumn2="red") AND mycolumn3="blue" | source=mytable AND (mycolumn1="true" OR mycolumn2="red") AND mycolumn3="blue" |
| AS (alias) | SELECT mycolumn AS column_alias FROM mytable | source=mytable \| RENAME mycolumn as column_alias \| FIELDS column_alias |
| BETWEEN | SELECT * FROM mytable WHERE mycolumn BETWEEN 1 AND 5 | source=mytable mycolumn>=1 mycolumn<=5 |
| GROUP BY | SELECT mycolumn, avg(mycolumn) FROM mytable WHERE mycolumn=value GROUP BY mycolumn | source=mytable mycolumn=value \| STATS avg(mycolumn) BY mycolumn \| FIELDS mycolumn, avg(mycolumn) |
| HAVING | SELECT mycolumn, avg(mycolumn) FROM mytable WHERE mycolumn=value GROUP BY mycolumn HAVING avg(mycolumn)=value | source=mytable mycolumn=value \| STATS avg(mycolumn) BY mycolumn \| SEARCH avg(mycolumn)=value \| FIELDS mycolumn, avg(mycolumn) |
| LIKE | SELECT * FROM mytable WHERE mycolumn LIKE "%some text%" | source=mytable mycolumn="\*some text\*" Note: The most common search usage in Splunk is actually something that is nearly impossible in SQL -- searching all fields for a substring. The following search returns all rows that contain "some text" anywhere: source=mytable "some text" |
| ORDER BY | SELECT * FROM mytable ORDER BY mycolumn desc | source=mytable \| SORT -mycolumn |
| SELECT DISTINCT | SELECT DISTINCT mycolumn1, mycolumn2 FROM mytable | source=mytable \| DEDUP mycolumn1 \| FIELDS mycolumn1, mycolumn2 |
| SELECT TOP | SELECT TOP 5 mycolumn1, mycolumn2 FROM mytable | source=mytable \| TOP mycolumn1, mycolumn2 |
| INNER JOIN | SELECT * FROM mytable1 INNER JOIN mytable2 ON mytable1.mycolumn=mytable2.mycolumn | source=mytable1 \| JOIN type=inner mycolumn [ SEARCH source=mytable2 ] Note: Joins in Splunk can be achieved with JOIN as above, or by other methods as well. |
| LEFT (OUTER) JOIN | SELECT * FROM mytable1 LEFT JOIN mytable2 ON mytable1.mycolumn=mytable2.mycolumn | source=mytable1 \| JOIN type=left mycolumn [ SEARCH source=mytable2 ] |
| SELECT INTO | SELECT * INTO new_mytable IN mydb2 FROM old_mytable | source=old_mytable \| EVAL source=new_mytable \| COLLECT index=mydb2 Note: COLLECT is typically used to store expensively calculated fields back into Splunk so that future access is much faster. This example is atypical but is shown for comparison with SQL's command; source will be renamed orig_source. |
| TRUNCATE TABLE | TRUNCATE TABLE mytable | source=mytable \| DELETE |
| INSERT INTO | INSERT INTO mytable VALUES (value1, value2, value3, ...) | Note: see SELECT INTO. Individual records are not added via the search language, but can be added via the API if need be. |
| UNION | SELECT mycolumn FROM mytable1 UNION SELECT mycolumn FROM mytable2 | source=mytable1 \| APPEND [ SEARCH source=mytable2 ] \| DEDUP mycolumn |
| UNION ALL | SELECT * FROM mytable1 UNION ALL SELECT * FROM mytable2 | source=mytable1 \| APPEND [ SEARCH source=mytable2 ] |
| DELETE | DELETE FROM mytable WHERE mycolumn=5 | source=mytable mycolumn=5 \| DELETE |
| UPDATE | UPDATE mytable SET column1=value, column2=value, ... WHERE some_column=some_value | Note: There are a few things to think about when updating records in Splunk. First, you can just add the new values into Splunk (see INSERT INTO) and not worry about deleting the old values, because Splunk always returns the most recent results first. Second, on retrieval, you can always de-duplicate the results to ensure only the latest values are used (see SELECT DISTINCT). Finally, you can actually delete the old records (see DELETE). |
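All of the searches in the table can also be run outside the Splunk web UI. Below is a minimal sketch using the splunk-sdk Python package; the host, port, credentials, and the example query are placeholders for illustration, not values from the source article.

```python
# Run one of the table's searches programmatically via the Splunk REST API.
# Requires the splunk-sdk package: pip install splunk-sdk
import splunklib.client as client
import splunklib.results as results

# Placeholder connection details -- adjust to your own Splunk instance.
service = client.connect(
    host="localhost",
    port=8089,
    username="admin",
    password="changeme",
)

# Rough equivalent of: SELECT mycolumn, avg(mycolumn) FROM mytable GROUP BY mycolumn
query = "search source=mytable | stats avg(mycolumn) by mycolumn"
for row in results.ResultsReader(service.jobs.oneshot(query)):
    print(row)  # each row is a dict-like mapping of field name to value
```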
Colab is basically free.
It provides the following specs:
- CPU: Intel Xeon 2.2GHz or Intel Xeon 2.3GHz
- GPU: K80 or T4
- RAM: 13GB
- Limits: sessions stop after 90 minutes of inactivity, with a maximum of 12 hours of continuous use

There is, however, a paid plan called Colab Pro.
It costs $9.99 a month, which can feel cheap or expensive depending on how you look at it.
(VAT is charged separately, so the actual billed amount is $10.62.)
Pro provides the following specs:
- CPU: Intel Xeon 2.3GHz
- GPU: T4 or P100
- RAM: 25GB
- Limits: up to 24 hours of continuous use

More than the hardware itself, it is the 12-hour limit that causes the most stress, so subscribing is not a bad idea if you need the extra headroom.
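Incidentally, the hardware you are assigned varies from session to session, so it is worth checking after connecting. A quick sketch for a GPU runtime (nvidia-smi and the psutil package are available on the standard Colab image listed below):

```python
# Check which GPU and how much RAM the current Colab session was assigned.
import subprocess
import psutil

# GPU name and memory, as reported by the NVIDIA driver
gpu = subprocess.run(
    ["nvidia-smi", "--query-gpu=name,memory.total", "--format=csv,noheader"],
    capture_output=True, text=True,
)
print("GPU:", gpu.stdout.strip() or "no GPU visible")

# Total system RAM in GiB
print(f"RAM: {psutil.virtual_memory().total / 2**30:.1f} GiB")
```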
As of 2021-08-12, the Python packages installed on a Colab GPU instance are as follows:
absl-py==0.12.0 alabaster==0.7.12 albumentations==0.1.12 altair==4.1.0 appdirs==1.4.4 argon2-cffi==20.1.0 arviz==0.11.2 astor==0.8.1 astropy==4.2.1 astunparse==1.6.3 async-generator==1.10 atari-py==0.2.9 atomicwrites==1.4.0 attrs==21.2.0 audioread==2.1.9 autograd==1.3 Babel==2.9.1 backcall==0.2.0 beautifulsoup4==4.6.3 bleach==3.3.0 blis==0.4.1 bokeh==2.3.3 Bottleneck==1.3.2 branca==0.4.2 bs4==0.0.1 CacheControl==0.12.6 cached-property==1.5.2 cachetools==4.2.2 catalogue==1.0.0 certifi==2021.5.30 cffi==1.14.6 cftime==1.5.0 chardet==3.0.4 charset-normalizer==2.0.2 click==7.1.2 cloudpickle==1.3.0 cmake==3.12.0 cmdstanpy==0.9.5 colorcet==2.0.6 colorlover==0.3.0 community==1.0.0b1 contextlib2==0.5.5 convertdate==2.3.2 coverage==3.7.1 coveralls==0.5 crcmod==1.7 cufflinks==0.17.3 cvxopt==1.2.6 cvxpy==1.0.31 cycler==0.10.0 cymem==2.0.5 Cython==0.29.23 daft==0.0.4 dask==2.12.0 datascience==0.10.6 debugpy==1.0.0 decorator==4.4.2 defusedxml==0.7.1 descartes==1.1.0 dill==0.3.4 distributed==1.25.3 dlib @ file:///dlib-19.18.0-cp37-cp37m-linux_x86_64.whl dm-tree==0.1.6 docopt==0.6.2 docutils==0.17.1 dopamine-rl==1.0.5 earthengine-api==0.1.272 easydict==1.9 ecos==2.0.7.post1 editdistance==0.5.3 en-core-web-sm @ https://github.com/explosion/spacy-models/releases/download/en_core_web_sm-2.2.5/en_core_web_sm-2.2.5.tar.gz entrypoints==0.3 ephem==4.0.0.2 et-xmlfile==1.1.0 fa2==0.3.5 fastai==1.0.61 fastdtw==0.3.4 fastprogress==1.0.0 fastrlock==0.6 fbprophet==0.7.1 feather-format==0.4.1 filelock==3.0.12 firebase-admin==4.4.0 fix-yahoo-finance==0.0.22 Flask==1.1.4 flatbuffers==1.12 folium==0.8.3 future==0.16.0 gast==0.4.0 GDAL==2.2.2 gdown==3.6.4 gensim==3.6.0 geographiclib==1.52 geopy==1.17.0 gin-config==0.4.0 glob2==0.7 google==2.0.3 google-api-core==1.26.3 google-api-python-client==1.12.8 google-auth==1.32.1 google-auth-httplib2==0.0.4 google-auth-oauthlib==0.4.4 google-cloud-bigquery==1.21.0 google-cloud-bigquery-storage==1.1.0 google-cloud-core==1.0.3 google-cloud-datastore==1.8.0 google-cloud-firestore==1.7.0 google-cloud-language==1.2.0 google-cloud-storage==1.18.1 google-cloud-translate==1.5.0 google-colab @ file:///colabtools/dist/google-colab-1.0.0.tar.gz google-pasta==0.2.0 google-resumable-media==0.4.1 googleapis-common-protos==1.53.0 googledrivedownloader==0.4 graphviz==0.10.1 greenlet==1.1.0 grpcio==1.34.1 gspread==3.0.1 gspread-dataframe==3.0.8 gym==0.17.3 h5py==3.1.0 HeapDict==1.0.1 hijri-converter==2.1.3 holidays==0.10.5.2 holoviews==1.14.4 html5lib==1.0.1 httpimport==0.5.18 httplib2==0.17.4 httplib2shim==0.0.3 humanize==0.5.1 hyperopt==0.1.2 ideep4py==2.0.0.post3 idna==2.10 imageio==2.4.1 imagesize==1.2.0 imbalanced-learn==0.4.3 imblearn==0.0 imgaug==0.2.9 importlib-metadata==4.6.1 importlib-resources==5.2.0 imutils==0.5.4 inflect==2.1.0 iniconfig==1.1.1 install==1.3.4 intel-openmp==2021.3.0 intervaltree==2.1.0 ipykernel==4.10.1 ipython==5.5.0 ipython-genutils==0.2.0 ipython-sql==0.3.9 ipywidgets==7.6.3 itsdangerous==1.1.0 jax==0.2.17 jaxlib @ https://storage.googleapis.com/jax-releases/cuda110/jaxlib-0.1.69+cuda110-cp37-none-manylinux2010_x86_64.whl jdcal==1.4.1 jedi==0.18.0 jieba==0.42.1 Jinja2==2.11.3 joblib==1.0.1 jpeg4py==0.1.4 jsonschema==2.6.0 jupyter==1.0.0 jupyter-client==5.3.5 jupyter-console==5.2.0 jupyter-core==4.7.1 jupyterlab-pygments==0.1.2 jupyterlab-widgets==1.0.0 kaggle==1.5.12 kapre==0.3.5 Keras==2.4.3 keras-nightly==2.5.0.dev2021032900 Keras-Preprocessing==1.1.2 keras-vis==0.4.1 kiwisolver==1.3.1 korean-lunar-calendar==0.2.1 librosa==0.8.1 lightgbm==2.2.3 llvmlite==0.34.0 
lmdb==0.99 LunarCalendar==0.0.9 lxml==4.2.6 Markdown==3.3.4 MarkupSafe==2.0.1 matplotlib==3.2.2 matplotlib-inline==0.1.2 matplotlib-venn==0.11.6 missingno==0.5.0 mistune==0.8.4 mizani==0.6.0 mkl==2019.0 mlxtend==0.14.0 more-itertools==8.8.0 moviepy==0.2.3.5 mpmath==1.2.1 msgpack==1.0.2 multiprocess==0.70.12.2 multitasking==0.0.9 murmurhash==1.0.5 music21==5.5.0 natsort==5.5.0 nbclient==0.5.3 nbconvert==5.6.1 nbformat==5.1.3 nest-asyncio==1.5.1 netCDF4==1.5.7 networkx==2.5.1 nibabel==3.0.2 nltk==3.2.5 notebook==5.3.1 numba==0.51.2 numexpr==2.7.3 numpy==1.19.5 nvidia-ml-py3==7.352.0 oauth2client==4.1.3 oauthlib==3.1.1 okgrade==0.4.3 opencv-contrib-python==4.1.2.30 opencv-python==4.1.2.30 openpyxl==2.5.9 opt-einsum==3.3.0 osqp==0.6.2.post0 packaging==21.0 palettable==3.3.0 pandas==1.1.5 pandas-datareader==0.9.0 pandas-gbq==0.13.3 pandas-profiling==1.4.1 pandocfilters==1.4.3 panel==0.11.3 param==1.11.1 parso==0.8.2 pathlib==1.0.1 patsy==0.5.1 pexpect==4.8.0 pickleshare==0.7.5 Pillow==7.1.2 pip-tools==4.5.1 plac==1.1.3 plotly==4.4.1 plotnine==0.6.0 pluggy==0.7.1 pooch==1.4.0 portpicker==1.3.9 prefetch-generator==1.0.1 preshed==3.0.5 prettytable==2.1.0 progressbar2==3.38.0 prometheus-client==0.11.0 promise==2.3 prompt-toolkit==1.0.18 protobuf==3.17.3 psutil==5.4.8 psycopg2==2.7.6.1 ptyprocess==0.7.0 py==1.10.0 pyarrow==3.0.0 pyasn1==0.4.8 pyasn1-modules==0.2.8 pycocotools==2.0.2 pycparser==2.20 pyct==0.4.8 pydata-google-auth==1.2.0 pydot==1.3.0 pydot-ng==2.0.0 pydotplus==2.0.2 PyDrive==1.3.1 pyemd==0.5.1 pyerfa==2.0.0 pyglet==1.5.0 Pygments==2.6.1 pygobject==3.26.1 pymc3==3.11.2 PyMeeus==0.5.11 pymongo==3.11.4 pymystem3==0.2.0 PyOpenGL==3.1.5 pyparsing==2.4.7 pyrsistent==0.18.0 pysndfile==1.3.8 PySocks==1.7.1 pystan==2.19.1.1 pytest==3.6.4 python-apt==0.0.0 python-chess==0.23.11 python-dateutil==2.8.1 python-louvain==0.15 python-slugify==5.0.2 python-utils==2.5.6 pytz==2018.9 pyviz-comms==2.1.0 PyWavelets==1.1.1 PyYAML==3.13 pyzmq==22.1.0 qdldl==0.1.5.post0 qtconsole==5.1.1 QtPy==1.9.0 regex==2019.12.20 requests==2.23.0 requests-oauthlib==1.3.0 resampy==0.2.2 retrying==1.3.3 rpy2==3.4.5 rsa==4.7.2 scikit-image==0.16.2 scikit-learn==0.22.2.post1 scipy==1.4.1 screen-resolution-extra==0.0.0 scs==2.1.4 seaborn==0.11.1 semver==2.13.0 Send2Trash==1.7.1 setuptools-git==1.2 Shapely==1.7.1 simplegeneric==0.8.1 six==1.15.0 sklearn==0.0 sklearn-pandas==1.8.0 smart-open==5.1.0 snowballstemmer==2.1.0 sortedcontainers==2.4.0 SoundFile==0.10.3.post1 spacy==2.2.4 Sphinx==1.8.5 sphinxcontrib-serializinghtml==1.1.5 sphinxcontrib-websupport==1.2.4 SQLAlchemy==1.4.20 sqlparse==0.4.1 srsly==1.0.5 statsmodels==0.10.2 sympy==1.7.1 tables==3.4.4 tabulate==0.8.9 tblib==1.7.0 tensorboard==2.5.0 tensorboard-data-server==0.6.1 tensorboard-plugin-wit==1.8.0 tensorflow @ file:///tensorflow-2.5.0-cp37-cp37m-linux_x86_64.whl tensorflow-datasets==4.0.1 tensorflow-estimator==2.5.0 tensorflow-gcs-config==2.5.0 tensorflow-hub==0.12.0 tensorflow-metadata==1.1.0 tensorflow-probability==0.13.0 termcolor==1.1.0 terminado==0.10.1 testpath==0.5.0 text-unidecode==1.3 textblob==0.15.3 Theano-PyMC==1.1.2 thinc==7.4.0 tifffile==2021.7.2 toml==0.10.2 toolz==0.11.1 torch @ https://download.pytorch.org/whl/cu102/torch-1.9.0%2Bcu102-cp37-cp37m-linux_x86_64.whl torchsummary==1.5.1 torchtext==0.10.0 torchvision @ https://download.pytorch.org/whl/cu102/torchvision-0.10.0%2Bcu102-cp37-cp37m-linux_x86_64.whl tornado==5.1.1 tqdm==4.41.1 traitlets==5.0.5 tweepy==3.10.0 typeguard==2.7.1 typing-extensions==3.7.4.3 tzlocal==1.5.1 uritemplate==3.0.1 
urllib3==1.24.3 vega-datasets==0.9.0 wasabi==0.8.2 wcwidth==0.2.5 webencodings==0.5.1 Werkzeug==1.0.1 widgetsnbextension==3.5.1 wordcloud==1.5.0 wrapt==1.12.1 xarray==0.18.2 xgboost==0.90 xkit==0.0.0 xlrd==1.1.0 xlwt==1.3.0 yellowbrick==0.9.1 zict==2.0.0 zipp==3.5.0
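A list like the one above can be regenerated at any time from a notebook cell (the versions will of course drift over time); a small sketch:

```python
# Dump the packages installed in the current runtime (equivalent to `!pip freeze`).
import subprocess
print(subprocess.run(["pip", "freeze"], capture_output=True, text=True).stdout)
```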