ChainAggLib - library for aggregation of main chain tickers
Library "ChainAggLib"
ChainAggLib maps a token to its main protocol coin (chain) and to the top-5 exchange tickers used for volume aggregation.
Library only (no plots). All helpers are pure functions and do not modify globals.
norm_sym(s)
Parameters:
s (string)
get_base_from_symbol(full_symbol)
Parameters:
full_symbol (string)
get_chain_for_token(token_symbol)
Parameters:
token_symbol (string)
get_top5_exchange_tickers_for_chain(chain_code)
Parameters:
chain_code (string)
get_top5_exchange_tickers_for_token(token_symbol)
Parameters:
token_symbol (string)
join_tickers(arr)
Parameters:
arr (array)
contains_symbol(arr, symbol)
Parameters:
arr (array)
symbol (string)
contains_current(arr)
Parameters:
arr (array)
get_arr_for_current_token()
get_chain_for_current()
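A minimal usage sketch follows (the import path is a placeholder, the array-of-ticker-strings return type is inferred from the helper names, and the per-ticker volume loop relies on Pine v6 dynamic requests):
//@version=6
indicator("Top-5 exchange volume (sketch)")
// Placeholder import path; replace with the actual publisher/version of ChainAggLib.
import PUBLISHER/ChainAggLib/1 as agg
// Assumed return type: array<string> of up to five exchange tickers for the chart's token.
tickers = agg.get_arr_for_current_token()
// Sum each ticker's volume (series symbols in request.security need Pine v6 dynamic requests).
float aggVol = 0.0
if array.size(tickers) > 0
    for i = 0 to array.size(tickers) - 1
        aggVol += nz(request.security(array.get(tickers, i), timeframe.period, volume))
plot(aggVol, "Aggregated volume", color.blue, style = plot.style_columns)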
LibVPrf
Library "LibVPrf"
This library provides an object-oriented framework for volume
profile analysis in Pine Script®. It is built around the `VProf`
User-Defined Type (UDT), which encapsulates all data, settings,
and statistical metrics for a single profile, enabling stateful
analysis with on-demand calculations.
Key Features:
1. **Object-Oriented Design (UDT):** The library is built around
the `VProf` UDT. This object encapsulates all profile data
and provides methods for its full lifecycle management,
including creation, cloning, clearing, and merging of profiles.
2. **Volume Allocation (`AllotMode`):** Offers two methods for
allocating a bar's volume:
- **Classic:** Assigns the entire bar's volume to the close
price bucket.
- **PDF:** Distributes volume across the bar's range using a
statistical price distribution model from the `LibBrSt` library.
3. **Buy/Sell Volume Splitting (`SplitMode`):** Provides methods
for classifying volume into buying and selling pressure:
- **Classic:** Classifies volume based on the bar's color (Close vs. Open).
- **Dynamic:** A specific model that analyzes candle structure
(body vs. wicks) and a short-term trend factor to
estimate the buy/sell share at each price level.
4. **Statistical Analysis (On-Demand):** Offers a suite of
statistical metrics calculated using a "Lazy Evaluation"
pattern (computed only when requested via `get...` methods):
- **Central Tendency:** Point of Control (POC), VWAP, and Median.
- **Dispersion:** Value Area (VA) and Population Standard Deviation.
- **Shape:** Skewness and Excess Kurtosis.
- **Delta:** Cumulative Volume Delta, including its
historical high/low watermarks.
5. **Structural Analysis:** Includes a parameter-free method
(`getSegments`) to decompose a profile into its fundamental
unimodal segments, allowing for modality detection (e.g.,
identifying bimodal profiles).
6. **Dynamic Profile Management:**
- **Auto-Fitting:** Profiles set to `dynamic = true` will
automatically expand their price range to fit new data.
- **Manipulation:** The resolution, price range, and Value Area
of a dynamic profile can be changed at any time. This
triggers a resampling process that uses a **linear
interpolation model** to re-bucket existing volume.
- **Assumption:** Non-dynamic profiles are fixed and will throw
a `runtime.error` if `addBar` is called with data
outside their initial range.
7. **Bucket-Level Access:** Provides getter methods for direct
iteration and analysis of the raw buy/sell volume and price
boundaries of each individual price bucket.
---
**DISCLAIMER**
This library is provided "AS IS" and for informational and
educational purposes only. It does not constitute financial,
investment, or trading advice.
The author assumes no liability for any errors, inaccuracies,
or omissions in the code. Using this library to build
trading indicators or strategies is entirely at your own risk.
As a developer using this library, you are solely responsible
for the rigorous testing, validation, and performance of any
scripts you create based on these functions. The author shall
not be held liable for any financial losses incurred directly
or indirectly from the use of this library or any scripts
derived from it.
create(buckets, rangeUp, rangeLo, dynamic, valueArea, allot, estimator, cdfSteps, split, trendLen)
Construct a new `VProf` object with fixed bucket count & range.
Parameters:
buckets (int) : series int number of price buckets ≥ 1
rangeUp (float) : series float upper price bound (absolute)
rangeLo (float) : series float lower price bound (absolute)
dynamic (bool) : series bool Flag for dynamic adaption of profile ranges
valueArea (int) : series int Percentage of total volume to include in the Value Area (1..100)
allot (series AllotMode) : series AllotMode Allocation mode `classic` or `pdf` (default `classic`)
estimator (series PriceEst enum from AustrianTradingMachine/LibBrSt/1) : series LibBrSt.PriceEst PDF model used when `allot = pdf` (default `uniform`)
cdfSteps (int) : series int Even number of sub-intervals for Simpson's rule (default 20)
split (series SplitMode) : series SplitMode Buy/Sell determination (default `classic`)
trendLen (int) : series int Look‑back bars for trend factor (default 3)
Returns: VProf freshly initialised profile
method clone(self)
Create a deep copy of the volume profile.
Namespace types: VProf
Parameters:
self (VProf) : VProf Profile object to copy
Returns: VProf A new, independent copy of the profile
method clear(self)
Reset all bucket tallies while keeping configuration intact.
Namespace types: VProf
Parameters:
self (VProf) : VProf profile object
Returns: VProf cleared profile (chaining)
method merge(self, srcABuy, srcASell, srcRangeUp, srcRangeLo, srcCvd, srcCvdHi, srcCvdLo)
Merges volume data from a source profile into the current profile.
If resizing is needed, it performs a high-fidelity re-bucketing of existing
volume using a linear interpolation model inferred from neighboring buckets,
preventing aliasing artifacts and ensuring accurate volume preservation.
Namespace types: VProf
Parameters:
self (VProf) : VProf The target profile object to merge into.
srcABuy (array) : array The source profile's buy volume bucket array.
srcASell (array) : array The source profile's sell volume bucket array.
srcRangeUp (float) : series float The upper price bound of the source profile.
srcRangeLo (float) : series float The lower price bound of the source profile.
srcCvd (float) : series float The final Cumulative Volume Delta (CVD) value of the source profile.
srcCvdHi (float) : series float The historical high-water mark of the CVD from the source profile.
srcCvdLo (float) : series float The historical low-water mark of the CVD from the source profile.
Returns: VProf `self` (chaining), now containing the merged data.
method addBar(self, offset)
Add current bar’s volume to the profile (call once per realtime bar).
classic mode: allocates all volume to the close bucket and classifies
by `close >= open`. PDF mode: distributes volume across buckets by the
estimator’s CDF mass. For `split = dynamic`, the buy/sell share per
price is computed via context-driven piecewise s(u).
Namespace types: VProf
Parameters:
self (VProf) : VProf Profile object
offset (int) : series int To offset the calculated bar
Returns: VProf `self` (method chaining)
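A minimal sketch of the create → addBar → getter flow (the import path is assumed; this listing only confirms the publisher of LibBrSt):
//@version=6
indicator("Rolling POC (sketch)", overlay = true)
// Assumed import path; adjust to the library's actual publisher/version.
import AustrianTradingMachine/LibVPrf/1 as vp
// One dynamic profile, created once; its range auto-expands as bars are added.
var profile = vp.create(50, high, low, true, 70)
profile.addBar(0)                      // allocate the current bar's volume (offset 0)
[pocIdx, pocPrice] = profile.getPoc()  // lazy evaluation: POC is computed on request
plot(pocPrice, "POC", color.orange, 2)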
method setBuckets(self, buckets)
Sets the number of buckets for the volume profile.
Behavior depends on the `isDynamic` flag.
- If `dynamic = true`: Works on filled profiles by re-bucketing to a new resolution.
- If `dynamic = false`: Only works on empty profiles to prevent accidental changes.
Namespace types: VProf
Parameters:
self (VProf) : VProf Profile object
buckets (int) : series int The new number of buckets
Returns: VProf `self` (chaining)
method setRanges(self, rangeUp, rangeLo)
Sets the price range for the volume profile.
Behavior depends on the `dynamic` flag.
- If `dynamic = true`: Works on filled profiles by re-bucketing existing volume.
- If `dynamic = false`: Only works on empty profiles to prevent accidental changes.
Namespace types: VProf
Parameters:
self (VProf) : VProf Profile object
rangeUp (float) : series float The new upper price bound
rangeLo (float) : series float The new lower price bound
Returns: VProf `self` (chaining)
method setValueArea(self, valueArea)
Set the percentage of volume for the Value Area. If the value
changes, the profile is finalized again.
Namespace types: VProf
Parameters:
self (VProf) : VProf Profile object
valueArea (int) : series int The new Value Area percentage (0..100)
Returns: VProf `self` (chaining)
method getBktBuyVol(self, idx)
Get Buy volume of a bucket.
Namespace types: VProf
Parameters:
self (VProf) : VProf Profile object
idx (int) : series int Bucket index
Returns: series float Buy volume ≥ 0
method getBktSellVol(self, idx)
Get Sell volume of a bucket.
Namespace types: VProf
Parameters:
self (VProf) : VProf Profile object
idx (int) : series int Bucket index
Returns: series float Sell volume ≥ 0
method getBktBnds(self, idx)
Get Bounds of a bucket.
Namespace types: VProf
Parameters:
self (VProf) : VProf Profile object
idx (int) : series int Bucket index
Returns:
up series float The upper price bound of the bucket.
lo series float The lower price bound of the bucket.
method getPoc(self)
Get POC information.
Namespace types: VProf
Parameters:
self (VProf) : VProf Profile object
Returns:
pocIndex series int The index of the Point of Control (POC) bucket.
pocPrice series float The mid-price of the Point of Control (POC) bucket.
method getVA(self)
Get Value Area (VA) information.
Namespace types: VProf
Parameters:
self (VProf) : VProf Profile object
Returns:
vaUpIndex series int The index of the upper bound bucket of the Value Area.
vaUpPrice series float The upper price bound of the Value Area.
vaLoIndex series int The index of the lower bound bucket of the Value Area.
vaLoPrice series float The lower price bound of the Value Area.
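For example, a sketch that shades the Value Area band (same assumed import path as above):
//@version=6
indicator("Value Area band (sketch)", overlay = true)
import AustrianTradingMachine/LibVPrf/1 as vp  // assumed path
var profile = vp.create(50, high, low, true, 70)
profile.addBar(0)
[vaUpIdx, vaUpPrice, vaLoIdx, vaLoPrice] = profile.getVA()
pUp = plot(vaUpPrice, "VA high", color.teal)
pLo = plot(vaLoPrice, "VA low", color.teal)
fill(pUp, pLo, color.new(color.teal, 85), "Value Area")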
method getMedian(self)
Get the profile's median price and its bucket index. Calculates the value on-demand if stale.
Namespace types: VProf
Parameters:
self (VProf) : VProf Profile object.
Returns:
medianIndex series int The index of the bucket containing the Median.
medianPrice series float The Median price of the profile.
method getVwap(self)
Get the profile's VWAP and its bucket index. Calculates the value on-demand if stale.
Namespace types: VProf
Parameters:
self (VProf) : VProf Profile object.
Returns:
vwapIndex series int The index of the bucket containing the VWAP.
vwapPrice series float The Volume Weighted Average Price of the profile.
method getStdDev(self)
Get the profile's volume-weighted standard deviation. Calculates the value on-demand if stale.
Namespace types: VProf
Parameters:
self (VProf) : VProf Profile object.
Returns: series float The Standard deviation of the profile.
method getSkewness(self)
Get the profile's skewness. Calculates the value on-demand if stale.
Namespace types: VProf
Parameters:
self (VProf) : VProf Profile object.
Returns: series float The Skewness of the profile.
method getKurtosis(self)
Get the profile's excess kurtosis. Calculates the value on-demand if stale.
Namespace types: VProf
Parameters:
self (VProf) : VProf Profile object.
Returns: series float The Kurtosis of the profile.
method getSegments(self)
Get the profile's fundamental unimodal segments. Calculates on-demand if stale.
Uses a parameter-free, pivot-based recursive algorithm.
Namespace types: VProf
Parameters:
self (VProf) : VProf The profile object.
Returns: matrix A 2-column matrix where each row is a segment's [start, end] pair of bucket indices.
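A short modality-detection sketch (the one-row-per-segment layout of the returned matrix is an assumption consistent with the description above):
//@version=6
indicator("Profile modality (sketch)")
import AustrianTradingMachine/LibVPrf/1 as vp  // assumed path
var profile = vp.create(50, high, low, true, 70)
profile.addBar(0)
segs = profile.getSegments()        // assumed: one row per unimodal segment
nSegments = matrix.rows(segs)
plot(nSegments, "Unimodal segments")
bgcolor(nSegments >= 2 ? color.new(color.red, 90) : na)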
method getCvd(self)
Cumulative Volume Delta (CVD) like metric over all buckets.
Namespace types: VProf
Parameters:
self (VProf) : VProf Profile object.
Returns:
cvd series float The final Cumulative Volume Delta (Total Buy Vol - Total Sell Vol).
cvdHi series float The running high-water mark of the CVD as volume was added.
cvdLo series float The running low-water mark of the CVD as volume was added.
VProf
VProf Bucketed Buy/Sell volume profile plus meta information.
Fields:
buckets (series int) : int Number of price buckets (granularity ≥1)
rangeUp (series float) : float Upper price range (absolute)
rangeLo (series float) : float Lower price range (absolute)
dynamic (series bool) : bool Flag for dynamic adaption of profile ranges
valueArea (series int) : int Percentage of total volume to include in the Value Area (1..100)
allot (series AllotMode) : AllotMode Allocation mode `classic` or `pdf`
estimator (series PriceEst enum from AustrianTradingMachine/LibBrSt/1) : LibBrSt.PriceEst Price density model used when `allot = pdf`
cdfSteps (series int) : int Simpson integration resolution (even ≥2)
split (series SplitMode) : SplitMode Buy/Sell split strategy per bar
trendLen (series int) : int Look‑back length for trend factor (≥1)
maxBkt (series int) : int User-defined number of buckets (unclamped)
aBuy (array) : array Buy volume per bucket
aSell (array) : array Sell volume per bucket
cvd (series float) : float Final Cumulative Volume Delta (Total Buy Vol - Total Sell Vol).
cvdHi (series float) : float Running high-water mark of the CVD as volume was added.
cvdLo (series float) : float Running low-water mark of the CVD as volume was added.
poc (series int) : int Index of max‑volume bucket (POC). Is `na` until calculated.
vaUp (series int) : int Index of upper Value‑Area bound. Is `na` until calculated.
vaLo (series int) : int Index of lower value‑Area bound. Is `na` until calculated.
median (series float) : float Median price of the volume distribution. Is `na` until calculated.
vwap (series float) : float Profile VWAP (Volume Weighted Average Price). Is `na` until calculated.
stdDev (series float) : float Standard Deviation of volume around the VWAP. Is `na` until calculated.
skewness (series float) : float Skewness of the volume distribution. Is `na` until calculated.
kurtosis (series float) : float Excess Kurtosis of the volume distribution. Is `na` until calculated.
segments (matrix) : matrix A 2-column matrix where each row is a segment's [start, end] pair of bucket indices. Is `na` until calculated.
LibBrSt
Library "LibBrSt"
This is a library for quantitative analysis, designed to estimate
the statistical properties of price movements *within* a single
OHLC bar, without requiring access to tick data. It provides a
suite of estimators based on various statistical and econometric
models, allowing for analysis of intra-bar volatility and
price distribution.
Key Capabilities:
1. **Price Distribution Models (`PriceEst`):** Provides a selection
of estimators that model intra-bar price action as a probability
distribution over the range. This allows for the
calculation of the intra-bar mean (`priceMean`) and standard
deviation (`priceStdDev`) in absolute price units. Models include:
- **Symmetric Models:** `uniform`, `triangular`, `arcsine`,
`betaSym`, and `t4Sym` (Student-t with fat tails).
- **Skewed Models:** `betaSkew` and `t4Skew`, which adjust
their shape based on the Open/Close position.
- **Model Assumptions:** The skewed models rely on specific
internal constants. `betaSkew` uses a fixed concentration
parameter (`BETA_SKEW_CONCENTRATION = 4.0`), and `t4Sym`/`t4Skew`
use a heuristic scaling factor (`T4_SHAPE_FACTOR`)
to map the distribution.
2. **Econometric Log-Return Estimators (`LogEst`):** Includes a set of
econometric estimators for calculating the volatility (`logStdDev`)
and drift (`logMean`) of logarithmic returns within a single bar.
These are unit-less measures. Models include:
- **Parkinson (1980):** A High-Low range estimator.
- **Garman-Klass (1980):** An OHLC-based estimator.
- **Rogers-Satchell (1991):** An OHLC estimator that accounts
for non-zero drift.
3. **Distribution Analysis (PDF/CDF):** Provides functions to work
with the Probability Density Function (`pricePdf`) and
Cumulative Distribution Function (`priceCdf`) of the
chosen price model.
- **Note on `priceCdf`:** This function uses analytical (exact)
calculations for the `uniform`, `triangular`, and `arcsine`
models. For all other models (e.g., `betaSkew`, `t4Skew`),
it uses **numerical integration (Simpson's rule)** as
an approximation of the cumulative probability.
4. **Mathematical Functions:** The library's Beta distribution
models (`betaSym`, `betaSkew`) are supported by an internal
implementation of the natural log-gamma function, which is
based on the Lanczos approximation.
---
**DISCLAIMER**
This library is provided "AS IS" and for informational and
educational purposes only. It does not constitute financial,
investment, or trading advice.
The author assumes no liability for any errors, inaccuracies,
or omissions in the code. Using this library to build
trading indicators or strategies is entirely at your own risk.
As a developer using this library, you are solely responsible
for the rigorous testing, validation, and performance of any
scripts you create based on these functions. The author shall
not be held liable for any financial losses incurred directly
or indirectly from the use of this library or any scripts
derived from it.
priceStdDev(estimator, offset)
Estimates **σ̂** (standard deviation) *in price units* for the current
bar, according to the chosen `PriceEst` distribution assumption.
Parameters:
estimator (series PriceEst) : series PriceEst Distribution assumption (see enum).
offset (int) : series int To offset the calculated bar
Returns: series float σ̂ ≥ 0 ; `na` if undefined (e.g. zero range).
priceMean(estimator, offset)
Estimates **μ̂** (mean price) for the chosen `PriceEst` within the
current bar.
Parameters:
estimator (series PriceEst) : series PriceEst Distribution assumption (see enum).
offset (int) : series int To offset the calculated bar
Returns: series float μ̂ in price units.
pricePdf(estimator, price, offset)
Probability-density under the chosen `PriceEst` model.
**Returns 0** when `price` is outside the current bar's range.
Parameters:
estimator (series PriceEst) : series PriceEst Distribution assumption (see enum).
price (float) : series float Price level to evaluate.
offset (int) : series int To offset the calculated bar
Returns: series float Density value.
priceCdf(estimator, upper, lower, steps, offset)
Cumulative probability **between** `upper` and `lower` under
the chosen `PriceEst` model. Outside-bar regions contribute zero.
Uses a fast, analytical calculation for Uniform, Triangular, and
Arcsine distributions, and defaults to numerical integration
(Simpson's rule) for more complex models.
Parameters:
estimator (series PriceEst) : series PriceEst Distribution assumption (see enum).
upper (float) : series float Upper Integration Boundary.
lower (float) : series float Lower Integration Boundary.
steps (int) : series int # of sub-intervals for numerical integration (if used).
offset (int) : series int To offset the calculated bar.
Returns: series float Probability mass ∈ [0, 1].
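For instance, a small sketch estimating how much of the modeled intra-bar distribution falls inside the candle body (the PriceEst member name comes from the model list above; the import path is the one cited by LibVPrf):
//@version=6
indicator("Intra-bar body mass (sketch)")
import AustrianTradingMachine/LibBrSt/1 as brst
est      = brst.PriceEst.triangular
bodyHi   = math.max(open, close)
bodyLo   = math.min(open, close)
// Analytic CDF for the triangular model; Simpson integration is used for the beta/t models.
bodyMass = brst.priceCdf(est, bodyHi, bodyLo, 20, 0)
plot(bodyMass, "P(price in body)", color.blue)
plot(brst.priceStdDev(est, 0), "Sigma (price units)", color.gray, display = display.data_window)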
logStdDev(estimator, offset)
Estimates **σ̂** (standard deviation) of *log-returns* for the current bar.
Parameters:
estimator (series LogEst) : series LogEst Distribution assumption (see enum).
offset (int) : series int To offset the calculated bar
Returns: series float σ̂ (unit-less); `na` if undefined.
logMean(estimator, offset)
Estimates μ̂ (mean log-return / drift) for the chosen `LogEst`.
The returned value is consistent with the assumptions of the
selected volatility estimator.
Parameters:
estimator (series LogEst) : series LogEst Distribution assumption (see enum).
offset (int) : series int To offset the calculated bar
Returns: series float μ̂ (unit-less log-return).
LibWght
Library "LibWght"
This is a library of mathematical and statistical functions
designed for quantitative analysis in Pine Script. Its core
principle is the integration of a custom weighting series
(e.g., volume) into a wide array of standard technical
analysis calculations.
Key Capabilities:
1. **Universal Weighting:** All exported functions accept a `weight`
parameter. This allows standard calculations (like moving
averages, RSI, and standard deviation) to be influenced by an
external data series, such as volume or tick count.
2. **Weighted Averages and Indicators:** Includes a comprehensive
collection of weighted functions:
- **Moving Averages:** `wSma`, `wEma`, `wWma`, `wRma` (Wilder's),
`wHma` (Hull), and `wLSma` (Least Squares / Linear Regression).
- **Oscillators & Ranges:** `wRsi`, `wAtr` (Average True Range),
`wTr` (True Range), and `wR` (High-Low Range).
3. **Volatility Decomposition:** Provides functions to decompose
total variance into distinct components for market analysis.
- **Two-Way Decomposition (`wTotVar`):** Separates variance into
**between-bar** (directional) and **within-bar** (noise)
components.
- **Three-Way Decomposition (`wLRTotVar`):** Decomposes variance
relative to a linear regression into **Trend** (explained by
the LR slope), **Residual** (mean-reversion around the
LR line), and **Within-Bar** (noise) components.
- **Local Volatility (`wLRLocTotStdDev`):** Measures the total
"noise" (within-bar + residual) around the trend line.
4. **Weighted Statistics and Regression:** Provides a robust
function for Weighted Linear Regression (`wLinReg`) and a
full suite of related statistical measures:
- **Between-Bar Stats:** `wBtwVar`, `wBtwStdDev`, `wBtwStdErr`.
- **Residual Stats:** `wResVar`, `wResStdDev`, `wResStdErr`.
5. **Fallback Mechanism:** All functions are designed for reliability.
If the total weight over the lookback period is zero (e.g., in
a no-volume period), the algorithms automatically fall back to
their unweighted, uniform-weight equivalents (e.g., `wSma`
becomes a standard `ta.sma`), preventing errors and ensuring
continuous calculation.
---
**DISCLAIMER**
This library is provided "AS IS" and for informational and
educational purposes only. It does not constitute financial,
investment, or trading advice.
The author assumes no liability for any errors, inaccuracies,
or omissions in the code. Using this library to build
trading indicators or strategies is entirely at your own risk.
As a developer using this library, you are solely responsible
for the rigorous testing, validation, and performance of any
scripts you create based on these functions. The author shall
not be held liable for any financial losses incurred directly
or indirectly from the use of this library or any scripts
derived from it.
wSma(source, weight, length)
Weighted Simple Moving Average (linear kernel).
Parameters:
source (float) : series float Data to average.
weight (float) : series float Weight series.
length (int) : series int Look-back length ≥ 1.
Returns: series float Linear-kernel weighted mean; falls back to
the arithmetic mean if Σweight = 0.
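A minimal volume-weighted example (the import path mirrors the other libraries in this family and is an assumption):
//@version=6
indicator("Volume-weighted SMA (sketch)", overlay = true)
import AustrianTradingMachine/LibWght/1 as w  // assumed path
len = input.int(20, "Length", minval = 1)
plot(w.wSma(close, volume, len), "wSMA(close, volume)", color.orange)
plot(ta.sma(close, len), "SMA", color.gray)  // unweighted reference; also the fallback when Σw = 0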
wEma(source, weight, length)
Weighted EMA (exponential kernel).
Parameters:
source (float) : series float Data to average.
weight (float) : series float Weight series.
length (simple int) : simple int Look-back length ≥ 1.
Returns: series float Exponential-kernel weighted mean; falls
back to classic EMA if Σweight = 0.
wWma(source, weight, length)
Weighted WMA (linear kernel).
Parameters:
source (float) : series float Data to average.
weight (float) : series float Weight series.
length (int) : series int Look-back length ≥ 1.
Returns: series float Linear-kernel weighted mean; falls back to
classic WMA if Σweight = 0.
wRma(source, weight, length)
Weighted RMA (Wilder kernel, α = 1/len).
Parameters:
source (float) : series float Data to average.
weight (float) : series float Weight series.
length (simple int) : simple int Look-back length ≥ 1.
Returns: series float Wilder-kernel weighted mean; falls back to
classic RMA if Σweight = 0.
wHma(source, weight, length)
Weighted HMA (linear kernel).
Parameters:
source (float) : series float Data to average.
weight (float) : series float Weight series.
length (int) : series int Look-back length ≥ 1.
Returns: series float Linear-kernel weighted mean; falls back to
classic HMA if Σweight = 0.
wRsi(source, weight, length)
Weighted Relative Strength Index.
Parameters:
source (float) : series float Price series.
weight (float) : series float Weight series.
length (simple int) : simple int Look-back length ≥ 1.
Returns: series float Weighted RSI; uniform if Σw = 0.
wAtr(tr, weight, length)
Weighted ATR (Average True Range).
Implemented as WRMA on *true range*.
Parameters:
tr (float) : series float True Range series.
weight (float) : series float Weight series.
length (simple int) : simple int Look-back length ≥ 1.
Returns: series float Weighted ATR; uniform weights if Σw = 0.
wTr(tr, weight, length)
Weighted True Range over a window.
Parameters:
tr (float) : series float True Range series.
weight (float) : series float Weight series.
length (int) : series int Look-back length ≥ 1.
Returns: series float Weighted mean of TR; uniform if Σw = 0.
wR(r, weight, length)
Weighted High-Low Range over a window.
Parameters:
r (float) : series float High-Low per bar.
weight (float) : series float Weight series.
length (int) : series int Look-back length ≥ 1.
Returns: series float Weighted mean of range; uniform if Σw = 0.
wBtwVar(source, weight, length, biased)
Weighted Between Variance (biased/unbiased).
Parameters:
source (float) : series float Data series.
weight (float) : series float Weight series.
length (int) : series int Look-back length ≥ 2.
biased (bool) : series bool true → population (biased); false → sample.
Returns:
variance series float The calculated between-bar variance (σ²btw), either biased or unbiased.
sumW series float The sum of weights over the lookback period (Σw).
sumW2 series float The sum of squared weights over the lookback period (Σw²).
wBtwStdDev(source, weight, length, biased)
Weighted Between Standard Deviation.
Parameters:
source (float) : series float Data series.
weight (float) : series float Weight series.
length (int) : series int Look-back length ≥ 2.
biased (bool) : series bool true → population (biased); false → sample.
Returns: series float σbtw; uniform if Σw = 0.
wBtwStdErr(source, weight, length, biased)
Weighted Between Standard Error.
Parameters:
source (float) : series float Data series.
weight (float) : series float Weight series.
length (int) : series int Look-back length ≥ 2.
biased (bool) : series bool true → population (biased); false → sample.
Returns: series float √(σ²btw / N_eff); uniform if Σw = 0.
wTotVar(mu, sigma, weight, length, biased)
Weighted Total Variance (= between-group + within-group).
Useful when each bar represents an aggregate with its own
mean* and pre-estimated σ (e.g., second-level ranges inside a
1-minute bar). Assumes the *weight* series applies to both the
group means and their σ estimates.
Parameters:
mu (float) : series float Group means (e.g., HL2 of 1-second bars).
sigma (float) : series float Pre-estimated σ of each group (same basis).
weight (float) : series float Weight series (volume, ticks, …).
length (int) : series int Look-back length ≥ 2.
biased (bool) : series bool true → population (biased); false → sample.
Returns:
varBtw series float The between-bar variance component (σ²btw).
varWtn series float The within-bar variance component (σ²wtn).
sumW series float The sum of weights over the lookback period (Σw).
sumW2 series float The sum of squared weights over the lookback period (Σw²).
wTotStdDev(mu, sigma, weight, length, biased)
Weighted Total Standard Deviation.
Parameters:
mu (float) : series float Group means (e.g., HL2 of 1-second bars).
sigma (float) : series float Pre-estimated σ of each group (same basis).
weight (float) : series float Weight series (volume, ticks, …).
length (int) : series int Look-back length ≥ 2.
biased (bool) : series bool true → population (biased); false → sample.
Returns: series float σtot.
wTotStdErr(mu, sigma, weight, length, biased)
Weighted Total Standard Error.
SE = √( total variance / N_eff ) with the same effective sample
size logic as `wster()`.
Parameters:
mu (float) : series float Group means (e.g., HL2 of 1-second bars).
sigma (float) : series float Pre-estimated σ of each group (same basis).
weight (float) : series float Weight series (volume, ticks, …).
length (int) : series int Look-back length ≥ 2.
biased (bool) : series bool true → population (biased); false → sample.
Returns: series float √(σ²tot / N_eff).
wLinReg(source, weight, length)
Weighted Linear Regression.
Parameters:
source (float) : series float Data series.
weight (float) : series float Weight series.
length (int) : series int Look-back length ≥ 2.
Returns:
mid series float The estimated value of the regression line at the most recent bar.
slope series float The slope of the regression line.
intercept series float The intercept of the regression line.
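For example, a sketch that plots the weighted regression value and a one-bar projection from its slope (same assumed import path):
//@version=6
indicator("Weighted linear regression (sketch)", overlay = true)
import AustrianTradingMachine/LibWght/1 as w  // assumed path
len = input.int(50, "Length", minval = 2)
[mid, slope, intercept] = w.wLinReg(close, volume, len)
plot(mid, "Weighted LR value", color.teal)
plot(mid + slope, "One-bar projection", color.new(color.teal, 50))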
wResVar(source, weight, midLine, slope, length, biased)
Weighted Residual Variance.
linear regression – optionally biased (population) or
unbiased (sample).
Parameters:
source (float) : series float Data series.
weight (float) : series float Weighting series (volume, etc.).
midLine (float) : series float Regression value at the last bar.
slope (float) : series float Slope per bar.
length (int) : series int Look-back length ≥ 2.
biased (bool) : series bool true → population variance (σ²_P), denominator ≈ N_eff.
false → sample variance (σ²_S), denominator ≈ N_eff - 2.
(Adjusts for 2 degrees of freedom lost to the regression).
Returns:
variance series float The calculated residual variance (σ²res), either biased or unbiased.
sumW series float The sum of weights over the lookback period (Σw).
sumW2 series float The sum of squared weights over the lookback period (Σw²).
wResStdDev(source, weight, midLine, slope, length, biased)
Weighted Residual Standard Deviation.
Parameters:
source (float) : series float Data series.
weight (float) : series float Weight series.
midLine (float) : series float Regression value at the last bar.
slope (float) : series float Slope per bar.
length (int) : series int Look-back length ≥ 2.
biased (bool) : series bool true → population (biased); false → sample.
Returns: series float σres; uniform if Σw = 0.
wResStdErr(source, weight, midLine, slope, length, biased)
Weighted Residual Standard Error.
Parameters:
source (float) : series float Data series.
weight (float) : series float Weight series.
midLine (float) : series float Regression value at the last bar.
slope (float) : series float Slope per bar.
length (int) : series int Look-back length ≥ 2.
biased (bool) : series bool true → population (biased); false → sample.
Returns: series float √(σ²res / N_eff); uniform if Σw = 0.
wLRTotVar(mu, sigma, weight, midLine, slope, length, biased)
Weighted Linear-Regression Total Variance **around the
window’s weighted mean μ**.
σ²_tot = E_w[σ_i²] ⟶ *within-group variance*
+ Var_w[r_i] ⟶ *residual variance*
+ Var_w[ŷ_i] ⟶ *trend variance*
where each bar i in the look-back window contributes
m_i = *mean* (e.g. 1-sec HL2)
σ_i = *sigma* (pre-estimated intrabar σ)
w_i = *weight* (volume, ticks, …)
ŷ_i = b₀ + b₁·x (value of the weighted LR line)
r_i = m_i − ŷ_i (orthogonal residual)
Parameters:
mu (float) : series float Per-bar mean m_i.
sigma (float) : series float Pre-estimated σ_i of each bar.
weight (float) : series float Weight series w_i (≥ 0).
midLine (float) : series float Regression value at the latest bar (ŷₙ₋₁).
slope (float) : series float Slope b₁ of the regression line.
length (int) : series int Look-back length ≥ 2.
biased (bool) : series bool true → population; false → sample.
Returns:
varRes series float The residual variance component (σ²res).
varWtn series float The within-bar variance component (σ²wtn).
varTrd series float The trend variance component (σ²trd), explained by the linear regression.
sumW series float The sum of weights over the lookback period (Σw).
sumW2 series float The sum of squared weights over the lookback period (Σw²).
wLRTotStdDev(mu, sigma, weight, midLine, slope, length, biased)
Weighted Linear-Regression Total Standard Deviation.
Parameters:
mu (float) : series float Per-bar mean m_i.
sigma (float) : series float Pre-estimated σ_i of each bar.
weight (float) : series float Weight series w_i (≥ 0).
midLine (float) : series float Regression value at the latest bar (ŷₙ₋₁).
slope (float) : series float Slope b₁ of the regression line.
length (int) : series int Look-back length ≥ 2.
biased (bool) : series bool true → population; false → sample.
Returns: series float √(σ²tot).
wLRTotStdErr(mu, sigma, weight, midLine, slope, length, biased)
Weighted Linear-Regression Total Standard Error.
SE = √( σ²_tot / N_eff ) with N_eff = (Σw)² / Σw² (like in wster()).
Parameters:
mu (float) : series float Per-bar mean m_i.
sigma (float) : series float Pre-estimated σ_i of each bar.
weight (float) : series float Weight series w_i (≥ 0).
midLine (float) : series float Regression value at the latest bar (ŷₙ₋₁).
slope (float) : series float Slope b₁ of the regression line.
length (int) : series int Look-back length ≥ 2.
biased (bool) : series bool true → population; false → sample.
Returns: series float √((σ²res + σ²wtn + σ²trd) / N_eff).
wLRLocTotStdDev(mu, sigma, weight, midLine, slope, length, biased)
Weighted Linear-Regression Local Total Standard Deviation.
Measures the total "noise" (within-bar + residual) around the trend.
Parameters:
mu (float) : series float Per-bar mean m_i.
sigma (float) : series float Pre-estimated σ_i of each bar.
weight (float) : series float Weight series w_i (≥ 0).
midLine (float) : series float Regression value at the latest bar (ŷₙ₋₁).
slope (float) : series float Slope b₁ of the regression line.
length (int) : series int Look-back length ≥ 2.
biased (bool) : series bool true → population; false → sample.
Returns: series float √(σ²wtn + σ²res).
wLRLocTotStdErr(mu, sigma, weight, midLine, slope, length, biased)
Weighted Linear-Regression Local Total Standard Error.
Parameters:
mu (float) : series float Per-bar mean m_i.
sigma (float) : series float Pre-estimated σ_i of each bar.
weight (float) : series float Weight series w_i (≥ 0).
midLine (float) : series float Regression value at the latest bar (ŷₙ₋₁).
slope (float) : series float Slope b₁ of the regression line.
length (int) : series int Look-back length ≥ 2.
biased (bool) : series bool true → population; false → sample.
Returns: series float √((σ²wtn + σ²res) / N_eff).
wLSma(source, weight, length)
Weighted Least Square Moving Average.
Parameters:
source (float) : series float Data series.
weight (float) : series float Weight series.
length (int) : series int Look-back length ≥ 2.
Returns: series float Least square weighted mean. Falls back
to unweighted regression if Σw = 0.
Mirpapa_Lib_box
Library "Mirpapa_Lib_box"
AddFVG(boxes, htfTimeframe, htfBarIndex, top, bottom, isBull, _text)
AddFVG
@description Adds FVG box data
Parameters:
boxes (array) : array Box array
htfTimeframe (string) : string HTF timeframe ("60", "240", "D")
htfBarIndex (int) : int HTF bar_index
top (float) : float Upper price
bottom (float) : float Lower price
isBull (bool) : bool Direction (true = bullish, false = bearish)
_text (string)
Returns: void
AddOB(boxes, htfTimeframe, htfBarIndex, top, bottom, isBull, _text)
AddOB
@description Adds OB box data
Parameters:
boxes (array) : array Box array
htfTimeframe (string) : string HTF timeframe
htfBarIndex (int) : int HTF bar_index
top (float) : float Upper price
bottom (float) : float Lower price
isBull (bool) : bool Direction
_text (string)
Returns: void
AddBB(boxes, htfTimeframe, htfBarIndex, top, bottom, isBull, _text)
AddBB
@description Adds BB box data
Parameters:
boxes (array) : array Box array
htfTimeframe (string) : string HTF timeframe
htfBarIndex (int) : int HTF bar_index
top (float) : float Upper price
bottom (float) : float Lower price
isBull (bool) : bool Direction
_text (string)
Returns: void
AddRB(boxes, htfTimeframe, htfBarIndex, top, bottom, isBull, _text)
AddRB
@description Adds RB box data
Parameters:
boxes (array) : array Box array
htfTimeframe (string) : string HTF timeframe
htfBarIndex (int) : int HTF bar_index
top (float) : float Upper price
bottom (float) : float Lower price
isBull (bool) : bool Direction
_text (string)
Returns: void
ProcessBoxes(boxes, boxType, colorBull, colorBear, closeCount, useLine, textAlignH, textAlignV, closeColor)
ProcessBoxes
@description Processes the box array (create → extend → touch → close)
Parameters:
boxes (array) : array Box array
boxType (string) : string Box type ("FVG", "OB", "BB", "RB")
colorBull (color) : color Bullish color
colorBear (color) : color Bearish color
closeCount (int) : int Number of touches before the box is closed
useLine (bool) : bool Whether to draw the midline
textAlignH (string) : string Horizontal text alignment
textAlignV (string) : string Vertical text alignment
closeColor (color) : color Color applied when the box is closed
Returns: void
GetActiveBoxCount(boxes)
GetActiveBoxCount
@description Returns the number of active boxes
Parameters:
boxes (array) : array Box array
Returns: int Number of active boxes
ClearInactiveBoxes(boxes)
ClearInactiveBoxes
@description Removes inactive boxes (to save memory)
Parameters:
boxes (array) : array Box array
Returns: void
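A hedged usage sketch (the import path is a placeholder, and passing the built-in text.align_* constants for textAlignH/textAlignV is an assumption):
//@version=6
indicator("FVG boxes (sketch)", overlay = true, max_boxes_count = 500)
// Placeholder import path; replace with the actual publisher/version.
import PUBLISHER/Mirpapa_Lib_box/1 as mp
var array<mp.BoxData> boxes = array.new<mp.BoxData>()
// Simple 3-bar fair value gaps, stored with the chart timeframe as the "HTF" fields.
if low > high[2]
    mp.AddFVG(boxes, timeframe.period, bar_index, low, high[2], true, "FVG")
if high < low[2]
    mp.AddFVG(boxes, timeframe.period, bar_index, low[2], high, false, "FVG")
// Create, extend, count touches, and close the stored boxes.
mp.ProcessBoxes(boxes, "FVG", color.new(color.green, 80), color.new(color.red, 80), 2, true, text.align_right, text.align_center, color.new(color.gray, 80))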
BoxData
BoxData
Fields:
_isActive (series bool) : Box active state
_isBull (series bool) : Direction (true = bullish, false = bearish)
_boxTop (series float) : Upper price
_boxBot (series float) : Lower price
_basePoint (series float) : Reference point for touch detection
_stage (series int) : Touch counter
_type (series string) : Box type ("FVG", "OB", "BB", "RB")
_htfTimeframe (series string) : HTF timeframe ("60", "240", "D")
_htfBarIndex (series int) : HTF bar_index
_text (series string) : User-supplied text
_box (series box) : Box object (created by ProcessBoxes)
_line (series line) : Line object (created by ProcessBoxes)
Mirpapa_Lib_Divergence
Library "Mirpapa_Lib_Divergence"
Divergence detection and visualization library (general-purpose design)
newPivot(bar, priceVal, indVal)
Creates a pivot point
Parameters:
bar (int) : Bar index
priceVal (float) : Price
indVal (float) : Indicator value
Returns: PivotPoint
newDivSettings(pivotLen, maxStore, maxShow)
Creates divergence settings
Parameters:
pivotLen (int) : Candles on each side of a pivot
maxStore (int) : Number of pivots to store
maxShow (int) : Number of lines to display
Returns: DivergenceSettings
emptyDivResult()
Empty divergence result
Returns: A DivergenceResult with nothing detected
checkPivotHigh(length, source)
Detects a pivot high
Parameters:
length (int) : Number of candles to compare on each side
source (float) : Data to compare (indicator value)
Returns: Pivot value or na
checkPivotLow(length, source)
Detects a pivot low
Parameters:
length (int) : Number of candles to compare on each side
source (float) : Data to compare (indicator value)
Returns: Pivot value or na
addPivotToArray(pivotArray, pivot, maxSize)
Adds a pivot to the array (FIFO)
Parameters:
pivotArray (array) : Pivot array
pivot (PivotPoint) : Pivot to add
maxSize (int) : Maximum size
checkBullishDivergence(pivotArray)
Checks for a bullish divergence
Parameters:
pivotArray (array) : Array of pivot lows
Returns: DivergenceResult
checkBearishDivergence(pivotArray)
Checks for a bearish divergence
Parameters:
pivotArray (array) : Array of pivot highs
Returns: DivergenceResult
createDivLine(result, lineColor, isOverlay)
Creates a divergence line
Parameters:
result (DivergenceResult) : DivergenceResult
lineColor (color) : Line color
isOverlay (bool) : true draws on price, false draws on the indicator
Returns:
cleanupLines(lineArray, labelArray, maxLines)
Cleans up old lines/labels
Parameters:
lineArray (array) : Line array
labelArray (array) : Label array
maxLines (int) : Maximum number to keep
addLineAndCleanup(lineArray, labelArray, newLine, newLabel, maxLines)
Adds a line/label and cleans up automatically
Parameters:
lineArray (array) : Line array
labelArray (array) : Label array
newLine (line) : New line
newLabel (label) : New label
maxLines (int) : Maximum count
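A hedged usage sketch (the import path is a placeholder; the assumption that a confirmed pivot sits pivotLength bars back follows standard pivot semantics and is not stated in this listing):
//@version=6
indicator("RSI bullish divergence (sketch)")
// Placeholder import path; replace with the actual publisher/version.
import PUBLISHER/Mirpapa_Lib_Divergence/1 as dv
pivotLen = input.int(5, "Pivot length", minval = 1)
rsiVal   = ta.rsi(close, 14)
var array<dv.PivotPoint> lows = array.new<dv.PivotPoint>()
// Store confirmed indicator pivot lows together with the price at the pivot bar.
pl = dv.checkPivotLow(pivotLen, rsiVal)
if not na(pl)
    dv.addPivotToArray(lows, dv.newPivot(bar_index - pivotLen, low[pivotLen], pl), 10)
res = dv.checkBullishDivergence(lows)
plot(rsiVal, "RSI")
plotshape(res.detected, "Bullish divergence", shape.triangleup, location.bottom, color.green)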
PivotPoint
Stores pivot data
Fields:
barIndex (series int) : Bar index
price (series float) : Close price
indicatorValue (series float) : Indicator value
DivergenceSettings
Divergence settings
Fields:
pivotLength (series int) : Candles on each side of a pivot
maxPivotsStore (series int) : Maximum number of pivots to store
maxLinesShow (series int) : Maximum number of lines to display
DivergenceResult
Divergence detection result
Fields:
detected (series bool) : Whether a divergence was detected
isBullish (series bool) : true = bullish, false = bearish
bar1 (series int) : Bar index of the first pivot
value1_price (series float) : First price
value1_ind (series float) : First indicator value
bar2 (series int) : Bar index of the second pivot
value2_price (series float) : Second price
value2_ind (series float) : Second indicator value
Mirpapa_Lib_MACD
Library "Mirpapa_Lib_MACD"
Library for MACD calculation and cross checks
calc_smma(src, len)
Calculates the SMMA (Smoothed Moving Average)
Parameters:
src (float) : Source data
len (simple int) : Length
Returns: SMMA value
calc_zlema(src, length)
Calculates the ZLEMA (Zero Lag EMA)
Parameters:
src (float) : Source data
length (simple int) : Length
Returns: ZLEMA value
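A small sketch using the two smoothing helpers (the import path is a placeholder):
//@version=6
indicator("SMMA vs ZLEMA (sketch)", overlay = true)
// Placeholder import path; replace with the actual publisher/version.
import PUBLISHER/Mirpapa_Lib_MACD/1 as mc
len = input.int(34, "Length", minval = 1)
plot(mc.calc_zlema(close, len), "ZLEMA", color.fuchsia)
plot(mc.calc_smma(close, len), "SMMA", color.gray)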
checkMacdCross(lengthMA, lengthSignal, src, enabled)
Checks for a MACD crossover
Parameters:
lengthMA (simple int) : MACD length
lengthSignal (int) : Signal length
src (float) : Source (default: hlc3)
enabled (bool) : Whether the calculation is enabled (default: true)
Returns:
LogNormal
Library "LogNormal"
A collection of functions used to model skewed distributions as log-normal.
Prices are commonly modeled using log-normal distributions (e.g., Black-Scholes) because they exhibit multiplicative changes with long tails: skewed exponential growth and high variance. This approach is particularly useful for understanding price behavior and estimating risk, assuming continuously compounded returns are normally distributed.
Because log-space analysis is not as direct as simply using math.log(price), this library extends the Error Functions library to make working with log-normally distributed data as simple as possible.
- - -
QUICK START
Import library into your project
Initialize model with a mean and standard deviation
Pass model params between methods to compute various properties
var LogNorm model = LN.init(arr.avg(), arr.stdev()) // Assumes the library is imported as LN
var mode = model.mode()
Outputs from the model can be adjusted to better fit the data.
var Quantile data = arr.quantiles()
var more_accurate_mode = mode.fit(model, data) // Fits value from model to data
Inputs to the model can also be adjusted to better fit the data.
datum = 123.45
model_equivalent_datum = datum.fit(data, model) // Fits value from data to the model
area_from_zero_to_datum = model.cdf(model_equivalent_datum)
- - -
TYPES
There are two requisite UDTs: LogNorm and Quantile. They are used to pass parameters between functions and are set automatically (see Type Management).
LogNorm
Object for log space parameters and linear space quantiles .
Fields:
mu (float) : Log space mu ( µ ).
sigma (float) : Log space sigma ( σ ).
variance (float) : Log space variance ( σ² ).
quantiles (Quantile) : Linear space quantiles.
Quantile
Object for linear quantiles, most similar to a seven-number summary .
Fields:
Q0 (float) : Smallest Value
LW (float) : Lower Whisker Endpoint
LC (float) : Lower Whisker Crosshatch
Q1 (float) : First Quartile
Q2 (float) : Second Quartile
Q3 (float) : Third Quartile
UC (float) : Upper Whisker Crosshatch
UW (float) : Upper Whisker Endpoint
Q4 (float) : Largest Value
IQR (float) : Interquartile Range
MH (float) : Midhinge
TM (float) : Trimean
MR (float) : Mid-Range
- - -
TYPE MANAGEMENT
These functions reliably initialize and update the UDTs. Because parameterization is interdependent, avoid setting the LogNorm and Quantile fields directly .
init(mean, stdev, variance)
Initializes a LogNorm object.
Parameters:
mean (float) : Linearly measured mean.
stdev (float) : Linearly measured standard deviation.
variance (float) : Linearly measured variance.
Returns: LogNorm Object
set(ln, mean, stdev, variance)
Transforms linear measurements into log space parameters for a LogNorm object.
Parameters:
ln (LogNorm) : Object containing log space parameters.
mean (float) : Linearly measured mean.
stdev (float) : Linearly measured standard deviation.
variance (float) : Linearly measured variance.
Returns: LogNorm Object
quantiles(arr)
Gets empirical quantiles from an array of floats.
Parameters:
arr (array) : Float array object.
Returns: Quantile Object
- - -
DESCRIPTIVE STATISTICS
Using only the initialized LogNorm parameters, these functions compute a model's central tendency and standardized moments.
mean(ln)
Computes the linear mean from log space parameters.
Parameters:
ln (LogNorm) : Object containing log space parameters.
Returns: Between 0 and ∞
median(ln)
Computes the linear median from log space parameters.
Parameters:
ln (LogNorm) : Object containing log space parameters.
Returns: Between 0 and ∞
mode(ln)
Computes the linear mode from log space parameters.
Parameters:
ln (LogNorm) : Object containing log space parameters.
Returns: Between 0 and ∞
variance(ln)
Computes the linear variance from log space parameters.
Parameters:
ln (LogNorm) : Object containing log space parameters.
Returns: Between 0 and ∞
skewness(ln)
Computes the linear skewness from log space parameters.
Parameters:
ln (LogNorm) : Object containing log space parameters.
Returns: Between 0 and ∞
kurtosis(ln, excess)
Computes the linear kurtosis from log space parameters.
Parameters:
ln (LogNorm) : Object containing log space parameters.
excess (bool) : Excess Kurtosis (true) or regular Kurtosis (false).
Returns: Between 0 and ∞
hyper_skewness(ln)
Computes the linear hyper skewness from log space parameters.
Parameters:
ln (LogNorm) : Object containing log space parameters.
Returns: Between 0 and ∞
hyper_kurtosis(ln, excess)
Computes the linear hyper kurtosis from log space parameters.
Parameters:
ln (LogNorm) : Object containing log space parameters.
excess (bool) : Excess Hyper Kurtosis (true) or regular Hyper Kurtosis (false).
Returns: Between 0 and ∞
- - -
DISTRIBUTION FUNCTIONS
These wrap Gaussian functions to make working with model space more direct. Because they are contained within a log-normal library, they describe estimations relative to a log-normal curve, even though they fundamentally measure a Gaussian curve.
pdf(ln, x, empirical_quantiles)
A Probability Density Function estimates the probability density . For clarity, density is not a probability .
Parameters:
ln (LogNorm) : Object of log space parameters.
x (float) : Linear X coordinate for which a density will be estimated.
empirical_quantiles (Quantile) : Quantiles as observed in the data (optional).
Returns: Between 0 and ∞
cdf(ln, x, precise)
A Cumulative Distribution Function estimates the area under a Log-Normal curve between Zero and a linear X coordinate.
Parameters:
ln (LogNorm) : Object of log space parameters.
x (float) : Linear X coordinate .
precise (bool) : Double precision (true) or single precision (false).
Returns: Between 0 and 1
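For example, a sketch that fits the model to a rolling window of closes and reads the probability of being at or below the current close (the import path is a placeholder; the LN alias mirrors the Quick Start):
//@version=6
indicator("Log-normal CDF of close (sketch)")
// Placeholder import path; replace with the actual publisher/version.
import PUBLISHER/LogNormal/1 as LN
len = input.int(252, "Lookback", minval = 2)
var array<float> closes = array.new<float>()
array.push(closes, close)
if array.size(closes) > len
    array.shift(closes)
model  = LN.init(array.avg(closes), array.stdev(closes))
pBelow = model.cdf(close)            // P(X <= close) under the fitted log-normal
plot(pBelow, "P(X <= close)")
hline(0.5)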
ccdf(ln, x, precise)
A Complementary Cumulative Distribution Function estimates the area under a Log-Normal curve between a linear X coordinate and Infinity.
Parameters:
ln (LogNorm) : Object of log space parameters.
x (float) : Linear X coordinate .
precise (bool) : Double precision (true) or single precision (false).
Returns: Between 0 and 1
cdfinv(ln, a, precise)
An Inverse Cumulative Distribution Function reverses the Log-Normal cdf() by estimating the linear X coordinate from an area.
Parameters:
ln (LogNorm) : Object of log space parameters.
a (float) : Normalized area .
precise (bool) : Double precision (true) or single precision (false).
Returns: Between 0 and ∞
ccdfinv(ln, a, precise)
An Inverse Complementary Cumulative Distribution Function reverses the Log-Normal ccdf() by estimating the linear X coordinate from an area.
Parameters:
ln (LogNorm) : Object of log space parameters.
a (float) : Normalized area .
precise (bool) : Double precision (true) or single precision (false).
Returns: Between 0 and ∞
cdfab(ln, x1, x2, precise)
A Cumulative Distribution Function from A to B estimates the area under a Log-Normal curve between two linear X coordinates (A and B).
Parameters:
ln (LogNorm) : Object of log space parameters.
x1 (float) : First linear X coordinate .
x2 (float) : Second linear X coordinate .
precise (bool) : Double precision (true) or single precision (false).
Returns: Between 0 and 1
ott(ln, x, precise)
A One-Tailed Test transforms a linear X coordinate into an absolute Z Score before estimating the area under a Log-Normal curve between Z and Infinity.
Parameters:
ln (LogNorm) : Object of log space parameters.
x (float) : Linear X coordinate .
precise (bool) : Double precision (true) or single precision (false).
Returns: Between 0 and 0.5
ttt(ln, x, precise)
A Two-Tailed Test transforms a linear X coordinate into symmetrical ± Z Scores before estimating the area under a Log-Normal curve from Zero to -Z, and +Z to Infinity.
Parameters:
ln (LogNorm) : Object of log space parameters.
x (float) : Linear X coordinate .
precise (bool) : Double precision (true) or single precision (false).
Returns: Between 0 and 1
ottinv(ln, a, precise)
An Inverse One-Tailed Test reverses the Log-Normal ott() by estimating a linear X coordinate for the right tail from an area.
Parameters:
ln (LogNorm) : Object of log space parameters.
a (float) : Half a normalized area .
precise (bool) : Double precision (true) or single precision (false).
Returns: Between 0 and ∞
tttinv(ln, a, precise)
An Inverse Two-Tailed Test reverses the Log-Normal ttt() by estimating two linear X coordinates from an area.
Parameters:
ln (LogNorm) : Object of log space parameters.
a (float) : Normalized area .
precise (bool) : Double precision (true) or single precision (false).
Returns: Linear space tuple: the two X coordinates (lower tail, upper tail).
- - -
UNCERTAINTY
Model-based measures of uncertainty, information, and risk.
sterr(sample_size, fisher_info)
The standard error of a sample statistic.
Parameters:
sample_size (float) : Number of observations.
fisher_info (float) : Fisher information.
Returns: Between 0 and ∞
surprisal(p, base)
Quantifies the information content of a single event.
Parameters:
p (float) : Probability of the event .
base (float) : Logarithmic base (optional).
Returns: Between 0 and ∞
entropy(ln, base)
Computes the differential entropy (average surprisal).
Parameters:
ln (LogNorm) : Object of log space parameters.
base (float) : Logarithmic base (optional).
Returns: Between 0 and ∞
perplexity(ln, base)
Computes the average number of distinguishable outcomes from the entropy.
Parameters:
ln (LogNorm)
base (float) : Logarithmic base used for Entropy (optional).
Returns: Between 0 and ∞
value_at_risk(ln, p, precise)
Estimates a risk threshold under normal market conditions for a given confidence level.
Parameters:
ln (LogNorm) : Object of log space parameters.
p (float) : Probability threshold, aka. the confidence level .
precise (bool) : Double precision (true) or single precision (false).
Returns: Between 0 and ∞
value_at_risk_inv(ln, value_at_risk, precise)
Reverses the value_at_risk() by estimating the confidence level from the risk threshold.
Parameters:
ln (LogNorm) : Object of log space parameters.
value_at_risk (float) : Value at Risk.
precise (bool) : Double precision (true) or single precision (false).
Returns: Between 0 and 1
conditional_value_at_risk(ln, p, precise)
Estimates the average loss beyond a confidence level, aka. expected shortfall.
Parameters:
ln (LogNorm) : Object of log space parameters.
p (float) : Probability threshold, aka. the confidence level .
precise (bool) : Double precision (true) or single precision (false).
Returns: Between 0 and ∞
conditional_value_at_risk_inv(ln, conditional_value_at_risk, precise)
Reverses the conditional_value_at_risk() by estimating the confidence level of an average loss.
Parameters:
ln (LogNorm) : Object of log space parameters.
conditional_value_at_risk (float) : Conditional Value at Risk.
precise (bool) : Double precision (true) or single precision (false).
Returns: Between 0 and 1
partial_expectation(ln, x, precise)
Estimates the partial expectation of a linear X coordinate.
Parameters:
ln (LogNorm) : Object of log space parameters.
x (float) : Linear X coordinate .
precise (bool) : Double precision (true) or single precision (false).
Returns: Between 0 and µ
partial_expectation_inv(ln, partial_expectation, precise)
Reverses the partial_expectation() by estimating a linear X coordinate.
Parameters:
ln (LogNorm) : Object of log space parameters.
partial_expectation (float) : Partial Expectation .
precise (bool) : Double precision (true) or single precision (false).
Returns: Between 0 and ∞
conditional_expectation(ln, x, precise)
Estimates the conditional expectation of a linear X coordinate.
Parameters:
ln (LogNorm) : Object of log space parameters.
x (float) : Linear X coordinate .
precise (bool) : Double precision (true) or single precision (false).
Returns: Between X and ∞
conditional_expectation_inv(ln, conditional_expectation, precise)
Reverses the conditional_expectation by estimating a linear X coordinate.
Parameters:
ln (LogNorm) : Object of log space parameters.
conditional_expectation (float) : Conditional Expectation .
precise (bool) : Double precision (true) or single precision (false).
Returns: Between 0 and ∞
fisher(ln, log)
Computes the Fisher Information Matrix for the distribution, not a linear X coordinate.
Parameters:
ln (LogNorm) : Object of log space parameters.
log (bool) : Sets if the matrix should be in log (true) or linear (false) space.
Returns: FIM for the distribution
fisher(ln, x, log)
Computes the Fisher Information Matrix for a linear X coordinate, not the distribution itself.
Parameters:
ln (LogNorm) : Object of log space parameters.
x (float) : Linear X coordinate .
log (bool) : Sets if the matrix should be in log (true) or linear (false) space.
Returns: FIM for the linear X coordinate
confidence_interval(ln, x, sample_size, confidence, precise)
Estimates a confidence interval for a linear X coordinate.
Parameters:
ln (LogNorm) : Object of log space parameters.
x (float) : Linear X coordinate .
sample_size (float) : Number of observations.
confidence (float) : Confidence level .
precise (bool) : Double precision (true) or single precision (false).
Returns: CI for the linear X coordinate
- - -
CURVE FITTING
An overloaded function that helps transform values between spaces. The primary function uses quantiles, and the overloads wrap the primary function to make working with LogNorm more direct.
fit(x, a, b)
Transforms X coordinate between spaces A and B.
Parameters:
x (float) : Linear X coordinate from space A .
a (LogNorm | Quantile | array) : LogNorm, Quantile, or float array.
b (LogNorm | Quantile | array) : LogNorm, Quantile, or float array.
Returns: Adjusted X coordinate
- - -
EXPORTED HELPERS
Small utilities to simplify extensibility.
z_score(ln, x)
Converts a linear X coordinate into a Z Score.
Parameters:
ln (LogNorm) : Object of log space parameters.
x (float) : Linear X coordinate.
Returns: Between -∞ and +∞
x_coord(ln, z)
Converts a Z Score into a linear X coordinate.
Parameters:
ln (LogNorm) : Object of log space parameters.
z (float) : Standard normal Z Score.
Returns: Between 0 and ∞
iget(arr, index)
Gets an interpolated value of a pseudo -element (fictional element between real array elements). Useful for quantile mapping.
Parameters:
arr (array) : Float array object.
index (float) : Index of the pseudo element.
Returns: Interpolated value of the arrays pseudo element.
ICOptimizer
Library "ICOptimizer"
Library for IC-based parameter optimization
findOptimalParam(testParams, icValues, currentParam, smoothing)
Find optimal parameter from array of IC values
Parameters:
testParams (array) : Array of parameter values being tested
icValues (array) : Array of IC values for each parameter (same size as testParams)
currentParam (float) : Current parameter value (for smoothing)
smoothing (simple float) : Smoothing factor (0-1, e.g., 0.2 means 20% new, 80% old)
Returns: New parameter value, its IC, and array index
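A hedged sketch (the import path is a placeholder; the three-value tuple and the signal/return alignment expected by calcIC are assumptions based on the descriptions here):
//@version=6
indicator("IC-based lookback selection (sketch)")
// Placeholder import path; replace with the actual publisher/version.
import PUBLISHER/ICOptimizer/1 as ic
// One IC estimate per candidate momentum lookback (a 1-bar return is used as a simple proxy).
ret  = ta.roc(close, 1)
ic10 = ic.calcIC(ta.roc(close, 10), ret, 100)
ic20 = ic.calcIC(ta.roc(close, 20), ret, 100)
ic50 = ic.calcIC(ta.roc(close, 50), ret, 100)
testParams = array.from(10.0, 20.0, 50.0)
icValues   = array.from(ic10, ic20, ic50)
var float currentLen = 20.0
[newLen, bestIC, bestIdx] = ic.findOptimalParam(testParams, icValues, currentLen, 0.2)
currentLen := newLen
plot(currentLen, "Adaptive lookback")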
adaptiveParamWithStarvation(opt, testParams, icValues, smoothing, starvationThreshold, starvationJumpSize)
Adaptive parameter selection with starvation handling
Parameters:
opt (ICOptimizer) : ICOptimizer object
testParams (array) : Array of parameter values
icValues (array) : Array of IC values for each parameter
smoothing (simple float) : Normal smoothing factor
starvationThreshold (simple int) : Number of updates before triggering starvation mode
starvationJumpSize (simple float) : Jump size when in starvation (as fraction of range)
Returns: Updated parameter and IC
detectAndAdjustDomination(longCount, shortCount, currentLongLevel, currentShortLevel, dominationRatio, jumpSize, minLevel, maxLevel)
Detect signal imbalance and adjust parameters
Parameters:
longCount (int) : Number of long signals in period
shortCount (int) : Number of short signals in period
currentLongLevel (float) : Current long threshold
currentShortLevel (float) : Current short threshold
dominationRatio (simple int) : Ratio threshold (e.g., 4 = 4:1 imbalance)
jumpSize (simple float) : Size of adjustment
minLevel (simple float) : Minimum allowed level
maxLevel (simple float) : Maximum allowed level
Returns:
calcIC(signals, returns, lookback)
Parameters:
signals (float)
returns (float)
lookback (simple int)
classifyIC(currentIC, icWindow, goodPercentile, badPercentile)
Parameters:
currentIC (float)
icWindow (simple int)
goodPercentile (simple int)
badPercentile (simple int)
evaluateSignal(signal, forwardReturn)
Parameters:
signal (float)
forwardReturn (float)
updateOptimizerState(opt, signal, forwardReturn, currentIC, metaICPeriod)
Parameters:
opt (ICOptimizer)
signal (float)
forwardReturn (float)
currentIC (float)
metaICPeriod (simple int)
calcSuccessRate(successful, total)
Parameters:
successful (int)
total (int)
createICStatsTable(opt, paramName, normalSuccess, normalTotal)
Parameters:
opt (ICOptimizer)
paramName (string)
normalSuccess (int)
normalTotal (int)
initOptimizer(initialParam)
Parameters:
initialParam (float)
ICOptimizer
Fields:
currentParam (series float)
currentIC (series float)
metaIC (series float)
totalSignals (series int)
successfulSignals (series int)
goodICSignals (series int)
goodICSuccess (series int)
nonBadICSignals (series int)
nonBadICSuccess (series int)
goodICThreshold (series float)
badICThreshold (series float)
updateCounter (series int)
UTBot
Library "UTBot"
UTBot is a powerful and flexible trading toolkit implemented in Pine Script. Based on the widely recognized UT Bot strategy originally developed by Yo_adriiiiaan, with important enhancements by HPotter, this library provides users with customizable functions for dynamic trailing stop calculations using ATR (Average True Range), trend detection, and signal generation. It enables developers and traders to seamlessly integrate UT Bot logic into their own indicators and strategies without duplicating code.
Key features include:
Accurate ATR-based trailing stop and reversal detection
Multi-timeframe support for enhanced signal reliability
Clean and efficient API for easy integration and customization
Detailed documentation and examples for quick adoption
Open-source and community-friendly, encouraging collaboration and improvements
We sincerely thank Yo_adriiiiaan for the original UT Bot concept and HPotter for valuable improvements that have made this strategy even more robust. This library aims to honor their work by making the UT Bot methodology accessible to Pine Script developers worldwide.
This library is designed for Pine Script programmers looking to leverage the proven UT Bot methodology to build robust trading systems with minimal effort and maximum maintainability.
UTBot(h, l, c, multi, leng)
Parameters:
h (float) : high
l (float) : low
c (float) : close
multi (float) : multiplier for ATR
leng (int) : length for ATR
Returns:
xATRTS - ATR-based trailing stop value
pos - pos == 1 long position, pos == -1 short position
signal - 0 no signal, 1 buy, -1 sell
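A hedged usage sketch, assuming the library is imported under the alias `ut` (the publisher path is omitted and must be taken from the published library):
// Hypothetical usage - replace <publisher> with the actual import path.
// import <publisher>/UTBot/1 as ut
[xATRTS, pos, sig] = ut.UTBot(high, low, close, 2.0, 10)
plot(xATRTS, "UT trailing stop", color = pos == 1 ? color.green : color.red)
plotshape(sig == 1, title = "Buy", style = shape.triangleup, location = location.belowbar)
plotshape(sig == -1, title = "Sell", style = shape.triangledown, location = location.abovebar)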
JK_Traders_Reality_Lib
Library "JK_Traders_Reality_Lib"
This library contains common elements used in Traders Reality scripts
calcPvsra(pvsraVolume, pvsraHigh, pvsraLow, pvsraClose, pvsraOpen, redVectorColor, greenVectorColor, violetVectorColor, blueVectorColor, darkGreyCandleColor, lightGrayCandleColor)
calculate the pvsra candle color and return the color as well as an alert if a vector candle has appeared.
Situation "Climax"
Bars with volume >= 200% of the average volume of the 10 previous chart TFs, or bars
where the product of candle spread x candle volume is >= the highest for the 10 previous
chart time TFs.
Default Colors: Bull bars are green and bear bars are red.
Situation "Volume Rising Above Average"
Bars with volume >= 150% of the average volume of the 10 previous chart TFs.
Default Colors: Bull bars are blue and bear are violet.
Parameters:
pvsraVolume (float) : the instrument volume series (obtained from request.security)
pvsraHigh (float) : the instrument high series (obtained from request.security)
pvsraLow (float) : the instrument low series (obtained from request.security)
pvsraClose (float) : the instrument close series (obtained from request.security)
pvsraOpen (float) : the instrument open series (obtained from request.security)
redVectorColor (simple color) : red vector candle color
greenVectorColor (simple color) : green vector candle color
violetVectorColor (simple color) : violet/pink vector candle color
blueVectorColor (simple color) : blue vector candle color
darkGreyCandleColor (simple color) : regular volume candle down candle color - not a vector
lightGrayCandleColor (simple color) : regular volume candle up candle color - not a vector
@return
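The two volume situations above translate into simple conditions; a hedged sketch of those rules (illustrative only, not the library's exact code):
// Hedged sketch of the vector-candle conditions described above.
avgVol10   = ta.sma(volume, 10)[1]                 // average volume of the 10 previous bars
spreadXVol = (high - low) * volume                 // candle spread x candle volume
isClimax   = volume >= 2 * avgVol10 or spreadXVol >= ta.highest(spreadXVol, 10)[1]
isRising   = not isClimax and volume >= 1.5 * avgVol10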
adr(length, barsBack)
Parameters:
length (simple int) : how many elements of the series to calculate on
barsBack (simple int) : starting position for the length calculation - current bar or some other value, e.g. last bar
@return adr the adr for the specified length
adrHigh(adr, fromDo)
Calculate the ADR high given an ADR
Parameters:
adr (float) : the adr
fromDo (simple bool) : boolean flag, if false calculate traditional adr from high low of today, if true calculate from exchange midnight
@return adrHigh the position of the adr high in price
adrLow(adr, fromDo)
Parameters:
adr (float) : the adr
fromDo (simple bool) : boolean flag, if false calculate traditional adr from high low of today, if true calculate from exchange midnight
@return adrLow the position of the adr low in price
splitSessionString(sessXTime)
given a session in the format 0000-0100:23456 split out the hours and minutes
Parameters:
sessXTime (simple string) : the session time string usually in the format 0000-0100:23456
@return
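A hedged sketch of how a session string such as "0000-0100:23456" can be split (the library's actual return shape may differ):
// Hedged sketch: pull the start and end times out of the session string.
sessXTime = "0000-0100:23456"
timePart  = array.get(str.split(sessXTime, ":"), 0)  // "0000-0100"
startHHMM = array.get(str.split(timePart, "-"), 0)   // "0000"
endHHMM   = array.get(str.split(timePart, "-"), 1)   // "0100"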
calcSessionStartEnd(sessXTime, gmt)
calculate the start and end timestamps of the session
Parameters:
sessXTime (simple string) : the session time string usually in the format 0000-0100:23456
gmt (simple string) : the gmt offset string usually in the format GMT+1 or GMT+2 etc
@return
drawOpenRange(sessXTime, sessXcol, showOrX, gmt)
draw open range for a session
Parameters:
sessXTime (simple string) : session string in the format 0000-0100:23456
sessXcol (simple color) : the color to be used for the opening range box shading
showOrX (simple bool) : boolean flag to toggle displaying the opening range
gmt (simple string) : the gmt offset string usually in the format GMT+1 or GMT+2 etc
@return void
drawSessionHiLo(sessXTime, showRectangleX, showLabelX, sessXcolLabel, sessXLabel, gmt, sessionLineStyle)
Parameters:
sessXTime (simple string) : session string in the format 0000-0100:23456
showRectangleX (simple bool)
showLabelX (simple bool)
sessXcolLabel (simple color) : the color to be used for the hi/low lines and label
sessXLabel (simple string) : the session label text
gmt (simple string) : the gmt offset string usually in the format GMT+1 or GMT+2 etc
sessionLineStyle (simple string) : the line style for the session high/low lines
@return void
calcDst()
calculate market session dst on/off flags
@return indicating if DST is on or off for a particular region
timestampPreviousDayOfWeek(previousDayOfWeek, hourOfDay, gmtOffset, oneWeekMillis)
Timestamp any of the 6 previous days in the week (such as last Wednesday at 21 hours GMT)
Parameters:
previousDayOfWeek (simple string) : Monday or Saturday
hourOfDay (simple int) : the hour of the day when psy calc is to start
gmtOffset (simple string) : the gmt offset string usually in the format GMT+1 or GMT+2 etc
oneWeekMillis (simple int) : the amount of time in a week in milliseconds
@return the timestamp of the psy level calculation start time
getdayOpen()
get the daily open - basically exchange midnight
@return the daily open value which is float price
newBar(res)
new_bar: check if we're on a new bar within the session in a given resolution
Parameters:
res (simple string) : the desired resolution
@return true/false if a new bar for the session has started
toPips(val)
to_pips Convert value to pips
Parameters:
val (float) : the value to convert to pips
@return the value in pips
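One common convention treats a pip as ten minimum ticks on FX symbols; a hedged sketch under that assumption (the library's actual conversion may differ):
// Hedged sketch using the common FX convention of 1 pip = 10 * mintick.
to_pips_sketch(float val) =>
    val / (syminfo.mintick * 10)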
rLabel(ry, rtext, rstyle, rcolor, valid, labelXOffset)
a function that draws a right-aligned label for a series during the current bar
Parameters:
ry (float) : series float the y coordinate of the label
rtext (simple string) : the text of the label
rstyle (simple string) : the style for the label
rcolor (simple color) : the color for the label
valid (simple bool) : a boolean flag that allows for turning on or off a label
labelXOffset (int) : how much to offset the label from the current position
rLabelOffset(ry, rtext, rstyle, rcolor, valid, labelOffset)
a function that draws a right-aligned label for a series during the current bar
Parameters:
ry (float) : series float the y coordinate of the label
rtext (string) : the text of the label
rstyle (simple string) : the style for the label
rcolor (simple color) : the color for the label
valid (simple bool) : a boolean flag that allows for turning on or off a label
labelOffset (int)
rLabelLastBar(ry, rtext, rstyle, rcolor, valid, labelXOffset)
a function that draws a right-aligned label for a series only on the last bar
Parameters:
ry (float) : series float the y coordinate of the label
rtext (string) : the text of the label
rstyle (simple string) : the style for the label
rcolor (simple color) : the color for the label
valid (simple bool) : a boolean flag that allows for turning on or off a label
labelXOffset (int) : how much to offset the label from the current position
drawLine(xSeries, res, tag, xColor, xStyle, xWidth, xExtend, isLabelValid, xLabelOffset, validTimeFrame)
a function that draws a line and a label for a series
Parameters:
xSeries (float) : series float the y coordinate of the line/label
res (simple string) : the desired resolution controlling when a new line will start
tag (simple string) : the text for the label
xColor (simple color) : the color for the label
xStyle (simple string) : the style for the line
xWidth (simple int) : the width of the line
xExtend (simple string) : extend the line
isLabelValid (simple bool) : a boolean flag that allows for turning on or off a label
xLabelOffset (int)
validTimeFrame (simple bool) : a boolean flag that allows for turning on or off a line drawn
drawLineDO(xSeries, res, tag, xColor, xStyle, xWidth, xExtend, isLabelValid, xLabelOffset, validTimeFrame)
a function that draws a line and a label for the daily open series
Parameters:
xSeries (float) : series float the y coordinate of the line/label
res (simple string) : the desired resolution controlling when a new line will start
tag (simple string) : the text for the label
xColor (simple color) : the color for the label
xStyle (simple string) : the style for the line
xWidth (simple int) : the width of the line
xExtend (simple string) : extend the line
isLabelValid (simple bool) : a boolean flag that allows for turning on or off a label
xLabelOffset (int)
validTimeFrame (simple bool) : a boolean flag that allows for turning on or off a line drawn
drawPivot(pivotLevel, res, tag, pivotColor, pivotLabelColor, pivotStyle, pivotWidth, pivotExtend, isLabelValid, validTimeFrame, levelStart, pivotLabelXOffset)
draw a pivot line - the line starts one day into the past
Parameters:
pivotLevel (float) : series of the pivot point
res (simple string) : the desired resolution
tag (simple string) : the text to appear
pivotColor (simple color) : the color of the line
pivotLabelColor (simple color) : the color of the label
pivotStyle (simple string) : the line style
pivotWidth (simple int) : the line width
pivotExtend (simple string) : extend the line
isLabelValid (simple bool) : boolean param allows to turn label on and off
validTimeFrame (simple bool) : only draw the line and label at a valid timeframe
levelStart (int) : basically when to start drawing the levels
pivotLabelXOffset (int) : how much to offset the label from its current position
@return the pivot line series
getPvsraFlagByColor(pvsraColor, redVectorColor, greenVectorColor, violetVectorColor, blueVectorColor, lightGrayCandleColor)
convert the pvsra color to an internal code
Parameters:
pvsraColor (color) : the calculated pvsra color
redVectorColor (simple color) : the user defined red vector color
greenVectorColor (simple color) : the user defined green vector color
violetVectorColor (simple color) : the user defined violet vector color
blueVectorColor (simple color) : the user defined blue vector color
lightGrayCandleColor (simple color) : the user defined regular up candle color
@return pvsra internal code
updateZones(pvsra, direction, boxArr, maxlevels, pvsraHigh, pvsraLow, pvsraOpen, pvsraClose, transperancy, zoneupdatetype, zonecolor, zonetype, borderwidth, coloroverride, redVectorColor, greenVectorColor, violetVectorColor, blueVectorColor)
a function that draws the unrecovered vector candle zones
Parameters:
pvsra (int) : internal code
direction (simple int) : above or below the current pa
boxArr (array) : the array containing the boxes that need to be updated
maxlevels (simple int) : the maximum number of boxes to draw
pvsraHigh (float) : the pvsra high value series
pvsraLow (float) : the pvsra low value series
pvsraOpen (float) : the pvsra open value series
pvsraClose (float) : the pvsra close value series
transperancy (simple int) : the transparency of the vector candle zones
zoneupdatetype (simple string) : the zone update type
zonecolor (simple color) : the zone color if overridden
zonetype (simple string) : the zone type
borderwidth (simple int) : the width of the border
coloroverride (simple bool) : whether the color is overridden
redVectorColor (simple color) : the user defined red vector color
greenVectorColor (simple color) : the user defined green vector color
violetVectorColor (simple color) : the user defined violet vector color
blueVectorColor (simple color) : the user defined blue vector color
cleanarr(arr)
clean an array from na values
Parameters:
arr (array) : the array to clean
@return if the array was cleaned
calcPsyLevels(oneWeekMillis, showPsylevels, psyType, sydDST)
calculate the psy levels
4 hour res based on how mt4 does it
mt4 code
int Li_4 = iBarShift(NULL, PERIOD_H4, iTime(NULL, PERIOD_W1, Li_0)) - 2 - Offset;
ObjectCreate("PsychHi", OBJ_TREND, 0, Time , iHigh(NULL, PERIOD_H4, iHighest(NULL, PERIOD_H4, MODE_HIGH, 2, Li_4)), iTime(NULL, PERIOD_W1, 0), iHigh(NULL, PERIOD_H4,
iHighest(NULL, PERIOD_H4, MODE_HIGH, 2, Li_4)));
so basically because the session is 8 hours and we are looking at a 4 hour resolution we only need to take the highest high and lowest low of 2 bars
we use the gmt offset to adjust the 0000-0800 session to Sydney open, which is at 2100 during dst and at 2200 otherwise. (dst - spring forward, fall back)
keep in mind Sydney is in the southern hemisphere, so dst is opposite of when London and New York go into dst
Parameters:
oneWeekMillis (simple int) : a constant value
showPsylevels (simple bool) : should psy levels be calculated
psyType (simple string) : the type of Psylevels - crypto or forex
sydDST (bool) : is Sydney in DST
@return
adrHiLo(length, barsBack, fromDO)
Parameters:
length (simple int) : how many elements of the series to calculate on
barsBack (simple int) : starting position for the length calculation - current bar or some other value, e.g. last bar
fromDO (simple bool) : boolean flag, if false calculate traditional adr from high low of today, if true calculate from exchange midnight
@return adr, adrLow and adrHigh - the adr, the position of the adr High and adr Low with respect to price
drawSessionHiloLite(sessXTime, showRectangleX, showLabelX, sessXcolLabel, sessXLabel, gmt, sessionLineStyle, sessXcol)
Parameters:
sessXTime (simple string) : session string in the format 0000-0100:23456
showRectangleX (simple bool)
showLabelX (simple bool)
sessXcolLabel (simple color) : the color to be used for the hi/low lines and label
sessXLabel (simple string) : the session label text
gmt (simple string) : the gmt offset string usually in the format GMT+1 or GMT+2 etc
sessionLineStyle (simple string) : the line style for the session high/low lines
sessXcol (simple color) : the color for the box that will shade the session
@return void
msToHmsString(ms)
converts milliseconds into an h:mm:ss string. For example, 61000 ms to '0:01:01'
Parameters:
ms (int) : - the milliseconds to convert to h:mm:ss
@return string - the converted h:mm:ss string
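A hedged sketch of the conversion used in the example above (61000 ms maps to '0:01:01'); this is illustrative, not the library's code:
// Hedged sketch: break milliseconds into hours, minutes and seconds.
ms_to_hms_sketch(int ms) =>
    int totalSec = math.floor(ms / 1000)
    int h = math.floor(totalSec / 3600)
    int m = math.floor(totalSec % 3600 / 60)
    int s = totalSec % 60
    str.tostring(h) + ":" + str.format("{0,number,00}", m) + ":" + str.format("{0,number,00}", s)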
countdownString(openToday, closeToday, showMarketsWeekends, oneDay)
Calculates how much time is left until the next session, taking the session start and end times into account. Note this function does not work on intraday sessions.
Parameters:
openToday (int) : - timestamp of when the session opens in general - note it's a series because the timestamp was created using the dst flag, which is a series itself, thus producing a timestamp series
closeToday (int) : - timestamp of when the session closes in general - note it's a series because the timestamp was created using the dst flag, which is a series itself, thus producing a timestamp series
@return a countdown of when the session next opens, or 'Open' if the session is open now
showMarketsWeekends (simple bool)
oneDay (simple int)
countdownStringSyd(sydOpenToday, sydCloseToday, showMarketsWeekends, oneDay)
Calculates how much time is left until the next session, taking the session start and end times into account. A special case for intraday sessions such as Sydney.
Parameters:
sydOpenToday (int)
sydCloseToday (int)
showMarketsWeekends (simple bool)
oneDay (simple int)
Market Structure Report Library [TradingFinder]
🔵 Introduction
Market Structure is one of the most fundamental concepts in Price Action and Smart Money theory. In simple terms, it represents how price moves between highs and lows and reveals which phase of the market cycle we are currently in: uptrend, downtrend, or transition.
Each structure in the market is formed by a combination of Breaks of Structure (BoS) and Changes of Character (CHoCH) :
BoS occurs when the market breaks a previous high or low, confirming the continuation of the current trend.
CHoCH occurs when price breaks in the opposite direction for the first time, signaling a potential trend reversal.
Since price movement is inherently fractal, market structure can be analyzed on two distinct levels :
Major / External Structure: represents the dominant macro trend.
Minor / Internal Structure: represents corrective or smaller-scale movements within the larger trend.
🔵 Library Purpose
The “Market Structure Report Library” is designed to automatically detect the current market structure type in real time.
Without drawing or displaying any visuals, it analyzes raw price data and returns a series of logical and textual outputs (Return Values) that describe the current structural state of the market.
It provides the following information :
Trend Type :
External Trend (Major): Up Trend, Down Trend, No Trend
Internal Trend (Minor): Up Trend, Down Trend, No Trend
Structure Type :
BoS : Confirms trend continuation
CHoCH : Indicates a potential trend reversal
Consecutive BoS Counter : Measures trend strength on both Major and Minor levels.
Candle Type : Returns the current candle’s condition (Bullish, Bearish, Doji)
This library is specifically designed for use in Smart Money–based screeners, indicators, and algorithmic strategies.
It can analyze multiple symbols and timeframes simultaneously and return the exact structure type (BoS or CHoCH) and trend direction for each.
🔵 Function Outputs
The function MS() processes the price data and returns seven key outputs,
each representing a distinct structural state of the market. These values can be used in indicators, strategies, or multi-symbol screeners.
🟣 ExternalTrend
Type : string
Description : Represents the direction of the Major (External) market structure.
Possible values :
Up Trend
Down Trend
No Trend
This is determined based on the behavior of Major Pivots (swing highs/lows).
🟣 InternalTrend
Type : string
Description : Represents the direction of the Minor (Internal) market structure.
Possible values :
Up Trend
Down Trend
No Trend
🟣 M_State
Type : string
Description : Specifies the type of the latest Major Structure event.
Possible values :
BoS
CHoCH
🟣 m_State
Type : string
Description : Specifies the type of the latest Minor Structure event.
Possible values :
BoS
CHoCH
🟣 MBoS_Counter
Type : integer
Description : Counts the number of consecutive structural breaks (BoS) in the Major structure.
Useful for evaluating trend strength :
Increasing count: indicates trend continuation.
Reset to zero: typically occurs after a CHoCH.
🟣 mBoS_Counter
Type : integer
Description : Counts the number of consecutive structural breaks in the Minor structure.
Helps analyze the micro structure of the market on lower timeframes.
Higher value : strong internal trend.
Reset : indicates a minor pullback or reversal.
🟣 Candle_Type
Type : string
Description : Represents the type of the current candle.
Possible values :
Bullish
Bearish
Doji
import TFlab/Market_Structure_Report_Library_TradingFinder/1 as MSS
PP = input.int(5, 'Market Structure Pivot Period', group = 'Symbol 1')
[ExternalTrend, InternalTrend, M_State, m_State, MBoS_Counter, mBoS_Counter, Candle_Type] = MSS.MS(PP)
Adaptive FoS Library
This library provides Adaptive Functions that I use in my scripts. For calculations, I use the max_bars_back function with a fixed length of 200 bars to prevent errors when a script tries to access data beyond its available history. This is a key difference from most other adaptive libraries: if you don’t need it, you don’t have to use it.
Some of the adaptive length functions are normalized. In addition to the adaptive length functions, this library includes various methods for calculating moving averages, normalized differences between fast and slow MA's, as well as several normalized oscillators.
utilities
Library for commonly used utilities for visualizing rolling returns, correlations, and Sharpe ratios
ATR by Session Library [1CG]
Library "ATRxSession"
This library shows you how big the bars usually are during a trading session. It looks only at the times you choose (like New York or London hours), measures the “true range” of every bar in that session, then finds the average for that session. It keeps the last N sessions and gives you their overall average, so you can quickly see how much the market typically moves per bar during your chosen session.
Call getSessionAtr(timezone, session, sessionCount) from your script, and it will return a single number: the average per-bar volatility during the chosen session, based on the last N completed sessions. This makes it easy to plug session-specific volatility into your own indicators or strategies.
getSessionAtr(_timezone, _session, _sessionCount)
getSessionAtr - Computes a session-aware ATR over completed sessions.
Parameters:
_timezone (string) : (string) - Timezone string to evaluate session timing.
_session (string) : (string) - Session time range string (e.g., "0930-1600").
_sessionCount (int) : (int) - Number of past completed sessions to include in the rolling average.
Returns: (float) - The average ATR across the last N completed sessions, or na if not enough data.
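A hedged usage sketch, assuming the library is imported under the alias `sAtr` (the publisher path is omitted):
// Hypothetical usage - replace <publisher> with the actual import path.
// import <publisher>/ATRxSession/1 as sAtr
nySessionAtr = sAtr.getSessionAtr("America/New_York", "0930-1600", 10)
plot(nySessionAtr, "Average per-bar range over the last 10 NY sessions")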
TimeSeriesBenchmarkMeasures
Library "TimeSeriesBenchmarkMeasures"
Time Series Benchmark Metrics.
Provides a comprehensive set of functions for benchmarking time series data, allowing you to evaluate the accuracy, stability, and risk characteristics of various models or strategies. The functions cover a wide range of statistical measures, including accuracy metrics (MAE, MSE, RMSE, NRMSE, MAPE, SMAPE), autocorrelation analysis (ACF, ADF), and risk measures (Theil's Inequality, Sharpness, Resolution, Coverage, and Pinball).
___
Reference:
- github.com .
- medium.com .
- www.salesforce.com .
- towardsdatascience.com .
- github.com .
mae(actual, forecasts)
In statistics, mean absolute error (MAE) is a measure of errors between paired observations expressing the same phenomenon. Examples of Y versus X include comparisons of predicted versus observed, subsequent time versus initial time, and one technique of measurement versus an alternative technique of measurement.
Parameters:
actual (array) : List of actual values.
forecasts (array) : List of forecasts values.
Returns: - Mean Absolute Error (MAE).
___
Reference:
- en.wikipedia.org .
- The Orange Book of Machine Learning - Carl McBride Ellis .
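A hedged Pine sketch of the MAE definition above (illustrative, not the library's implementation):
// Hedged sketch: mean of the absolute paired errors.
mae_sketch(array<float> actual, array<float> forecasts) =>
    float sumAbs = 0.0
    int   n      = array.size(actual)
    for i = 0 to n - 1
        sumAbs += math.abs(array.get(actual, i) - array.get(forecasts, i))
    sumAbs / n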
mse(actual, forecasts)
The Mean Squared Error (MSE) is a measure of the quality of an estimator. As it is derived from the square of Euclidean distance, it is always a positive value that decreases as the error approaches zero.
Parameters:
actual (array) : List of actual values.
forecasts (array) : List of forecasts values.
Returns: - Mean Squared Error (MSE).
___
Reference:
- en.wikipedia.org .
rmse(targets, forecasts, order, offset)
Calculates the Root Mean Squared Error (RMSE) between target observations and forecasts. RMSE is a standard measure of the differences between values predicted by a model and the values actually observed.
Parameters:
targets (array) : List of target observations.
forecasts (array) : List of forecasts.
order (int) : Model order parameter that determines the starting position in the targets array, `default=0`.
offset (int) : Forecast offset related to target, `default=0`.
Returns: - RMSE value.
nmrse(targets, forecasts, order, offset)
Normalised Root Mean Squared Error.
Parameters:
targets (array) : List of target observations.
forecasts (array) : List of forecasts.
order (int) : Model order parameter that determines the starting position in the targets array, `default=0`.
offset (int) : Forecast offset related to target, `default=0`.
Returns: - NRMSE value.
rmse_interval(targets, forecasts)
Root Mean Squared Error for a set of interval windows. Computes RMSE by converting interval forecasts (with min/max bounds) into point forecasts using the mean of the interval bounds, then compares against actual target values.
Parameters:
targets (array) : List of target observations.
forecasts (matrix) : The forecasted values in matrix format with at least 2 columns (min, max).
Returns: - RMSE value for the combined interval list.
mape(targets, forecasts)
Mean Absolute Percentage Error (MAPE).
Parameters:
targets (array) : List of target observations.
forecasts (array) : List of forecasts.
Returns: - MAPE value.
smape(targets, forecasts, mode)
Symmetric Mean Absolute Percentage Error (SMAPE). Calculates the symmetric MAPE between actual targets and forecasts. SMAPE is a common metric for evaluating forecast accuracy, expressed as a percentage; lower values indicate better forecast accuracy.
Parameters:
targets (array) : List of target observations.
forecasts (array) : List of forecasts.
mode (int) : Type of method: default=0:`sum(abs(Fi-Ti)) / sum(Fi+Ti)` , 1:`mean(abs(Fi-Ti) / ((Fi + Ti) / 2))` , 2:`mean(abs(Fi-Ti) / (abs(Fi) + abs(Ti))) * 100`
Returns: - SMAPE value.
mape_interval(targets, forecasts)
Mean Absolute Percentage Error for a set of interval windows.
Parameters:
targets (array) : List of target observations.
forecasts (matrix) : The forecasted values in matrix format with at least 2 columns (min, max).
Returns: - MAPE value for the combined interval list.
acf(data, k)
Autocorrelation Function (ACF) for a time series at a specified lag.
Parameters:
data (array) : Sample data of the observations.
k (int) : The lag period for which to calculate the autocorrelation. Must be a non-negative integer.
Returns: - The autocorrelation value at the specified lag, ranging from -1 to 1.
___
The autocorrelation function measures the linear dependence between observations in a time series
at different time lags. It quantifies how well the series correlates with itself at different
time intervals, which is useful for identifying patterns, seasonality, and the appropriate
lag structure for time series models.
ACF values close to 1 indicate strong positive correlation, values close to -1 indicate
strong negative correlation, and values near 0 indicate no linear correlation.
___
Reference:
- statisticsbyjim.com
acf_multiple(data, k)
Autocorrelation function (ACF) for a time series at a set of specified lags.
Parameters:
data (array) : Sample data of the observations.
k (array) : List of lag periods for which to calculate the autocorrelation. Must be a non-negative integer.
Returns: - List of ACF values for provided lags.
___
The autocorrelation function measures the linear dependence between observations in a time series
at different time lags. It quantifies how well the series correlates with itself at different
time intervals, which is useful for identifying patterns, seasonality, and the appropriate
lag structure for time series models.
ACF values close to 1 indicate strong positive correlation, values close to -1 indicate
strong negative correlation, and values near 0 indicate no linear correlation.
___
Reference:
- statisticsbyjim.com
adfuller(data, n_lag, conf)
Augmented Dickey-Fuller test for stationarity.
Parameters:
data (array) : Data series.
n_lag (int) : Maximum lag.
conf (string) : Confidence Probability level used to test for critical value, (`90%`, `95%`, `99%`).
Returns: - `adf` The test statistic.
- `crit` Critical value for the test statistic at the 10 % levels.
- `nobs` Number of observations used for the ADF regression and calculation of the critical values.
___
The Augmented Dickey-Fuller test is used to determine whether a time series is stationary
or contains a unit root (non-stationary). The null hypothesis is that the series has a unit root
(is non-stationary), while the alternative hypothesis is that the series is stationary.
A stationary time series has statistical properties that do not change over time, making it
suitable for many time series forecasting models. If the test statistic is less than the
critical value, we reject the null hypothesis and conclude the series is stationary.
___
Reference:
- www.jstor.org
- en.wikipedia.org
theils_inequality(targets, forecasts)
Calculates Theil's Inequality Coefficient, a measure of forecast accuracy that quantifies the relative difference between actual and predicted values.
Parameters:
targets (array) : List of target observations.
forecasts (array) : Matrix with list of forecasts, ordered column wise.
Returns: - Theil's Inequality Coefficient value, value closer to 0 is better.
___
Theil's Inequality Coefficient is calculated as: `sqrt(Sum((y_i - f_i)^2)) / (sqrt(Sum(y_i^2)) + sqrt(Sum(f_i^2)))`
where `y_i` represents actual values and `f_i` represents forecast values.
This metric ranges from 0 to infinity, with 0 indicating perfect forecast accuracy.
___
Reference:
- en.wikipedia.org
sharpness(forecasts)
The average width of the forecast intervals across all observations, representing the sharpness or precision of the predictive intervals.
Parameters:
forecasts (matrix) : The forecasted values in matrix format with at least 2 columns (min, max).
Returns: - Sharpness The sharpness level, which is the average width of all prediction intervals across the forecast horizon.
___
Sharpness is an important metric for evaluating forecast quality. It measures how narrow or wide the
prediction intervals are. Higher sharpness (narrower intervals) indicates greater precision in the
forecast intervals, while lower sharpness (wider intervals) suggests less precision.
The sharpness metric is calculated as the mean of the interval widths across all observations, where
each interval width is the difference between the upper and lower bounds of the prediction interval.
Note: This function assumes that the forecasts matrix has at least 2 columns, with the first column
representing the lower bounds and the second column representing the upper bounds of prediction intervals.
___
Reference:
- Hyndman, R. J., & Athanasopoulos, G. (2018). Forecasting: principles and practice. OTexts. otexts.com
resolution(forecasts)
Calculates the resolution of forecast intervals, measuring the average absolute difference between individual forecast interval widths and the overall sharpness measure.
Parameters:
forecasts (matrix) : The forecasted values in matrix format with at least 2 columns (min, max).
Returns: - The average absolute difference between individual forecast interval widths and the overall sharpness measure, representing the resolution of the forecasts.
___
Resolution is a key metric for evaluating forecast quality that measures the consistency of prediction
interval widths. It quantifies how much the individual forecast intervals vary from the average interval
width (sharpness). High resolution indicates that the forecast intervals are relatively consistent
across observations, while low resolution suggests significant variation in interval widths.
The resolution is calculated as the mean absolute deviation of individual interval widths from the
overall sharpness value. This provides insight into the uniformity of the forecast uncertainty
estimates across the forecast horizon.
Note: This function requires the forecasts matrix to have at least 2 columns (min, max) representing
the lower and upper bounds of prediction intervals.
___
Reference:
- (sites.stat.washington.edu)
- (www.jstor.org)
coverage(targets, forecasts)
Calculates the coverage probability, which is the percentage of target values that fall within the corresponding forecasted prediction intervals.
Parameters:
targets (array) : List of target values.
forecasts (matrix) : The forecasted values in matrix format with at least 2 columns (min, max).
Returns: - Percent of target values that fall within their corresponding forecast intervals, expressed as a decimal value between 0 and 1 (or 0% and 100%).
___
Coverage probability is a crucial metric for evaluating the reliability of prediction intervals.
It measures how well the forecast intervals capture the actual observed values. An ideal forecast
should have a coverage probability close to the nominal confidence level (e.g., 90%, 95%, or 99%).
For example, if a 95% prediction interval is used, we expect approximately 95% of the actual
target values to fall within those intervals. If the coverage is significantly lower than the
nominal level, the intervals may be too narrow; if it's significantly higher, the intervals may
be too wide.
Note: This function requires the targets array and forecasts matrix to have the same number of
observations, and the forecasts matrix must have at least 2 columns (min, max) representing
the lower and upper bounds of prediction intervals.
___
Reference:
- (www.jstor.org)
pinball(tau, target, forecast)
Pinball loss function, measures the asymmetric loss for quantile forecasts.
Parameters:
tau (float) : The quantile level (between 0 and 1), where 0.5 represents the median.
target (float) : The actual observed value to compare against.
forecast (float) : The forecasted value.
Returns: - The Pinball loss value, which quantifies the distance between the forecast and target relative to the specified quantile level.
___
The Pinball loss function is specifically designed for evaluating quantile forecasts. It is
asymmetric, meaning it penalizes underestimates and overestimates differently depending on the
quantile level being evaluated.
For a given quantile τ, the loss function is defined as:
- If target >= forecast: (target - forecast) * τ
- If target < forecast: (forecast - target) * (1 - τ)
This loss function is commonly used in quantile regression and probabilistic forecasting
to evaluate how well forecasts capture specific quantiles of the target distribution.
___
Reference:
- (www.otexts.com)
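The asymmetric rule above translates directly into a one-line sketch (illustrative, not the library's code):
// Hedged sketch of the pinball loss as defined above.
pinball_sketch(float tau, float target, float forecast) =>
    target >= forecast ? (target - forecast) * tau : (forecast - target) * (1 - tau)
// pinball_sketch(0.9, 100.0, 90.0)  = 9.0  (under-forecast penalised by tau)
// pinball_sketch(0.9, 100.0, 110.0) = 1.0  (over-forecast penalised by 1 - tau)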
pinball_mean(tau, targets, forecasts)
Calculates the mean pinball loss for quantile regression.
Parameters:
tau (float) : The quantile level (between 0 and 1), where 0.5 represents the median.
targets (array) : The actual observed values to compare against.
forecasts (matrix) : The forecasted values in matrix format with at least 2 columns (min, max).
Returns: - The mean pinball loss value across all observations.
___
The pinball_mean() function computes the average Pinball loss across multiple observations,
making it suitable for evaluating overall forecast performance in quantile regression tasks.
This function leverages the asymmetric Pinball loss function to evaluate how well forecasts
capture specific quantiles of the target distribution. The choice of which column from the
forecasts matrix to use depends on the quantile level:
- For τ ≤ 0.5: Uses the first column (min) of forecasts
- For τ > 0.5: Uses the second column (max) of forecasts
This loss function is commonly used in quantile regression and probabilistic forecasting
to evaluate how well forecasts capture specific quantiles of the target distribution.
___
Reference:
- (www.otexts.com)
Correlation HeatMap Matrix Data [TradingFinder]
🔵 Introduction
Correlation is a statistical measure that shows the degree and direction of a linear relationship between two assets.
Its value ranges from -1 to +1 : +1 means perfect positive correlation, 0 means no linear relationship, and -1 means perfect negative correlation.
In financial markets, correlation is used for portfolio diversification, risk management, pairs trading, intermarket analysis, and identifying divergences.
Correlation HeatMap Matrix Data TradingFinder is a Pine Script v6 library that calculates and returns raw correlation matrix data between up to 20 symbols. It only provides the data – it does not draw or render the heatmap – making it ideal for use in other scripts that handle visualization or further analysis. The library uses ta.correlation for fast and accurate calculations.
It also includes two helper functions for visual styling :
CorrelationColor(corr) : takes the correlation value as input and generates a smooth gradient color, ranging from strong negative to strong positive correlation.
CorrelationTextColor(corr) : takes the correlation value as input and returns a text color that ensures optimal contrast over the background color.
Library "Correlation_HeatMap_Matrix_Data_TradingFinder"
CorrelationColor(corr)
Parameters:
corr (float)
CorrelationTextColor(corr)
Parameters:
corr (float)
Data_Matrix(Corr_Period, Sym_1, Sym_2, Sym_3, Sym_4, Sym_5, Sym_6, Sym_7, Sym_8, Sym_9, Sym_10, Sym_11, Sym_12, Sym_13, Sym_14, Sym_15, Sym_16, Sym_17, Sym_18, Sym_19, Sym_20)
Parameters:
Corr_Period (int)
Sym_1 (string)
Sym_2 (string)
Sym_3 (string)
Sym_4 (string)
Sym_5 (string)
Sym_6 (string)
Sym_7 (string)
Sym_8 (string)
Sym_9 (string)
Sym_10 (string)
Sym_11 (string)
Sym_12 (string)
Sym_13 (string)
Sym_14 (string)
Sym_15 (string)
Sym_16 (string)
Sym_17 (string)
Sym_18 (string)
Sym_19 (string)
Sym_20 (string)
🔵 How to use
Import the library into your Pine Script using the import keyword and its full namespace.
Decide how many symbols you want to include in your correlation matrix (up to 20). Each symbol must be provided as a string, for example FX:EURUSD .
Choose the correlation period (Corr_Period) in bars. This is the lookback window used for the calculation, such as 20, 50, or 100 bars.
Call Data_Matrix(Corr_Period, Sym_1, ..., Sym_20) with your selected parameters. The function will return an array containing the correlation values for every symbol pair (upper triangle of the matrix plus diagonal).
For example :
var string Sym_1 = '' , var string Sym_2 = '' , var string Sym_3 = '' , var string Sym_4 = '' , var string Sym_5 = '' , var string Sym_6 = '' , var string Sym_7 = '' , var string Sym_8 = '' , var string Sym_9 = '' , var string Sym_10 = ''
var string Sym_11 = '', var string Sym_12 = '', var string Sym_13 = '', var string Sym_14 = '', var string Sym_15 = '', var string Sym_16 = '', var string Sym_17 = '', var string Sym_18 = '', var string Sym_19 = '', var string Sym_20 = ''
switch Market
'Forex' => Sym_1 := 'EURUSD' , Sym_2 := 'GBPUSD' , Sym_3 := 'USDJPY' , Sym_4 := 'USDCHF' , Sym_5 := 'USDCAD' , Sym_6 := 'AUDUSD' , Sym_7 := 'NZDUSD' , Sym_8 := 'EURJPY' , Sym_9 := 'EURGBP' , Sym_10 := 'GBPJPY'
,Sym_11 := 'AUDJPY', Sym_12 := 'EURCHF', Sym_13 := 'EURCAD', Sym_14 := 'GBPCAD', Sym_15 := 'CADJPY', Sym_16 := 'CHFJPY', Sym_17 := 'NZDJPY', Sym_18 := 'AUDNZD', Sym_19 := 'USDSEK' , Sym_20 := 'USDNOK'
'Stock' => Sym_1 := 'NVDA' , Sym_2 := 'AAPL' , Sym_3 := 'GOOGL' , Sym_4 := 'GOOG' , Sym_5 := 'META' , Sym_6 := 'MSFT' , Sym_7 := 'AMZN' , Sym_8 := 'AVGO' , Sym_9 := 'TSLA' , Sym_10 := 'BRK.B'
,Sym_11 := 'UNH' , Sym_12 := 'V' , Sym_13 := 'JPM' , Sym_14 := 'WMT' , Sym_15 := 'LLY' , Sym_16 := 'ORCL', Sym_17 := 'HD' , Sym_18 := 'JNJ' , Sym_19 := 'MA' , Sym_20 := 'COST'
'Crypto' => Sym_1 := 'BTCUSD' , Sym_2 := 'ETHUSD' , Sym_3 := 'BNBUSD' , Sym_4 := 'XRPUSD' , Sym_5 := 'SOLUSD' , Sym_6 := 'ADAUSD' , Sym_7 := 'DOGEUSD' , Sym_8 := 'AVAXUSD' , Sym_9 := 'DOTUSD' , Sym_10 := 'TRXUSD'
,Sym_11 := 'LTCUSD' , Sym_12 := 'LINKUSD', Sym_13 := 'UNIUSD', Sym_14 := 'ATOMUSD', Sym_15 := 'ICPUSD', Sym_16 := 'ARBUSD', Sym_17 := 'APTUSD', Sym_18 := 'FILUSD', Sym_19 := 'OPUSD' , Sym_20 := 'USDT.D'
'Custom' => Sym_1 := Sym_1_C , Sym_2 := Sym_2_C , Sym_3 := Sym_3_C , Sym_4 := Sym_4_C , Sym_5 := Sym_5_C , Sym_6 := Sym_6_C , Sym_7 := Sym_7_C , Sym_8 := Sym_8_C , Sym_9 := Sym_9_C , Sym_10 := Sym_10_C
,Sym_11 := Sym_11_C, Sym_12 := Sym_12_C, Sym_13 := Sym_13_C, Sym_14 := Sym_14_C, Sym_15 := Sym_15_C, Sym_16 := Sym_16_C, Sym_17 := Sym_17_C, Sym_18 := Sym_18_C, Sym_19 := Sym_19_C , Sym_20 := Sym_20_C
= Corr.Data_Matrix(Corr_period, Sym_1 ,Sym_2 ,Sym_3 ,Sym_4 ,Sym_5 ,Sym_6 ,Sym_7 ,Sym_8 ,Sym_9 ,Sym_10,Sym_11,Sym_12,Sym_13,Sym_14,Sym_15,Sym_16,Sym_17,Sym_18,Sym_19,Sym_20)
Loop through or index into this array to retrieve each correlation value for your custom layout or logic.
Pass each correlation value to CorrelationColor() to get the corresponding gradient background color, which reflects the correlation’s strength and direction (negative to positive).
For example :
Corr.CorrelationColor(SYM_3_10)
Pass the same correlation value to CorrelationTextColor() to get the correct text color for readability against that background.
For example :
Corr.CorrelationTextColor(SYM_1_1)
Use these colors in a table or label to render your own heatmap or any other visualization you need.
FunctionADF
Library "FunctionADF"
Augmented Dickey-Fuller test (ADF), The ADF test is a statistical method used to assess whether a time series is stationary – meaning its statistical properties (like mean and variance) do not change over time. A time series with a unit root is considered non-stationary and often exhibits non-mean-reverting behavior, which is a key concept in technical analysis.
Reference:
-
- rtmath.net
- en.wikipedia.org
adftest(data, n_lag, conf)
Augmented Dickey-Fuller test for stationarity.
Parameters:
data (array) : Data series.
n_lag (int) : Maximum lag.
conf (string) : Confidence Probability level used to test for critical value, (`90%`, `95%`, `99%`).
Returns: `adf` The test statistic.
`crit` Critical value for the test statistic at the 10 % levels.
`nobs` Number of observations used for the ADF regression and calculation of the critical values.
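A hedged usage sketch, assuming an import alias `fadf` (publisher path omitted) and the tuple return listed above:
// Hypothetical usage - replace <publisher> with the actual import path.
// import <publisher>/FunctionADF/1 as fadf
var array<float> window = array.new<float>()
array.push(window, close)
if array.size(window) > 100
    array.shift(window)
// [adfStat, critValue, nObs] = fadf.adftest(window, 1, "95%")
// A test statistic below the critical value rejects the unit-root null, i.e. the series is stationary.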
AnnualizedReturnCalculator
Library "AnnualizedReturnCalculator"
TODO: add library description here
calculateAnnualizedReturn(isStartTime, enableLog)
Parameters:
isStartTime (bool) : boolean flag marking the strategy start time
enableLog (bool) : whether to output a log
Returns:
the position-based benchmark annualized return, the capital-based benchmark annualized return, the total return, and the average capital employed
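For orientation, a hedged sketch of a generic annualization formula; the library's actual basis (365 vs. 252 days, position vs. capital benchmark) is not documented here:
// Hedged sketch: compound a total return over the holding period to a yearly rate.
annualized_return_sketch(float totalReturn, float daysHeld) =>
    math.pow(1 + totalReturn, 365.0 / daysHeld) - 1
// e.g. annualized_return_sketch(0.10, 180) ≈ 0.213 (a 10% gain over roughly six months)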
LiliALHUNTERSystem_v2
📚 **Library: LiliALHUNTERSystem_v2**
This library provides a powerful target management system for Pine Script developers.
It includes advanced calculators for EMA, RMA, and Supertrend, and introduces a central `createTargets()` function to dynamically render target lines and labels based on long/short trade logic.
🛠️ **Main Features:**
– Dynamic horizontal & vertical target lines
– Dual target configuration (Target 1 & Target 2)
– Directional logic via `isLong1`, `isLong2`
– Integrated Supertrend validation
– Visual dashboard and label display
– Works seamlessly with custom indicators
🎯 **Purpose:**
The `LiliALHUNTERSystem_v2` Library enables Pine coders to manage and visualize targets consistently across all trading strategies and indicators. It simplifies target logic while maintaining visual clarity and modular usage.
⚠️ **Disclaimer:**
This script is intended for educational and analytical purposes only. It does not constitute financial advice.
Library "LiliALHUNTERSystem_v2"
ema_calc(len, source)
Parameters:
len (simple int)
source (float)
rma_calc(len, source)
Parameters:
len (simple int)
source (float)
supertrend_calc(length, factor)
Parameters:
length (simple int)
factor (float)
createTargets(config, state, source1A, source1B, source2A, source2B)
Parameters:
config (TargetConfig)
state (TargetState)
source1A (float)
source1B (float)
source2A (float)
source2B (float)
showDashboard(state, dashLoc, textSize)
Parameters:
state (TargetState)
dashLoc (string)
textSize (string)
TargetConfig
Fields:
enableTarget1 (series bool)
enableTarget2 (series bool)
isLong1 (series bool)
isLong2 (series bool)
target1Condition (series string)
target2Condition (series string)
target1Color (series color)
target2Color (series color)
target1Style (series string)
target2Style (series string)
distTarget1 (series float)
distTarget2 (series float)
distOptions1 (series string)
distOptions2 (series string)
showLabels (series bool)
showDash (series bool)
TargetState
Fields:
target1LineV (series line)
target1LineH (series line)
target2LineV (series line)
target2LineH (series line)
target1Lbl (series label)
target2Lbl (series label)
target1Active (series bool)
target2Active (series bool)
target1Value (series float)
target2Value (series float)
countTargets1 (series int)
countTgReached1 (series int)
countTargets2 (series int)
countTgReached2 (series int)
FastMetrix
Library "FastMetrix"
This is a library I've been tweaking and working with for a while, and I find it useful for getting valuable technical analysis metrics faster (which is why it's called FastMetrix). A lot of it is personal to my trading style, so sorry if it does not have everything you want. The way I get my variables from library to script is by copying the return function into my new script.
TODO: Volatility and short term price analysis functions
slope(source, smoothing)
Parameters:
source (float)
smoothing (int)
integral(topfunction, bottomfunction, start, end)
Parameters:
topfunction (float)
bottomfunction (float)
start (int)
end (int)
deviation(x, y)
Parameters:
x (float)
y (float)
getema(len)
TODO: return important exponential long term moving averages and derivatives/variables
Parameters:
len (simple int)
getsma(len)
TODO: return requested sma
Parameters:
len (int)
kc(mult, len)
TODO: Return Keltner Channels variables and calculations
Parameters:
mult (simple float)
len (simple int)
bollinger(len, mult)
TODO: returns bollinger bands with optimal settings
Parameters:
len (int)
mult (simple float)
volatility(atrlen, smoothing)
TODO: Returns volatility indicators based on atr
Parameters:
atrlen (simple int)
smoothing (int)
premarketfib()
countinday(xcondition)
Parameters:
xcondition (bool)
countinsession(condition, n)
Parameters:
condition (bool)
n (int)