Hybrid Triple Exponential Smoothing
🙏🏻 TV, I present to you HTES, aka Hybrid Triple Exponential Smoothing: designed by Holt & Winters in the US, assembled by me in Saint P. I apply exponential smoothing individually to the data itself, then to the residuals from the fitted values, and lastly to the one-point forecast (OPF) errors, hence 'hybrid'. At the same time, the method is a closed-form solution and purely online: no need to recalculate or optimize anything, so the method is O(1).
^^ historical OPFs and one-point forecasting interval plotted instead of fitted values and prediction interval
Before the how-to, let me first tell you some non-obvious things about Triple Exponential Smoothing (and about exponential smoothing in general) that not many catch. Expo smoothing seems very straightforward and obvious, but if you look deeper...
1) The whole point of exponential smoothing is its incremental/online nature, and its O(1) algorithm complexity, making it dope for high-frequency streaming data that is also univariate and has no weights. Consequently:
- Any hybrid models that involve expo smoothing and any type of ML models like gradient boosting applied to residuals rarely make much sense business-wise: if you have resources to boost the residuals, you prolly have resources to use something instead of expo smoothing;
- The same concerns the fashion of using optimizers to pick smoothing parameters: honestly, with this approach you'd have to retrain on each datapoint, which is crazy in a streaming context. And if you're not in a streaming context, why expo smoothing at all? What makes more sense is either picking smoothing parameters once, guided by exogenous info, or using dynamic ones calculated in a minimalistic and elegant way (more on that in further drops).
2) No matter how 'right' you choose the smoothing parameters, all the resulting components (level, trend, seasonal) are not pure; each of them contains a bit of info from the other components. This is just how non-sequential expo smoothing works, and you gotta know it if you wanna use expo smoothing to decompose your time series into separate components. The only pure component there, lol, is the residuals;
3) Given what I've just said, treating the level (which partially contains the trend and seasonal components) as the resulting fit is a mistake. The resulting fit is level (l) + trend (b) + seasonal (s), and from this fit you calculate residuals (see the sketch below this list);
4) The residuals component is not some kind of bad thing; it is simply the component that contains info you consciously decide not to include in your model for whatever reason;
5) Forecasting errors and residuals from fitted values are 2 different things: the former are the deltas between the forecasts you've made and the actual values you've observed; the latter are simply the differences between actual datapoints and in-sample fitted values;
6) Residuals are used for in-sample prediction intervals, errors for out-of-sample forecasting intervals;
7) Choosing between single, double, or triple expo smoothing should not be based exclusively on the nature of your data, but on what you need to do as well. For example:
- If you have trending seasonal data and you wanna do forecasting exclusively within the expo smoothing framework, then yes, you need Triple Exponential Smoothing;
- If you wanna use prediction intervals for generating trend-trading signals and you disregard seasonality, then you need single (simple) expo smoothing, even on trending data. Otherwise, the trend component will be included in your model's fitted values → prediction intervals.
8) Kinda obvious, but still: when you set one smoothing parameter to zero, you basically disregard that component. E.g., in triple expo smoothing, when you set gamma and beta to zero, you basically end up with single exponential smoothing.
^^ data smoothing, beta and gamma zeroed out, forecasting steps = 0
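To make points 1, 3, and 8 concrete, here is a minimal additive Holt-Winters sketch in Pine v5. It's my illustration, not the published script; the parameter defaults and the season length m are assumptions:

//@version=5
indicator("Additive Holt-Winters sketch")

// smoothing parameters; setting beta = gamma = 0 collapses this to single expo smoothing
alpha = input.float(0.2,  "alpha (level)",    minval = 0, maxval = 1)
beta  = input.float(0.05, "beta (trend)",     minval = 0, maxval = 1)
gamma = input.float(0.1,  "gamma (seasonal)", minval = 0, maxval = 1)
m     = input.int(24, "season length")  // assumed season length in bars

var float l = close                    // level state
var float b = 0.0                      // trend state
var seas  = array.new_float(m, 0.0)    // m seasonal states

s     = array.get(seas, bar_index % m) // seasonal component due on this bar
fit   = l + b + s                      // the fit is l + b + s, not the level alone
resid = close - fit                    // residual: actual minus in-sample fit

// O(1) online updates: each new bar touches a fixed number of values,
// no refitting, no optimizer
lPrev = l
l := alpha * (close - s) + (1 - alpha) * (l + b)
b := beta * (l - lPrev) + (1 - beta) * b
array.set(seas, bar_index % m, gamma * (close - l) + (1 - gamma) * s)

plot(fit,   "fit",      color.teal)
plot(resid, "residual", color.gray)

With beta and gamma at zero, b and the seasonal states never move, so the level update reduces to l := alpha * close + (1 - alpha) * l, i.e. plain single exponential smoothing.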
About the implementation
* I use a simple power transform that reduces to a log transform at lambda = 0, instead of the mainstream-used transformers (if you set lambda to 2 in Box-Cox, you won't get a power-of-2 transform); see the sketch below this list
* Separate set of smoothing parameters for data, residuals, and errors smoothing
* Separate band multipliers for residuals and errors
* Both the typical error and the typical residual get multiplied by math.sqrt(math.pi / 2) in order to approach the standard deviation: for normally distributed values the mean absolute deviation equals sigma * math.sqrt(2 / math.pi), so this factor recovers ~sigma and you can ~use Z values and get more or less corresponding probabilities
* In script settings → style, you can switch on/off plotting of many things that get calculated internally:
- You can visualize separate components (just remember they are not pure);
- You can switch off fit and switch on OPF plotting;
- You can plot residuals and their exponentially smoothed typical value to pick the smoothing parameters for both data and residuals;
- Or you might plot errors and play with data smoothing parameters to minimize them (consult SAE aka Sum of Absolute Errors plot);
^^ nuff said
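For the transform and the sqrt(pi/2) bullets above, here's my hedged reading as a sketch; the function names are mine, not the script's:

//@version=5
indicator("Transform sketch")

lambda = input.float(0.0, "lambda")

// plain power transform: lambda = 0 gives a log transform, lambda = 2 a true
// power-of-2 transform (Box-Cox at lambda = 2 would give (x^2 - 1) / 2 instead)
f_transform(float x, float lam) =>
    lam == 0.0 ? math.log(x) : math.pow(x, lam)

// typical absolute residual/error -> approximate standard deviation:
// for a normal distribution E|X - mu| = sigma * sqrt(2 / pi), so multiplying
// by sqrt(pi / 2) recovers ~sigma, letting you use Z values for probabilities
f_sigmaFromTypical(float typicalAbs) =>
    typicalAbs * math.sqrt(math.pi / 2)

plot(f_transform(close, lambda))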
More ideas on how to use the thing
1) Use Double Exponential Smoothing (data gamma = 0) to detrend your time series for further processing (Fourier likes at least weakly stationary data); see the sketch after this list;
2) Put single expo smoothing on your strategy/subaccount equity chart (data beta = data gamma = 0), set the prediction interval deviation multiplier to 1, run your strat live on a simulator, start executing on the real market when the simulated equity hits the upper deviation (prediction interval), and stop trading if it hits the lower deviation. Basically, let the strat always run on the simulator, but send real orders to the real market only while the strat is successful on the simulator;
3) Set up the model to minimize one-point forecasting errors, put error forecasting steps to 1, now you're doing nowcasting;
4) Forecast noisy trending sine waves for fun.
^^ nuff said 2
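Idea 1 sketched in Pine (Holt's linear method, i.e. the data gamma zeroed out; the parameter defaults are my assumptions):

//@version=5
indicator("Detrend via double expo smoothing")

alpha = input.float(0.1,  "alpha (level)")
beta  = input.float(0.05, "beta (trend)")

var float l = close  // level state
var float b = 0.0    // trend state

lPrev = l
l := alpha * close + (1 - alpha) * (l + b)  // level update
b := beta * (l - lPrev) + (1 - beta) * b    // trend update

// subtract the full fit (level + trend), not just the level,
// to get a detrended, closer-to-stationary series for e.g. Fourier analysis
plot(close - (l + b), "detrended")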
All Good TV ∞
EchoMorphicAverage
Library "EchoMorphicAverage"
An original self-referencing moving average which references its own output against itself and the incoming source to dynamically alter smoothness and length internally per calculation cycle.
@kaigouthro
Inputs are float length series.
Contact me for more dynamic float-length indicators.
wema(src, mod, len)
Weighted Echo-Morphic Average
Parameters:
src : (float) input value
mod : (float) modifier(0-1) mix of current value
len : (float) length
Returns: output processed smoothed value
wemaStack(src, mod, len)
Stacked Multipass Weighted Echo-Morphic Average
Parameters:
src : (float) input value
mod : (float) modifier(0-1) mix of current value
len : (float) length
Returns: output processed smoothed value
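A hypothetical usage sketch; the import path and version number below are assumptions, so check the library's page for the real ones:

//@version=5
indicator("wema demo", overlay = true)

// assumed import; replace the version with the actual published one
import kaigouthro/EchoMorphicAverage/1 as eco

mod = input.float(0.5,  "mix of current value", minval = 0, maxval = 1)
len = input.float(21.0, "length")

plot(eco.wema(close, mod, len),      "wema")
plot(eco.wemaStack(close, mod, len), "stacked wema", color.orange)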
Range Adaptive EMA Float Series Input
Uses range and change distance on arrays to allow for more control, as well as any choice of input value as a controller for how tightly it grips the input signal.
Function - Sequence From Series
Function to create an array from a sample taken from a series (e.g. close, hlc3...).
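The general pattern could look like this minimal sketch (my illustration of the idea, not the published function; f_sequence and sampleSize are made-up names):

//@version=5
indicator("Sequence from series sketch")

sampleSize = input.int(20, "sample size")

// copy the last n values of any series (close, hlc3, ...) into an array
f_sequence(float src, int n) =>
    seq = array.new_float()
    for i = n - 1 to 0  // Pine counts down automatically when from > to
        array.push(seq, src[i])  // oldest sampled value first (na on early bars)
    seq

seq = f_sequence(hlc3, sampleSize)
plot(array.avg(seq), "average of the sampled sequence")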
Example - Switching Line
Example of manipulating a float series to:
• switch from one source to another
• maintain a level by referencing itself
This script publication is intended for:
• Educational Purposes
Who is it for?
Anyone who wants to learn how to change the position or state of an active float series.
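A minimal sketch of both bullets (the names and switching conditions are my assumptions; the published example may differ):

//@version=5
indicator("Switching line sketch", overlay = true)

useHigh = input.bool(false, "switch source to high")

// switch between two sources on up candles, and otherwise maintain the level
// by referencing the series' own previous value
float lvl = na
lvl := close > open ? (useHigh ? high : close) : nz(lvl[1], close)

plot(lvl, "switching line")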
Min/Max Value Multiple Series Function
Trying different solutions to find the minimum/maximum value in a set of predefined series.
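One straightforward solution uses the built-ins, sketched below (the particular set of series is my example):

//@version=5
indicator("Min/max over multiple series sketch")

// nesting two-argument calls always works...
lo2 = math.min(math.min(close, open), hl2)
// ...and Pine v5's math.min / math.max also accept several arguments directly
lo = math.min(close, open, hl2)
hi = math.max(close, open, hl2)

plot(hi,  "max",          color.green)
plot(lo,  "min",          color.red)
plot(lo2, "min (nested)", color.maroon)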
Back to zero: Understanding series
type: pine series basic example
time required: 10 minutes
level: medium (need to know the "array" data variable as a generic programming concept, basic Pine syntax)
tl;dr how variables and series work in Pine
Pine is an array/vector language. That twists how it behaves and how we have to think about it, and a lot of misunderstandings come from forgetting this fact. This example tries to clear up that concept.
First, you need to know what an array is and how it works in a programming language. Having JavaScript under your belt helps too. If you don't, googling "javascript array basic tutorial" is your friend :)
So, in Pine, arrays are called "series". Every variable is an array with values for each candle in the chart. If we do:
myVar = true
this is not a constant. It is a series of values for each candle, { true, true,....., true }
In practice, the result is the same, but we can access each of the values in the series, like myVar[0], myVar[7], myVar[anyNumber]....
Again, it is not a constant, since you can access/modify each value individually
So, let's show it:
plot(myVar, color = gray)
This plots a horizontal line at value 1 (1 is equal to true), so it's all good.
On to a more usual series:
tipicalSeries = close > open ? true : false
plot(tipicalSeries, color = blue)
This gives the expected result, a typical up-and-down line with values at 1 or 0. Naturally, "tipicalSeries" is an array; the "ups" and "downs" are all stored under the same variable, indexed by the candles.
In Pine, the ZERO position in the array is the last one, which corresponds to the last candle on the right. Say you have a chart with 12 candles. close[11] would be the closing value of what we intuitively think of as the first candle, the one on the left. Then close[10]... and so on... until close[0], the value of the "last" candle, the one on the right. It actually helps to start thinking of the positions backwards, counting down to zero, rocket-launch style :)
And back to our series: myVar will also be the same size, from myVar[11] to myVar[0].
When we do some operation with them, something simple like
if (myVar == tipicalSeries)
what is really happening is that, internally, Pine is checking each of the indexes, as in myVar[11] == tipicalSeries[11], myVar[10] == tipicalSeries[10], .... myVar[0] == tipicalSeries[0]
And we can store that stuff to check it. Simply:
result = (myVar == tipicalSeries) ? true : false //yes, this is the same as tipicalSeries, but we're not in a boolean logic tut ;)
plot (result)
The reason we can plot the result is that it is an array, not a single value. The example indicator I provide shows a plot where the values are obtained from different places in the array, this line here:
mySeries3 = mySeries2[1] and mySeries1[1]
this creates a series that is the result of the PREVIOUS values stored (the zero index is the one most at the right, or the "current" one), which here just causes a shift in the plotted line by one candle.
Go ahead, grab a copy of my code, try to change the indexes and see the results. Understanding this stuff is critical to go deeper into Pine :)
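To wrap up, here is a minimal self-contained version of the tutorial's snippets (the mySeries1/mySeries2 definitions are my guesses; the attached indicator may define them differently):

//@version=5
indicator("Understanding series demo")

myVar = true                                 // a series: { true, true, ..., true }
tipicalSeries = close > open ? true : false  // up/down series, one value per candle

result = myVar == tipicalSeries ? 1 : 0      // compared bar by bar under the hood

mySeries1 = close > open
mySeries2 = close[1] > open[1]
mySeries3 = mySeries2[1] and mySeries1[1]    // previous values -> line shifts one candle

plot(result, "result")
plot(mySeries3 ? 1 : 0, "shifted", color.orange)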
Substratum Module [snowsilence]
This module is meant to act as a framework and platform over which to develop other indicators. On its own it does essentially nothing, yet it immediately simplifies the work of adding basic customizations and flexibility to ideas. The chart on this post is not a demo, so it's better to just try adding the indicator to a test chart — you may find it more convenient to set "overlay=true" in the study header — and look into the settings for an intuitive sense of its purpose.
Please build off of this, let me know if you find it useful, and credit/reference me where it seems reasonable. Feedback is always appreciated!