PINE LIBRARY
DafeRLMLLib

DafeRLMLLib: The Reinforcement Learning & Machine Learning Engine
This is not an indicator. This is an artificial intelligence. A state-based, self-learning engine designed to bring the power of professional quantitative finance to the Pine Script ecosystem. Welcome to the next frontier of trading analysis.
█ CHAPTER 1: THE PHILOSOPHY - FROM STATIC RULES TO DYNAMIC LEARNING
Technical analysis has, for a century, been a discipline of static, human-defined rules. "If RSI is below 30, then buy." "If the 50 EMA crosses the 200 EMA, then sell." These are fixed heuristics. They are brittle. They fail to adapt to the market's ever-changing personality—its shifts between trend and range, high and low volatility, risk-on and risk-off sentiment. An indicator built on static rules is an automaton, destined to fail when the environment it was designed for inevitably changes.
The DafeRLMLLib was created to shatter this paradigm. It is not a tool with fixed rules; it is a framework for discovering optimal rules. It is a true Reinforcement Learning (RL) and Machine Learning (ML) engine, built from the ground up in Pine Script. Its purpose is not to follow a pre-programmed strategy, but to learn a strategy through trial, error, and feedback.
This library provides a complete, professional-grade toolkit for developers to build indicators that think, adapt, and evolve. It observes the market state, selects an action, receives a reward signal based on the outcome, and updates its internal "brain" to improve its future decisions. This is not just a step forward; it is a quantum leap into the future of on-chart intelligence.
█ CHAPTER 2: THE CORE INNOVATIONS - WHAT MAKES THIS A TRUE ML ENGINE?
This library is not a collection of simple moving averages labeled as "AI." It is a suite of genuine, academically recognized machine learning algorithms, adapted for the unique constraints and opportunities of the Pine Script environment.
Multi-Algorithm Architecture: You are not locked into one learning model. The library provides a choice of powerful RL algorithms:
Q-Learning with TD(λ) Eligibility Traces: A classic, robust algorithm for learning state-action values. We've enhanced it with eligibility traces (lambda), allowing the agent to more efficiently assign credit or blame to a sequence of past actions, dramatically speeding up the learning process (a conceptual sketch of this update appears after this feature list).
REINFORCE Policy Gradient with Baseline: A more advanced method that directly learns a "policy"—a probability distribution over actions—instead of just values. The baseline helps to stabilize learning by reducing variance.
Actor-Critic Architecture: The state-of-the-art. This hybrid model combines the best of both worlds. The "Actor" (the policy) decides what to do, and the "Critic" (the value function) evaluates how good that action was. The Critic's feedback is then used to directly improve the Actor's decisions.
Prioritized Experience Replay: Like a human, the AI learns more from surprising or significant events. Instead of learning from experiences in a simple chronological order, the library stores them in a ReplayBuffer. It then replays these memories to the learning algorithms, prioritizing experiences that resulted in a large prediction error. This makes learning incredibly efficient.
Meta-Learning & Self-Tuning: An AI that cannot learn how to learn is still a dumb machine. The MetaState module is a meta-learning layer that monitors the agent's own performance over time. If it detects that performance is degrading, it will automatically increase the learning rate ("Synaptic Plasticity"). If performance is improving, it will decrease the learning rate to stabilize the learned strategy. It tunes its own hyperparameters.
Catastrophic Forgetting Prevention: A common failure mode for simple neural networks is "catastrophic forgetting," where learning a new task completely erases knowledge of a previous one. This library includes mechanisms like soft_reset and L2 regularization to prevent the agent's learned weights from exploding or being wiped out by a single bad run of trades, ensuring more stable, long-term learning.
The Universal Socket Interface: How does the AI "see" the market? Through DataSockets. This brilliant, extensible interface allows a developer to connect any data series—an RSI, a volume metric, a volatility reading, a custom calculation—to the AI's "brain." Each socket normalizes its input, tracks its own statistics, and feeds into the state-building process. This makes the library universally adaptable to any trading idea.
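To make the Q-Learning mechanics concrete, here is a minimal, self-contained Pine Script sketch of tabular Q-learning with TD(λ) eligibility traces, the update rule that mode is built on. This is an illustration only, not the library's internal code: the RSI-based state bucketing, the reward definition, and every name in it (q_table, traces, and so on) are arbitrary choices made for the example, and a real agent would also explore rather than act purely greedily.

//@version=5
indicator("Q(lambda) concept sketch", overlay=false)

// Hyperparameters, mirroring the library's Alpha / Gamma / Lambda inputs
float alpha_  = 0.05  // learning rate
float gamma_  = 0.95  // discount factor
float lambda_ = 0.70  // eligibility-trace decay

// A tiny tabular problem: 4 discrete RSI regimes x 2 actions (0 = flat, 1 = long)
var float[] q_table = array.new_float(4 * 2, 0.0)
var float[] traces  = array.new_float(4 * 2, 0.0)
var int last_state  = na
var int last_action = na

// Observe: bucket RSI into one of 4 states
float rsi_val = nz(ta.rsi(close, 14), 50.0)
int   state   = int(math.min(3, rsi_val / 25.0))

// Reward for the action chosen on the previous bar: this bar's return if we were long
float reward = nz(last_action, 0) == 1 ? nz(close / close[1] - 1.0) * 100.0 : 0.0

// TD(lambda) update: delta = reward + gamma * max_a Q(next state, a) - Q(state, action)
if not na(last_state)
    int   idx    = last_state * 2 + last_action
    float q_sa   = array.get(q_table, idx)
    float q_next = math.max(array.get(q_table, state * 2), array.get(q_table, state * 2 + 1))
    float delta  = reward + gamma_ * q_next - q_sa
    // Bump the trace of the visited pair, then sweep every entry:
    // Q(s,a) += alpha * delta * e(s,a), then e(s,a) *= gamma * lambda
    array.set(traces, idx, array.get(traces, idx) + 1.0)
    for i = 0 to array.size(q_table) - 1
        array.set(q_table, i, array.get(q_table, i) + alpha_ * delta * array.get(traces, i))
        array.set(traces, i, array.get(traces, i) * gamma_ * lambda_)

// Act: greedy choice for the current state, then remember it for the next bar's update
int action = array.get(q_table, state * 2 + 1) > array.get(q_table, state * 2) ? 1 : 0
last_state  := state
last_action := action

plot(array.get(q_table, state * 2 + 1), "Q(current state, long)")

The two key lines are the TD error (delta) and the trace sweep: each reward is propagated to every recently visited state-action pair in proportion to its decaying trace, which is exactly the credit-assignment speed-up described above.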
█ CHAPTER 3: A DUAL-PURPOSE FRAMEWORK - MODES OF OPERATION
This library is a foundational component of the DAFE AI ecosystem, designed for ultimate flexibility. It can be used in two primary modes: as a powerful standalone intelligence, or as the core cognitive engine within a larger, bridged super-system. Understanding these modes is key to unlocking its full potential.
MODE 1: STANDALONE ENGINE OPERATION (Independent Power)
The DafeRLMLLib can be used entirely on its own to create a complete, self-learning trading indicator. This approach is perfect for building focused, single-purpose tools that are designed to master a specific task. In this mode, the developer is responsible for creating the full feedback loop within their own indicator script.
The Workflow:
Your indicator initializes the ML agent.
On each bar, it feeds the agent market data via the socket interface.
It asks the agent for an action (e.g., Buy, Sell, Hold).
Your script then executes its own internal trade logic based on the agent's decision.
Your script is responsible for tracking the Profit & Loss (PnL) of the resulting simulated trade.
When the trade is closed, your script feeds the final PnL directly back into the agent's learn() function as the "reward" signal.
The Result: A pure, state-based learning system. The agent directly learns the consequences of its own actions. This is excellent for discovering novel, micro-level trading patterns and for building indicators that are designed to operate with complete autonomy.
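In outline, and using only the library entry points shown in the Chapter 4 usage example, that standalone feedback loop looks like the sketch below. The import line is a placeholder (copy the actual import statement from the library's publication page), and the fully annotated version of steps 4 through 6 appears in Chapter 4.

//@version=5
indicator("Standalone RL loop (outline)", overlay=true)
import YourPublisher/DafeRLMLLib/1 as rl // placeholder: use the real import from the library page

// Step 1: initialize the agent once. Arguments follow the Chapter 4 template:
// algo=2 (Actor-Critic), 2 features, 3 actions, alpha, 54, replay size, epsilon, gamma, lambda, reward mode=1 (Normalized)
var rl.RLAgent agent = rl.init(2, 2, 3, 0.05, 54, 500, 0.15, 0.95, 0.7, 1)

// Step 2: connect the agent's "senses", one socket per feature (2 features here)
if barstate.isfirst
    agent := rl.connect_socket(agent, "rsi", ta.rsi(close, 14), "oscillator", 1.0)
    agent := rl.connect_socket(agent, "atr_norm", ta.atr(14) / close * 100, "custom", 0.8)

// Step 3: observe the market state and ask the agent for an action on every bar
rl.RLState current_state = rl.build_state(agent)
[ai_action, updated_agent] = rl.select_action(agent, current_state)
agent := updated_agent

// Steps 4-6: run your own trade logic on ai_action.action, track the trade's PnL yourself,
// and when the trade closes feed that PnL back to the agent as the reward, e.g.:
// agent := rl.learn(agent, entry_state_hash, entry_action, pnl_reward, current_state, true)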
MODE 2: BRIDGED SUPER-SYSTEM OPERATION (Synergistic Intelligence)
This is the pinnacle of the DAFE ecosystem. In this advanced mode, the DafeRLMLLib acts as the core "cognitive engine" or the "tactical brain" within a larger, multi-library system. It can be fused with a strategic portfolio management engine (like the DafeSPALib) via a master communication protocol (the DafeMLSPABridge).
The Workflow:
The ML engine (this library) generates a set of creative, state-based proposals or predictions.
The Bridge Library translates these proposals into a portfolio of micro-strategies.
The SPA (Strategy Portfolio Allocation) engine, acting as a high-level manager, analyzes the real-time performance of these micro-strategies and selects the one it trusts the most. This becomes the final decision. The PnL from that final, performance-vetted decision is then routed back through the Bridge as a highly qualified reward signal for the ML engine.
The Result: A hybrid intelligence that is more robust and adaptive than either system alone. The ML engine provides tactical creativity, while the SPA engine provides ruthless, strategic, performance-based oversight. The ML proposes, the SPA disposes, and the ML learns from the SPA's wisdom. This creates a system of checks, balances, and continuous, synergistic learning, perfect for building an ultimate, all-in-one "drawing indicator" or trading system.
As a developer, the choice is yours. Use this library independently to build powerful, specialized learning tools, or use it as the foundational brain for a truly comprehensive trading AI.
█ CHAPTER 4: A GUIDE FOR DEVELOPERS - INTEGRATING THE BRAIN
We have made it incredibly simple to bring your indicators to life with the DAFE AI. This is the true purpose of the library—to empower you. This section provides the full, unabridged input template and usage guide.
[u]PART I: THE INPUTS TEMPLATE[/u]
To give your users full control over the AI, copy this entire block of inputs into your indicator script. It is professionally organized with groups and detailed tooltips.
// ╔═════════════════════════════════════════════════════╗
// ║ INPUTS TEMPLATE (COPY INTO YOUR SCRIPT) ║
// ╚═════════════════════════════════════════════════════╝
// INPUT GROUPS
string G_RL_AGENT = "═══════════ 🧠 AGENT CONFIGURATION ════════════"
string G_RL_LEARN = "═══════════ 📚 LEARNING PARAMETERS ═══════════"
string G_RL_REWARD = "═══════════ 💰 REWARD SYSTEM ═══════════════"
string G_RL_REPLAY = "═══════════ 📼 EXPERIENCE REPLAY ════════════"
string G_RL_META = "═══════════ 🔮 META-LEARNING ═══════════════"
string G_RL_DASH = "═══════════ 📋 DIAGNOSTICS DASHBOARD ═════════"
// AGENT CONFIGURATION
string i_rl_algorithm = input.string("Actor-Critic", "🤖 Algorithm",
  options=["Q-Learning", "Policy Gradient", "Actor-Critic", "Ensemble"], group=G_RL_AGENT,
  tooltip="Selects the core learning algorithm.\n\n" +
  "• Q-Learning: Classic, robust, and fast for discrete states. Learns the 'value' of actions.\n" +
  "• Policy Gradient: Learns a direct probability distribution over actions.\n" +
  "• Actor-Critic: The state-of-the-art. The 'Actor' decides, the 'Critic' evaluates.\n" +
  "• Ensemble: Runs both Q-Learning and Policy Gradient and chooses the action with the highest confidence.\n\n" +
  "RECOMMENDATION: Start with 'Q-Learning' for stability or 'Actor-Critic' for performance.")
int i_rl_num_features = input.int(8, "Number of Features (Sockets)", minval=2, maxval=12, group=G_RL_AGENT,
  tooltip="Defines the size of the AI's 'vision'. This MUST match the number of sockets you connect.")
int i_rl_num_actions = input.int(3, "Number of Actions", minval=2, maxval=5, group=G_RL_AGENT,
  tooltip="Defines what the AI can do. 3 is standard (0=Neutral, 1=Buy, 2=Sell).")
// LEARNING PARAMETERS
float i_rl_learning_rate = input.float(0.05, "🎓 Learning Rate (Alpha)", minval=0.001, maxval=0.2, step=0.005, group=G_RL_LEARN,
  tooltip="How strongly the AI updates its knowledge. Low (0.01-0.03) is stable. High (0.1+) is aggressive.")
float i_rl_discount = input.float(0.95, "🔮 Discount Factor (Gamma)", minval=0.8, maxval=0.99, step=0.01, group=G_RL_LEARN,
  tooltip="Determines the agent's 'foresight'. High (0.95+) for trend following. Low (0.85) for scalping.")
float i_rl_epsilon = input.float(0.15, "🧭 Exploration Rate (Epsilon)", minval=0.01, maxval=0.5, step=0.01, group=G_RL_LEARN,
  tooltip="For Q-Learning. The probability of taking a random action to explore. Decays automatically over time.")
float i_rl_lambda = input.float(0.7, "⚡ Eligibility Trace (Lambda)", minval=0.0, maxval=0.95, step=0.05, group=G_RL_LEARN,
  tooltip="For Q-Learning. A powerful accelerator that allows a reward to be 'traced' back through a sequence of actions.")
// REWARD SYSTEM
string i_rl_reward_mode = input.string("Normalized", "💰 Reward Shaping Mode",
  options=["Raw PnL", "Normalized", "Asymmetric", "Risk-Adjusted"], group=G_RL_REWARD,
  tooltip="Modifies the raw PnL reward signal to guide learning.\n\n" +
  "• Normalized: Creates a stable reward signal (Recommended).\n" +
  "• Asymmetric: Punishes losses more than it rewards gains. Teaches risk aversion.\n" +
  "• Risk-Adjusted: Divides PnL by risk (e.g., ATR). Teaches better risk/reward.")
// EXPERIENCE REPLAY
bool i_rl_use_replay = input.bool(true, "📼 Enable Experience Replay", group=G_RL_REPLAY,
  tooltip="Allows the agent to store and re-learn from past experiences. Dramatically improves learning stability. HIGHLY RECOMMENDED.")
int i_rl_replay_capacity = input.int(500, "Replay Buffer Size", minval=100, maxval=2000, group=G_RL_REPLAY)
int i_rl_replay_batch = input.int(4, "Replay Batch Size", minval=1, maxval=10, group=G_RL_REPLAY)
// META-LEARNING
bool i_rl_use_meta = input.bool(true, "🔮 Enable Meta-Learning", group=G_RL_META,
  tooltip="Allows the agent to self-tune its own learning rate based on performance trends.")
// DIAGNOSTICS DASHBOARD
bool i_rl_show_dash = input.bool(true, "📋 Show Diagnostics Dashboard", group=G_RL_DASH)
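The reward-shaping modes above are applied inside the library once the chosen mode is passed to rl.init(), but it helps to see what they mean in practice. The sketch below only illustrates the ideas described in the tooltips (asymmetric loss penalties, ATR-scaled risk adjustment, bounded normalization); it is not the library's internal code, and the multipliers and divisors are arbitrary example values.

//@version=5
indicator("Reward shaping (conceptual sketch)")

// Raw PnL of a hypothetical trade closed on this bar, in percent
float raw_pnl = (close - close[20]) / close[20] * 100

// Asymmetric: losses hurt more than equal-sized gains reward, which teaches risk aversion
float asymmetric_reward = raw_pnl >= 0 ? raw_pnl : raw_pnl * 1.5

// Risk-Adjusted: divide PnL by the risk taken, here proxied by ATR as a percent of price
float atr_pct = ta.atr(14) / close * 100
float risk_adjusted_reward = atr_pct > 0 ? raw_pnl / atr_pct : 0.0

// Normalized: squash the signal into a bounded range so one outlier trade cannot dominate learning
float normalized_reward = math.max(-1.0, math.min(1.0, raw_pnl / 5.0))

plot(asymmetric_reward, "Asymmetric")
plot(risk_adjusted_reward, "Risk-Adjusted")
plot(normalized_reward, "Normalized")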
[u]PART II: THE IMPLEMENTATION LOGIC[/u]
This is the boilerplate code you will adapt to your indicator. It shows the complete Observe-Act-Learn loop.
// ╔═══════════════════════════════════════════════════════╗
// ║ USAGE EXAMPLE (ADAPT TO YOUR SCRIPT) ║
// ╚═══════════════════════════════════════════════════════╝
// 1. INITIALIZE THE AGENT (happens only on the first bar)
int algo_id = i_rl_algorithm == "Q-Learning" ? 0 : i_rl_algorithm == "Policy Gradient" ? 1 : i_rl_algorithm == "Actor-Critic" ? 2 : 3
int reward_id = i_rl_reward_mode == "Raw PnL" ? 0 : i_rl_reward_mode == "Normalized" ? 1 : i_rl_reward_mode == "Asymmetric" ? 2 : 3
var rl.RLAgent agent = rl.init(algo_id, i_rl_num_features, i_rl_num_actions, i_rl_learning_rate, 54, i_rl_replay_capacity, i_rl_epsilon, i_rl_discount, i_rl_lambda, reward_id)
// 2. CONNECT THE "SENSES" (happens only on the first bar)
if barstate.isfirst
    // Connect your indicator's data series to the AI's sockets. The number MUST match 'i_rl_num_features'.
    agent := rl.connect_socket(agent, "rsi", ta.rsi(close, 14), "oscillator", 1.0)
    agent := rl.connect_socket(agent, "atr_norm", ta.atr(14)/close*100, "custom", 0.8)
    // ... connect all other features ...
// 3. THE MAIN LOOP (Observe -> Act -> Learn) - runs on every bar
var bool in_trade = false
var int trade_direction = 0
var float entry_price = 0.0
var int entry_bar = na // bar index of the current trade's entry (used by the example exit below)
var int last_state_hash = 0
var int last_action_taken = 0
// --- OBSERVE: Build the current market state ---
rl.RLState current_state = rl.build_state(agent)
// --- ACT: Ask the AI for a decision ---
[ai_action, updated_agent] = rl.select_action(agent, current_state)
agent := updated_agent // CRITICAL: Always update the agent state
// --- EXECUTE: Your custom trade logic goes here ---
if not in_trade and ai_action.action != 0 // Assuming 0 is "Hold"
    in_trade := true
    trade_direction := ai_action.action == 1 ? 1 : -1 // Assuming 1=Buy, 2=Sell
    entry_price := close
    entry_bar := bar_index
    last_state_hash := current_state.hash // Store the state at the moment of entry
    last_action_taken := ai_action.action
// --- LEARN: Check for trade closure and provide feedback ---
bool trade_is_closed = false
float reward = 0.0
if in_trade
    // Your custom exit condition here (e.g., stop loss, take profit, opposite signal).
    // This simple example exits the simulated trade 20 bars after entry.
    bool exit_condition = bar_index - entry_bar >= 20
    if exit_condition
        trade_is_closed := true
        float pnl = trade_direction == 1 ? (close - entry_price) / entry_price : (entry_price - close) / entry_price
        reward := pnl * 100
        in_trade := false
// If a trade was closed on THIS bar, feed the experience to the AI
if trade_is_closed
    agent := rl.learn(agent, last_state_hash, last_action_taken, reward, current_state, true)
// 4. DISPLAY DIAGNOSTICS
if i_rl_show_dash and barstate.islast
    string diag_text = rl.diagnostics(agent)
    label.new(bar_index, high, diag_text, style=label.style_label_down, color=color.new(#0A0A14, 10), textcolor=#00FF41, size=size.small, textalign=text.align_left)
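The 20-bar exit above is only a placeholder for the "custom exit condition" the comments mention. As one example of something more realistic, the fragment below swaps in an ATR-based stop-loss and take-profit; it reuses the entry_price and trade_direction variables already defined in the example, the 1.5 and 3.0 multipliers are arbitrary illustration values, and the reward calculation and rl.learn() call stay exactly as shown above.

// Compute ATR once per bar, at global scope (above the 'if in_trade' block), for consistency
float atr_val = ta.atr(14)

// Then, inside 'if in_trade', replace the 20-bar exit with a stop/target check
float stop_price     = trade_direction == 1 ? entry_price - 1.5 * atr_val : entry_price + 1.5 * atr_val
float target_price   = trade_direction == 1 ? entry_price + 3.0 * atr_val : entry_price - 3.0 * atr_val
bool  hit_stop       = trade_direction == 1 ? close <= stop_price : close >= stop_price
bool  hit_target     = trade_direction == 1 ? close >= target_price : close <= target_price
bool  exit_condition = hit_stop or hit_target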
█ DEVELOPMENT PHILOSOPHY
The DafeRLMLLib was born from a desire to push the boundaries of Pine Script and to empower the entire TradingView developer community. We believe that the future of technical analysis is not just in creating more complex algorithms, but in building systems that can learn, adapt, and optimize themselves. This library is an open-source framework designed to be a launchpad for a new generation of truly intelligent indicators on TradingView.
This library is designed to help you and your users discover what "the best trades" are, not by following a fixed set of rules, but by learning from the market's own feedback, one trade at a time.
█ DISCLAIMER & IMPORTANT NOTES
THIS IS A LIBRARY FOR ADVANCED DEVELOPERS: This script does nothing on its own. It is a powerful engine that must be integrated into other indicators.
REINFORCEMENT LEARNING IS COMPLEX: RL is not a magic bullet. It requires careful feature engineering (choosing the right sockets), a well-defined reward signal, and a sufficient amount of training data (trades) to converge on a profitable strategy.
ALL TRADING INVOLVES RISK: The AI's decisions are based on statistical probabilities learned from past data. It does not predict the future with certainty.
"The goal of a successful trader is to make the best trades. Money is secondary."
— Alexander Elder
Taking you to school. - Dskyz, Create with RL.
Empowering everyday traders and DAFE Trading Systems
DAFETradingSystems.com