**Step 1: Install Desktop Services and VNC**

First, you need to install desktop services and a VNC server to enable remote desktop access. Here, we will use xfce and TightVNC as examples. Execute the following commands in the terminal to install:

```shell
sudo apt update
sudo apt install xfce4 xfce4-goodies dbus-x11
sudo apt install tightvncserver
tightvncserver   # the first run prompts you to set the VNC password
```

Please note that TightVNC passwords are limited to 8 characters (longer input is truncated), so choose those 8 characters as securely as possible. The first session listens on port 5901 by default.

**Step 2: Connect to VNC and Install IB Gateway**

The default address is `vnc://IP Address:5901`; enter the password to log in. On Windows, please download and install a VNC client yourself.

Download page: https://www.interactivebrokers.com/en/trading/ibgateway-stable.php

Please use a tool similar to wget for downloading. If you can't find the corresponding version, please click on "Download for Other Operating Systems" on the page to search.

```shell
wget https://download2.interactivebrokers.com/installers/ibgateway/stable-standalone/ibgateway-stable-standalone-linux-x64.sh
```

If it's inconvenient to download within VNC, you can initiate a separate SSH download and then install it under the VNC desktop environment.

```shell
bash ibgateway-stable-standalone-linux-x64.sh
```

At this point the graphical installer can already be displayed. After installation, you can also launch IB Gateway manually by running `./ibgateway` in the installation directory.

After installation, log in and find the API option. Make sure to uncheck "Read-Only API". The API port number is also in the settings; please configure the exchange object accordingly with this port number.

When configuring the exchange, also set the Client ID. If you have multiple robots that need to connect, each must be given a different ID, as IB does not allow the same Client ID to connect more than once simultaneously.

It should be noted that, at the operating-system level, localhost and 127.0.0.1 are not necessarily the same network address (localhost may resolve to the IPv6 loopback `::1`); here we use localhost.
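A quick way to see the difference is to ask the resolver what `localhost` actually maps to; the port number 4001 below is only a placeholder, not a value from this article:

```python
import socket

# "localhost" goes through name resolution and may map to the IPv6
# loopback ::1, to 127.0.0.1, or to both, depending on /etc/hosts;
# "127.0.0.1" is always the IPv4 loopback address.
results = socket.getaddrinfo("localhost", 4001, proto=socket.IPPROTO_TCP)
for family, _, _, _, sockaddr in results:
    print(family, sockaddr)
```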

IB's market data requires a paid subscription. If you need real-time ticker and depth information, please subscribe for a fee, otherwise you can only receive delayed tickers.

This section uses the same data as the previous few articles, so it won't be repeated here.

Low-priced currencies usually refer to digital currencies with a low unit price. Their low prices make them more attractive to small investors: most people only see the many zeros in the price and pay little attention to market value. Every zero removed from the price means a tenfold increase, which is very appealing to some people, but it may also come with higher volatility and risk.

As usual, let's first look at the performance of the index, which had two bull markets, at the beginning and the end of the year. Every week we select the 20 lowest-priced currencies; the resulting index is very close to the overall index, indicating that low prices do not provide much additional return.

```python
h = 1
lower_index = 1
lower_index_list = [1]
lower_symbols = df_close.iloc[0].dropna().sort_values()[:20].index
lower_prices = df_close.iloc[0][lower_symbols]
date_list = [df_close.index[0]]
for row in df_close.iterrows():
    if h % 42 == 0:   # 42 four-hour bars = one week
        date_list.append(row[0])
        lower_index = lower_index * (row[1][lower_symbols] / lower_prices).mean()
        lower_index_list.append(lower_index)
        lower_symbols = row[1].dropna().sort_values()[:20].index
        lower_prices = row[1][lower_symbols]
    h += 1
pd.DataFrame(data=lower_index_list, index=date_list).plot(figsize=(12,5), grid=True);
total_index.plot(figsize=(12,5), grid=True);  # overall index
```

Since circulating supply changes constantly, market value here is calculated using total supply, with data sourced from CoinMarketCap (those who need it can apply for an API key). The 1000 currencies with the highest market value were selected; due to naming differences and unknown total supplies, we obtained 205 currencies that overlap with Binance perpetual contracts.

```python
import requests

def get_latest_crypto_listings(api_key):
    url = "https://pro-api.coinmarketcap.com/v1/cryptocurrency/listings/latest?limit=1000"
    headers = {
        'Accepts': 'application/json',
        'X-CMC_PRO_API_KEY': api_key,
    }
    response = requests.get(url, headers=headers)
    if response.status_code == 200:
        return response.json()
    else:
        return f"Error: {response.status_code}"

# Use your API key
api_key = "xxx"
coin_data = get_latest_crypto_listings(api_key)
supplys = {d['symbol']: d['total_supply'] for d in coin_data['data']}
# total_supply can be null, so filter out missing values before comparing
include_symbols = [s for s in list(df_close.columns) if s in supplys and supplys[s] and supplys[s] > 0]
```

An index is drawn from the 10 cryptocurrencies with the lowest market value each week, and compared with the overall index. It can be seen that small-cap cryptocurrencies performed slightly better than the overall index in the bull market at the beginning of the year. However, they started to rise ahead of time during September-October's sideways movement, and their final increase far exceeded that of the total index.

Small-cap cryptocurrencies are often considered to have higher growth potential: because their market values are low, even relatively small inflows of funds can cause significant price changes. This potential for high returns attracts the attention of investors and speculators. When a bottoming market starts to stir, small-cap currencies often take off first because there is less resistance to a rise, and this may even signal that a broad bull market is about to begin.

```python
df_close_include = df_close[include_symbols]
df_norm = df_close_include / df_close_include.fillna(method='bfill').iloc[0]  # Normalization
total_index = df_norm.mean(axis=1)
h = 1
N = 10
lower_index = 1
lower_index_list = [1]
lower_symbols = df_close_include.iloc[0].dropna().multiply(pd.Series(supplys)[include_symbols], fill_value=np.nan).sort_values()[:N].index
lower_prices = df_close_include.iloc[0][lower_symbols]
date_list = [df_close_include.index[0]]
for row in df_close_include.iterrows():
    if h % 42 == 0:   # weekly rebalancing
        date_list.append(row[0])
        lower_index = lower_index * (row[1][lower_symbols] / lower_prices).mean()
        lower_index_list.append(lower_index)
        lower_symbols = row[1].dropna().multiply(pd.Series(supplys)[include_symbols], fill_value=np.nan).sort_values()[:N].index
        lower_prices = row[1][lower_symbols]
    h += 1
pd.DataFrame(data=lower_index_list, index=date_list).plot(figsize=(12,5), grid=True);
total_index.plot(figsize=(12,5), grid=True);
```

Through data analysis, this article found that low-priced currencies did not provide additional returns and performed close to the market index, while small-market-cap currencies significantly outperformed the overall index. Below, for reference, is a list of contract currencies with a market value under 100 million USDT, even though we are currently in a bull market.

```
'HOOK': 102007225,
'SLP': 99406669,
'NMR': 97617143,
'RDNT': 97501392,
'MBL': 93681270,
'OMG': 89129884,
'NKN': 85700948,
'DENT': 84558413,
'ALPHA': 81367392,
'RAD': 80849568,
'HFT': 79696303,
'STMX': 79472000,
'ALICE': 74615631,
'OGN': 74226686,
'GTC': 72933069,
'MAV': 72174400,
'CTK': 72066028,
'UNFI': 71975379,
'OXT': 71727646,
'COTI': 71402243,
'HIGH': 70450329,
'DUSK': 69178891,
'ARKM': 68822057,
'HIFI': 68805227,
'CYBER': 68264478,
'BADGER': 67746045,
'AGLD': 66877113,
'LINA': 62674752,
'PEOPLE': 62662701,
'ARPA': 62446098,
'SPELL': 61939184,
'TRU': 60944721,
'REN': 59955266,
'BIGTIME': 59209269,
'XVG': 57470552,
'TLM': 56963184,
'BAKE': 52022509,
'COMBO': 47247951,
'DAR': 47226484,
'FLM': 45542629,
'ATA': 44190701,
'MDT': 42774267,
'BEL': 42365397,
'PERP': 42095057,
'REEF': 41151983,
'IDEX': 39463580,
'LEVER': 38609947,
'PHB': 36811258,
'LIT': 35979327,
'KEY': 31964126,
'BOND': 29549985,
'FRONT': 29130102,
'TOKEN': 28047786,
'AMB': 24484151
```
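The list above can be produced with a simple filter over the symbol-to-market-cap mapping; `caps` below is a small hypothetical subset standing in for the full dictionary computed earlier from price × total supply:

```python
# hypothetical subset of the full symbol -> market cap (USDT) mapping
caps = {'HOOK': 102007225, 'SLP': 99406669, 'AMB': 24484151}

# keep only contract currencies with a market value under 100 million USDT
low_caps = {s: c for s, c in caps.items() if c < 100_000_000}
print(low_caps)
```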

In this article, we will briefly introduce some of the main mathematicians who founded this field.

**Before Bayes**

To better understand Bayesian statistics, we need to go back to the 18th century and to the mathematician De Moivre and his book "The Doctrine of Chances".

In this book, De Moivre solved many problems related to probability and gambling of his era. As you may know, his solution to one of these problems led to the origin of the normal distribution, but that's another story.

One of the simplest questions in his paper was:

"What is the probability of getting three heads when flipping a fair coin three times consecutively?"

Reading through the problems described in "The Doctrine of Chances", you might notice that most start with an assumption from which they calculate probabilities for given events. For example, in the above question there is an assumption that considers the coin as fair; therefore, obtaining a head during a toss has a probability of 0.5.
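Under the fair-coin assumption, the answer is just the product of three independent probabilities; a one-line check:

```python
# three independent tosses of a fair coin: the probabilities multiply
p_heads = 0.5
p_three_heads = p_heads ** 3
print(p_three_heads)  # 0.125
```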

This would be expressed today in mathematical terms as:

𝑃(𝑋|𝜃)

However, what if we don't know whether the coin is fair? What if we don't know `𝜃`?

Nearly fifty years later, in 1763, a paper titled "A Solution to the Problems in the Doctrine of Chances" was published in the Philosophical Transactions of the Royal Society of London.

In the first few pages of this document, there is a piece written by mathematician Richard Price that summarizes a paper his friend Thomas Bayes wrote several years before his death. In his introduction, Price explained some important discoveries made by Thomas Bayes that were not mentioned in De Moivre's "Doctrine of Chances".

In fact, he referred to one specific problem:

"Given an unknown event's number of successes and failures, find its chance between any two named degrees."

In other words, after observing an event, we determine the probability that an unknown parameter `θ` falls between two given bounds. This is one of the first problems of statistical inference in history, and it gave rise to the term "inverse probability". In mathematical terms:

𝑃(𝜃|𝑋)

This is of course what we call the posterior distribution of Bayes' theorem today.

Understanding the motivations behind the research of these two ministers, **Thomas Bayes** and **Richard Price**, is actually quite interesting. But to do this, we need to temporarily put aside some knowledge about statistics.

We are in the 18th century, when probability is becoming an increasingly interesting field for mathematicians. Mathematicians like De Moivre or Bernoulli have already shown that some events occur with a certain degree of randomness yet are still governed by fixed rules. For example, if you roll a die many times, one-sixth of the time it will land on six. It's as if there is a hidden rule determining fate's chances.

Now imagine being a mathematician and devout believer living during this period. You might be interested in understanding the relationship between this hidden rule and God.

This was indeed the question asked by Bayes and Price themselves. They hoped that their solution would directly apply to proving that "the world must be the result of wisdom and intelligence; therefore providing evidence for God's existence as ultimate cause" - that is, an uncaused first cause.

Surprisingly, around eleven years later, in 1774, without having read Thomas Bayes' paper, the French mathematician Laplace wrote a paper titled "Memoir on the Probability of the Causes of Events", which is about the inverse probability problem. On the first page, you can read the main principle:

"If an event can be caused by n different reasons, then the ratios between these causes' probabilities given the event are equal to the probabilities of events given these causes; and each cause's existence probability equals to the probability of causes given this event divided by total probabilities of events given each one of these causes."

This is what we know today as Bayes' theorem:

𝑃(𝜃|𝑋) = 𝑃(𝑋|𝜃)𝑃(𝜃) / Σᵢ 𝑃(𝑋|𝜃ᵢ)𝑃(𝜃ᵢ)

where `P(θ)` is a uniform distribution.

We will now bring Bayesian statistics to the present using Python and the PyMC library, and conduct a simple experiment.

Suppose a friend gives you a coin and asks whether you think it's fair. Because he is in a hurry, he tells you that you can only toss the coin 10 times. As you can see, there is an unknown parameter `p` in this problem, the probability of getting heads on a toss, and we want to estimate its most likely value.

(Note: we are not saying that the parameter `p` is a random variable; the parameter is fixed, and we want to know the range in which it most likely lies.)

To look at this problem from different angles, we will solve it under two different prior beliefs:

- You have no prior information about the fairness of the coin, so you assign an equal probability to every value of `p`. This is called a non-informative prior, because you have not added any information to your beliefs.
- You believe the coin is most likely close to fair, so you concentrate the prior probability of `p` between 0.3 and 0.7 (an informative prior).

For these two scenarios, our prior beliefs will be as follows:

After flipping the coin 10 times, you got heads twice. Given this evidence, where are we likely to find our parameter `p`?
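The article runs this experiment with PyMC's sampler, but the sampling code itself isn't shown; here is a dependency-light sketch that reproduces the two posteriors with a simple grid approximation. The Beta(10, 10) prior is my assumed stand-in for "high confidence that `p` lies between 0.3 and 0.7":

```python
import numpy as np
from scipy import stats

heads, n = 2, 10
p_grid = np.linspace(0.001, 0.999, 999)
likelihood = stats.binom.pmf(heads, n, p_grid)

def posterior(prior_pdf):
    # posterior ∝ likelihood × prior, normalized over the grid
    unnorm = likelihood * prior_pdf
    return unnorm / unnorm.sum()

# Scenario 1: non-informative (uniform) prior
post_flat = posterior(np.ones_like(p_grid))
# Scenario 2: informative prior concentrated roughly between 0.3 and 0.7
post_info = posterior(stats.beta.pdf(p_grid, 10, 10))

mean_flat = (p_grid * post_flat).sum()  # analytic answer: Beta(3, 9) mean = 0.25
mean_info = (p_grid * post_info).sum()  # analytic answer: Beta(12, 18) mean = 0.40
print(mean_flat, mean_info)
```

The informative prior pulls the posterior mean from 0.25 up toward 0.5, matching the qualitative behavior described below.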

As you can see, in the first case the posterior distribution of parameter `p` is concentrated at the maximum likelihood estimate (MLE) `p=0.2`, similar to what the frequentist approach gives. The true unknown parameter lies within the 95% credible interval, between 0.04 and 0.48.

On the other hand, when there is high prior confidence that parameter `p` should be between 0.3 and 0.7, we can see that the posterior distribution is centered around 0.4, much higher than what the MLE gives us. In this case, the true unknown parameter lies within a 95% credible interval between 0.23 and 0.57.

Therefore, in the first case scenario you would tell your friend with certainty that this coin isn't fair but in another situation you'd say it's uncertain whether or not it's fair.

As you can see, even when faced with identical evidence (two heads out of ten tosses), results can vary greatly under different prior beliefs. Herein lies one advantage of Bayesian statistics over traditional methods: like the scientific method, it allows us to update our beliefs by combining them with new observations and evidence.

In today's article, we saw the origins of Bayesian statistics and its main contributors. Many other important contributors to this field followed (Jeffreys, Cox, Shannon, and so on). Reprinted from quantdare.com.

Many users have their own customer live accounts that need managing and maintaining. When there are many such accounts (from dozens to hundreds), a more convenient management method is needed. FMZ provides a powerful extended API, and using it for group control management is an ideal choice.

Through FMZ's extended API, you can centrally monitor the trading activities and asset conditions of all live accounts. Whether it is checking the positions of each account, historical trading records, or real-time monitoring of the profit and loss status of accounts, all of them can be achieved.

```javascript
// Global variables
var isLogMsg = true    // Control whether the log is printed
var isDebug = false    // Debug mode
var baseAPI = "https://www.fmz.com"   // FMZ extended API entry (not defined in the original snippet)
var arrIndexDesc = ["all", "running", "stop"]
var descRobotStatusCode = ["In idle", "Running", "Stopping", "Exited", "Stopped", "Strategy error"]
var dicRobotStatusCode = {
    "all" : -1,
    "running" : 1,
    "stop" : 4,
}

// Extended log function
function LogControl(...args) {
    if (isLogMsg) {
        Log(...args)
    }
}

// FMZ extended API call function
function callFmzExtAPI(accessKey, secretKey, funcName, ...args) {
    var params = {
        "version" : "1.0",
        "access_key" : accessKey,
        "method" : funcName,
        "args" : JSON.stringify(args),
        "nonce" : Math.floor(new Date().getTime())
    }
    var data = `${params["version"]}|${params["method"]}|${params["args"]}|${params["nonce"]}|${secretKey}`
    params["sign"] = Encode("md5", "string", "hex", data)

    var arrPairs = []
    for (var k in params) {
        var pair = `${k}=${params[k]}`
        arrPairs.push(pair)
    }
    var query = arrPairs.join("&")

    var ret = null
    try {
        LogControl("url:", baseAPI + "/api/v1?" + query)
        ret = JSON.parse(HttpQuery(baseAPI + "/api/v1?" + query))
        if (isDebug) {
            LogControl("Debug:", ret)
        }
    } catch(e) {
        LogControl("e.name:", e.name, "e.stack:", e.stack, "e.message:", e.message)
    }
    Sleep(100)  // Control frequency
    return ret
}

// Obtain all live trading information of the specified strategy Id
function getAllRobotByIdAndStatus(accessKey, secretKey, strategyId, robotStatusCode, maxRetry) {
    var retryCounter = 0
    var length = 100
    var offset = 0
    var arr = []
    if (typeof(maxRetry) == "undefined") {
        maxRetry = 10
    }

    while (true) {
        if (retryCounter > maxRetry) {
            LogControl("Exceeded the maximum number of retries", maxRetry)
            return null
        }
        var ret = callFmzExtAPI(accessKey, secretKey, "GetRobotList", offset, length, robotStatusCode)
        if (!ret || ret["code"] != 0) {
            Sleep(1000)
            retryCounter++
            continue
        }
        var robots = ret["data"]["result"]["robots"]
        for (var i in robots) {
            if (robots[i].strategy_id != strategyId) {
                continue
            }
            arr.push(robots[i])
        }
        if (robots.length < length) {
            break
        }
        offset += parseInt(ret["data"]["result"]["concurrent"])
    }
    return arr
}

function main() {
    // accessKey, secretKey, strategyId, robotStatus come from the strategy parameters
    var robotStatusCode = dicRobotStatusCode[arrIndexDesc[robotStatus]]
    var robotList = getAllRobotByIdAndStatus(accessKey, secretKey, strategyId, robotStatusCode)
    if (!robotList) {
        Log("Failed to obtain live trading data")
    }

    var robotTbl = {"type": "table", "title": "live trading list", "cols": [], "rows": []}
    robotTbl.cols = ["live trading Id", "live trading name", "live trading status", "strategy name", "live trading profit"]
    _.each(robotList, function(robotInfo) {
        robotTbl.rows.push([robotInfo.id, robotInfo.name, descRobotStatusCode[robotInfo.status], robotInfo.strategy_name, robotInfo.profit])
    })
    LogStatus(_D(), "`" + JSON.stringify(robotTbl) + "`")
}
```

Strategy parameter design:

Running on live trading:

Group control management makes it very convenient to execute transactions with one-click. You can buy, sell, and close positions on multiple live trading accounts simultaneously without having to open each account individually. This not only improves execution efficiency, but also reduces the possibility of operational errors.

After obtaining the list of live trading accounts, we can send commands to these accounts to perform a series of predetermined operations, for example: clearing positions in a live account, pausing protection, or switching modes. All of these can be achieved through FMZ's extended API `CommandRobot`.

Continuing with the code, we just need to add some interaction and calls to the extended API interface `CommandRobot` in our main function:

```javascript
function main() {
    var robotStatusCode = dicRobotStatusCode[arrIndexDesc[robotStatus]]
    var robotList = getAllRobotByIdAndStatus(accessKey, secretKey, strategyId, robotStatusCode)
    if (!robotList) {
        Log("Failed to obtain live trading data")
    }

    var robotTbl = {"type": "table", "title": "live trading list", "cols": [], "rows": []}
    robotTbl.cols = ["live trading Id", "live trading name", "live trading status", "strategy name", "live trading profit"]
    _.each(robotList, function(robotInfo) {
        robotTbl.rows.push([robotInfo.id, robotInfo.name, descRobotStatusCode[robotInfo.status], robotInfo.strategy_name, robotInfo.profit])
    })
    LogStatus(_D(), "`" + JSON.stringify(robotTbl) + "`")

    while (true) {
        LogStatus(_D(), ", Waiting to receive interactive commands", "\n", "`" + JSON.stringify(robotTbl) + "`")
        var cmd = GetCommand()
        if (cmd) {
            var arrCmd = cmd.split(":")
            if (arrCmd.length == 1 && cmd == "coverAll") {
                _.each(robotList, function(robotInfo) {
                    var strCmd = "Clearance"   // You can define the required message format
                    if (robotInfo.status != 1) {
                        // Only running live trading instances can receive commands
                        return
                    }
                    var ret = callFmzExtAPI(accessKey, secretKey, "CommandRobot", parseInt(robotInfo.id), strCmd)
                    LogControl("Send command to the live trading instance with id:", robotInfo.id, ":", strCmd, ", execution result:", ret)
                })
            }
        }
        Sleep(1000)
    }
}
```

The group control strategy sent instructions to "Test 1 A" and "Test 1 B".

With FMZ's extended API, you can easily implement batch modifications of strategy parameters, and batch start or stop live trading.

In quantitative trading, by using FMZ's extended API for group control management, traders can monitor, execute and adjust multiple live accounts more efficiently. This centralized management method not only improves operational efficiency, but also helps to better implement risk control and strategy synchronization.

For traders managing a large number of live accounts, FMZ's extended API provides them with a powerful and flexible tool that makes quantitative trading more convenient and controllable.

Download the 4h K-line data of Binance's perpetual contracts for the year 2023; the download code is described in the previous article: https://www.fmz.com/bbs-topic/10286. The listing time does not necessarily fall on a 4-hour mark, which is slightly imprecise, but the price at the start of trading is often chaotic anyway, so using fixed intervals filters out the noise around the opening without delaying the analysis. In the data dataframe, NaN means no data; once the first data point appears, the coin has been listed. Here we calculate, for every 4 hours after listing, the price relative to the first price, and form a new table; coins already listed from the beginning are filtered out. As of November 16, 2023, Binance has listed a total of 86 currencies, averaging more than one every three days - quite frequent indeed.

The following is the specific processing code, where only data within 150 days of going live has been extracted.

```python
df = df_close / df_close.fillna(method='bfill').iloc[0]
price_changes = {}
for coin in df.columns[df.iloc[0].isna()]:
    listing_time = df[coin].first_valid_index()
    price_changes[coin] = df[coin][df.index > listing_time].values
changes_df = pd.DataFrame.from_dict(price_changes, orient='index').T
changes_df.index = changes_df.index / 6   # 6 four-hour bars per day
changes_df = changes_df[changes_df.index < 150]
changes_df.mean(axis=1).plot(figsize=(15,6), grid=True);
```

The results are shown in the following graph, where the horizontal axis is the number of days since listing and the vertical axis is the average index. This result is unexpected but reasonable. Surprisingly, after new contracts are listed they almost all fall, and the longer they are listed, the more they fall; at least within half a year there is no rebound. On reflection it is also reasonable: the so-called listing benefits have been priced in before listing, and a subsequent continuous decline is normal. If you open a K-line chart and look at the weekly lines, you will also find that many newly listed contract currencies follow this pattern - opening at their peak.

The previous article has mentioned that digital currencies are greatly affected by simultaneous rises and falls. Does the overall index's decline affect their performance? Here, let's change the price changes to be relative to the index changes and look at the results again. From what we see on the graph, it still looks the same - a continuous decline. In fact, it has declined even more compared to the index.

```python
total_index = df.mean(axis=1)
df = df.divide(total_index, axis=0)
```

By analyzing the relationship between the number of currencies listed each week and the index, we can clearly see Binance's listing strategy: frequent listings during a bull market, few listings during a bear market. February and October of this year were peak periods for listings, coinciding with bull markets. During times when the market was falling quite badly, Binance hardly listed any new contracts. It is evident that Binance also wants to take advantage of high trading volumes in bull markets and active new contracts to earn more transaction fees. They don't want new contracts to fall too badly either, but unfortunately, they can't always control it.

This article analyzes the 4h K-line data of Binance's perpetual contracts for the year 2023, showing that newly listed contracts tend to decline over a long period. This may reflect the market's gradual cooling off from initial enthusiasm and return to rationality. If you design a strategy to short a certain amount of funds on the first day of trading, and close out after holding for some time, there is a high probability of making money. Of course, this also carries risks; past trends do not represent the future. But one thing is certain: there is no need to chase hot spots or go long on newly listed contract currencies.
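As a toy illustration of that idea (the numbers below are invented, not a backtest of the article's data): with prices expressed relative to the listing price, the return of shorting at listing and covering later is simply one minus the final price relative:

```python
import pandas as pd

# toy price-relatives for three hypothetical new listings (1.0 = listing price)
rel = pd.DataFrame({'A': [1.0, 0.9, 0.8],
                    'B': [1.0, 1.05, 0.95],
                    'C': [1.0, 0.7, 0.6]})

# shorting at listing and covering at the last bar earns 1 - final relative price
short_returns = 1 - rel.iloc[-1]
avg_return = short_returns.mean()
print(avg_return)
```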

The digital currency market is known for its volatility and uncertainty. Bitcoin and Ethereum, as the two giants in the market, often play a leading role in price trends. Most small or emerging digital currencies, in order to maintain market competitiveness and trading activity, often keep a certain degree of price synchronization with these mainstream currencies, especially those coins made by project parties. This synchronicity reflects the psychological expectations and trading strategies of market participants, which are important considerations in designing quantitative trading strategies.

In the field of quantitative trading, the measurement of correlation is achieved through statistical methods. The most commonly used measure is the Pearson correlation coefficient, which measures the degree of linear correlation between two variables. Here are some core concepts and calculation methods:

The range of the Pearson correlation coefficient (denoted as r) is from -1 to +1, where +1 indicates a perfect positive correlation, -1 indicates a perfect negative correlation, and 0 indicates no linear relationship. The formula for calculating this coefficient is as follows:

r = Σᵢ (xᵢ - x̄)(yᵢ - ȳ) / √( Σᵢ (xᵢ - x̄)² · Σᵢ (yᵢ - ȳ)² )

where xᵢ and yᵢ are the observed values of the two random variables, and x̄ and ȳ are their respective means. Using Python's scientific computing packages, it is easy to calculate the correlation.
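As a quick sanity check, NumPy's `corrcoef` computes exactly this coefficient; the numbers below are made up for illustration:

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0])
y = np.array([2.0, 4.1, 5.9, 8.2])   # roughly 2*x, so r should be close to +1

r = np.corrcoef(x, y)[0, 1]   # Pearson correlation coefficient
print(round(r, 4))
```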

This article has collected the 4h K-line data for the entire year of 2023 from Binance, selecting 144 currencies that were listed on January 1st. The specific code to download the data is as follows:

```python
import requests
from datetime import date, datetime
import time
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt

ticker = requests.get('https://fapi.binance.com/fapi/v1/ticker/24hr')
ticker = ticker.json()
sort_symbols = [k['symbol'][:-4] for k in sorted(ticker, key=lambda x: -float(x['quoteVolume'])) if k['symbol'][-4:] == 'USDT']

def GetKlines(symbol='BTCUSDT', start='2020-8-10', end='2023-8-10', period='1h', base='fapi', v='v1'):
    Klines = []
    start_time = int(time.mktime(datetime.strptime(start, "%Y-%m-%d").timetuple()))*1000 + 8*60*60*1000
    end_time = min(int(time.mktime(datetime.strptime(end, "%Y-%m-%d").timetuple()))*1000 + 8*60*60*1000, time.time()*1000)
    intervel_map = {'m': 60*1000, 'h': 60*60*1000, 'd': 24*60*60*1000}
    while start_time < end_time:
        time.sleep(0.5)
        mid_time = start_time + 1000*int(period[:-1])*intervel_map[period[-1]]
        url = 'https://'+base+'.binance.com/'+base+'/'+v+'/klines?symbol=%s&interval=%s&startTime=%s&endTime=%s&limit=1000' % (symbol, period, start_time, mid_time)
        res = requests.get(url)
        res_list = res.json()
        if type(res_list) == list and len(res_list) > 0:
            start_time = res_list[-1][0] + int(period[:-1])*intervel_map[period[-1]]
            Klines += res_list
        if type(res_list) == list and len(res_list) == 0:
            start_time = start_time + 1000*int(period[:-1])*intervel_map[period[-1]]
        if mid_time >= end_time:
            break
    df = pd.DataFrame(Klines, columns=['time','open','high','low','close','amount','end_time','volume','count','buy_amount','buy_volume','null']).astype('float')
    df.index = pd.to_datetime(df.time, unit='ms')
    return df

start_date = '2023-01-01'
end_date = '2023-11-16'
period = '4h'
df_dict = {}
for symbol in sort_symbols:
    print(symbol)
    df_s = GetKlines(symbol=symbol+'USDT', start=start_date, end=end_date, period=period)
    if not df_s.empty:
        df_dict[symbol] = df_s

df_close = pd.DataFrame(index=pd.date_range(start=start_date, end=end_date, freq=period), columns=df_dict.keys())
for symbol in df_dict:   # the original snippet iterated over an undefined `symbols`
    df_close[symbol] = df_dict[symbol].close
df_close = df_close.dropna(how='any', axis=1)
```

After normalizing the data first, we calculate the index of average price fluctuations. It can be seen that there are two market trends in 2023. One is a significant increase at the beginning of the year, and the other is a major rise starting from October. Currently, it's basically at a high point in terms of index.

```python
df_norm = df_close / df_close.fillna(method='bfill').iloc[0]  # Normalization
total_index = df_norm.mean(axis=1)
total_index.plot(figsize=(15,6), grid=True);
```

Pandas comes with a built-in correlation calculation. The weakest correlation with BTC price is shown in the following figure. Most currencies have a positive correlation, meaning they follow the price of BTC. However, some currencies have a negative correlation, which is considered an anomaly in digital currency market trends.

```python
corr_symbols = df_norm.corrwith(df_norm.BTC).sort_values().index
```

Here, the currencies are loosely divided into two groups: the first consists of the 40 currencies most correlated with the BTC price, and the second of the 40 least correlated. Subtracting the index of the second group from that of the first represents going long the first group while shorting the second, which gives us the relationship between price fluctuations and BTC correlation. The code and results are as follows:

```python
(df_norm[corr_symbols[-40:]].mean(axis=1) - df_norm[corr_symbols[:40]].mean(axis=1)).plot(figsize=(15,6), grid=True);
```

The results show that the currencies with stronger correlation to BTC price have better increases, and shorting currencies with low correlation also played a good hedging role. The imprecision here is that future data was used when calculating the correlation. Below, we divide the data into two groups: one group calculates the correlation, and another calculates the return after hedging. The result is shown in the following figure, and the conclusion remains unchanged.

Bitcoin and Ethereum, as market leaders, often have a huge impact on overall market trends. When these cryptocurrencies rise in price, market sentiment usually becomes optimistic, and many investors tend to follow the trend, seeing it as a signal of an overall market increase and buying other currencies. Due to the collective behavior of market participants, currencies highly correlated with the mainstream ones may experience similar price increases; at such times, expectations about price trends can become self-fulfilling prophecies. On the contrary, currencies negatively correlated with Bitcoin are unusual: their fundamentals may be deteriorating, or they may have fallen out of mainstream investors' sight; there may even be a "Bitcoin sucking blood" situation, where the market abandons them to chase whatever can keep up with rising prices.

```python
corr_symbols = (df_norm.iloc[:1500].corrwith(df_norm.BTC.iloc[:1500]) - df_norm.iloc[:1500].corrwith(total_index[:1500])).sort_values().index
```

This article discusses the Pearson correlation coefficient, revealing the degree of correlation between different currencies. The article demonstrates how to obtain data to calculate the correlation between currencies and use this data to assess market trends. It reveals that synchronicity in price fluctuations in the digital currency market not only reflects market psychology and strategy, but can also be quantified and predicted through scientific methods. This is particularly important for designing quantitative trading strategies.

There are many areas where the ideas in this article can be expanded, such as calculating rolling correlations, separately calculating correlations during rises and falls, etc., which can yield a lot of useful information.
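One of those extensions, a rolling correlation, is a one-liner in pandas; the series below are synthetic 4h returns invented for the illustration, with `alt` built to partially follow `btc`:

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(42)
# synthetic 4h returns: 'alt' partially follows 'btc'
btc_ret = pd.Series(rng.normal(0, 0.01, 500))
alt_ret = 0.7 * btc_ret + 0.3 * pd.Series(rng.normal(0, 0.01, 500))

# correlation recomputed over a rolling 100-bar window
rolling_corr = alt_ret.rolling(100).corr(btc_ret)
print(rolling_corr.dropna().mean())
```

Plotting `rolling_corr` over time shows how tightly a currency has been tracking BTC in each period, rather than a single full-sample number.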

Exchange order book balance refers to the relative balance state between buy and sell orders in an exchange. The order book is a real-time record of all pending buy and sell orders on the market. This includes orders from buyers and sellers who are willing to trade at different prices.

Below are some key concepts related to exchange order book balance:

- Buyer and Seller Orders: Buyer orders in the order book represent investors willing to purchase assets at a specific price, while seller orders represent investors willing to sell assets at a specific price.
- Order Book Depth: Order book depth refers to the number of orders on both the buyer and seller sides. A greater depth indicates there are more buy and sell orders in the market, which may be more liquid.
- Transaction Price and Transaction Volume: The transaction price is the price of the most recent trade, while the transaction volume is the quantity of assets traded at that price. The transaction price and volume are determined by the competition between buyers and sellers in the order book.
- Order Book Imbalance: Order book imbalance refers to the discrepancy between the number of buy and sell orders or their total volume. It can be spotted by examining the depth of the order book: if one side has significantly more orders than the other, the book is imbalanced.
- Market Depth Chart: The market depth chart presents the depth and balance of the order book graphically. Typically, the volume of buy and sell orders at each price level is shown as a bar chart or another visualization.
- Factors Affecting the Price: The balance of the order book directly affects market prices. If there are more buy orders, it may push up the price; on the contrary, if there are more sell orders, it may cause a drop in price.
- High-frequency Trading and Algorithmic Trading: Order book balance is crucial for high-frequency trading and algorithmic trading, as they rely on real-time order book data to make decisions, aiming to seize market opportunities quickly.

Understanding the balance of order books is important for investors, traders, and market analysts, because it provides useful information about market liquidity, potential price direction, and market trends.

A key idea when analyzing limit order books is to determine whether the overall market tends to buy or sell. This concept is known as imbalance in trading volume.

The imbalance in trading volume at time t is defined as:

$$\rho_t = \frac{V_t^b - V_t^a}{V_t^b + V_t^a}$$

where $V_t^b$ is the volume resting at the best bid at time t and $V_t^a$ is the volume resting at the best ask at time t. We can interpret ρt close to 1 as strong buying pressure, and ρt close to -1 as strong selling pressure. This only considers the volumes posted at the best bid and best ask, i.e. the L1 order book.
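The L1 imbalance is a one-liner; a minimal sketch with illustrative volumes:

```python
# L1 volume imbalance rho_t: bid_vol and ask_vol are the resting volumes
# at the best bid and best ask (illustrative numbers, not real data).
def volume_imbalance(bid_vol, ask_vol):
    return (bid_vol - ask_vol) / (bid_vol + ask_vol)

print(volume_imbalance(90, 10))   # bid-heavy book: close to +1
print(volume_imbalance(10, 90))   # ask-heavy book: close to -1
```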

Imbalance in trading volume vs. price changes. The figure plots binned volume imbalance (x-axis) against the average future price move, normalized by the spread (y-axis), for one quarter of order flow from a certain market. There appears to be a linear relationship between L1 imbalance and future price changes; on average, however, the future price change stays within the bid-ask spread.

The volume imbalance ρt is divided into the following three intervals:

It was discovered that these intervals can predict future price changes:

To test the predictive power of volume imbalance, an analysis was run on the tick-by-tick order book of a certain commodity from January 2014 to December 2014. For each arriving market order (MO), the volume imbalance was recorded and bucketed by how many ticks the mid-price moved over the next 10 milliseconds. The chart shows the distribution of imbalance for each mid-price-change bucket. Positive price changes are more likely to be preceded by books with greater buying pressure; similarly, negative changes are more likely to be preceded by books with greater selling pressure.

The volume imbalance looks at the total resting volume in the limit order book. One drawback is that some of that volume may come from stale orders, which carry less relevant information. We can instead focus on the volume of recent orders. This concept is known as order flow imbalance. It can be computed either by tracking individual market and limit orders (which requires Level 3 data) or by observing changes in the limit order book.

Since Level 3 data is expensive and usually only available to institutional traders, we will focus on changes in the limit order book.

We can calculate the order flow imbalance by looking at how the volumes at the best bid and best ask have changed. The change in volume at the best bid is:

$$\Delta V_t^b = \begin{cases} V_t^b & \text{if } P_t^b > P_{t-1}^b \\ V_t^b - V_{t-1}^b & \text{if } P_t^b = P_{t-1}^b \\ -V_{t-1}^b & \text{if } P_t^b < P_{t-1}^b \end{cases}$$

This covers three scenarios. First, if the best bid is higher than the previous best bid, then all the volume there is new. Second, if the best bid is unchanged, the new volume is the difference between the current and previous volume. Third, if the best bid is lower than the previous best bid, then all the previous orders have traded or been cancelled and are no longer in the book.

The change in volume at the best ask is computed similarly, with the price comparisons reversed:

$$\Delta V_t^a = \begin{cases} V_t^a & \text{if } P_t^a < P_{t-1}^a \\ V_t^a - V_{t-1}^a & \text{if } P_t^a = P_{t-1}^a \\ -V_{t-1}^a & \text{if } P_t^a > P_{t-1}^a \end{cases}$$

The net order flow imbalance (OFI) at time t is given by:

$$\text{OFI}_t = \Delta V_t^b - \Delta V_t^a$$

This is positive when buy-side flow dominates and negative when sell-side flow dominates. It measures both the size and the direction of order flow, whereas the volume imbalance in the previous section measured only direction.

Summing these values over an interval gives the net order flow imbalance (OFI) over that period:

$$\text{OFI}_{[0,T]} = \sum_{t=1}^{T} \text{OFI}_t$$
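The three-case update and the summation above can be sketched directly from L1 snapshots. The snapshot format `(best_bid, bid_vol, best_ask, ask_vol)` and the sample numbers are illustrative assumptions, not data from the papers.

```python
def bid_delta(pb, vb, pb_prev, vb_prev):
    # Change in volume at the best bid across two consecutive snapshots.
    if pb > pb_prev:        # new, higher best bid: all its volume is new
        return vb
    elif pb == pb_prev:     # same best bid: volume difference
        return vb - vb_prev
    else:                   # best bid dropped: previous volume is gone
        return -vb_prev

def ask_delta(pa, va, pa_prev, va_prev):
    # Mirror image of bid_delta, with the price comparisons reversed.
    if pa < pa_prev:
        return va
    elif pa == pa_prev:
        return va - va_prev
    else:
        return -va_prev

def ofi(snapshots):
    # snapshots: list of (best_bid, bid_vol, best_ask, ask_vol) tuples.
    total = 0.0
    for prev, cur in zip(snapshots, snapshots[1:]):
        total += bid_delta(cur[0], cur[1], prev[0], prev[1])
        total -= ask_delta(cur[2], cur[3], prev[2], prev[3])
    return total

snaps = [(100.0, 50, 100.1, 40), (100.0, 70, 100.1, 30), (100.1, 20, 100.2, 60)]
print(ofi(snaps))
```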

Use regression models to test whether order flow imbalance contains information about future price changes:

The OFI computed above uses only the best bid and ask. The paper also computed the same values for the top 5 price levels, providing 5 inputs instead of one, and found that looking deeper into the book adds information about future price changes.

Here, I have summarized some key insights from papers studying order volume in limit order books. These papers indicate that the order book contains information that is highly predictive of future price changes; however, on average those changes do not overcome the bid-ask spread.

I have added links to the papers in the references section. Please refer to them for more detailed information.

References & Notes

- Álvaro Cartea, Ryan Francis Donnelly, and Sebastian Jaimungal: "Enhancing Trading Strategies with Order Book Signals" Applied Mathematical Finance 25(1) pp. 1–35 (2018)
- Alexander Lipton, Umberto Pesavento, and Michael G Sotiropoulos: "Trade arrival dynamics and quote imbalance in a limit order book" arXiv (2013)
- Álvaro Cartea, Sebastian Jaimungal, and J. Penalva: "Algorithmic and high-frequency trading." Cambridge University Press
- Ke Xu, Martin D. Gould, and Sam D. Howison: "Multi-Level Order-Flow Imbalance in a Limit Order Book" arXiv (2019)

Reprinted from: Leigh Ford, Adrian.

The Modern Portfolio Theory (MPT), proposed by Harry Markowitz in 1952, is a mathematical framework for portfolio selection. It aims to maximize expected return for a given level of risk by choosing combinations of risky assets. The core idea is that asset prices do not move completely in sync (assets are imperfectly correlated), so diversified asset allocation can reduce overall portfolio risk.

**Expected Return Rate**: the return investors expect from holding an asset or portfolio, usually estimated from historical return data.

$$E(R_p) = \sum_{i=1}^{n} w_i E(R_i)$$

where $E(R_p)$ is the expected return of the portfolio, $w_i$ is the weight of the i-th asset in the portfolio, and $E(R_i)$ is the expected return of the i-th asset.

**Risk (Volatility or Standard Deviation)**: measures the uncertainty of investment returns, i.e. the volatility of the investment.

$$\sigma_p = \sqrt{\sum_{i=1}^{n}\sum_{j=1}^{n} w_i w_j \sigma_{ij}}$$

where $\sigma_p$ represents the total risk of the portfolio, and $\sigma_{ij}$ is the covariance of asset i and asset j, which measures how the prices of the two assets move together.

**Covariance**: measures the relationship between the price changes of two assets.

$$\sigma_{ij} = \rho_{ij}\,\sigma_i\,\sigma_j$$

where $\rho_{ij}$ is the correlation coefficient of asset i and asset j, and $\sigma_i$ and $\sigma_j$ are the standard deviations of asset i and asset j respectively.

**Efficient Frontier**: In the risk-return coordinate system, the efficient frontier is the set of investment portfolios that can provide the maximum expected return at a given level of risk.

The diagram above is an illustration of an efficient frontier, where each point represents a different weighted investment portfolio. The x-axis denotes volatility, which equates to the level of risk, while the y-axis signifies return rate. Clearly, our focus lies on the upper edge of the graph as it achieves the highest returns at equivalent levels of risk.

In quantitative trading and portfolio management, applying these principles requires statistical analysis of historical data and using mathematical models to estimate expected returns, standard deviations and covariances for various assets. Then optimization techniques are used to find the best asset weight allocation. This process often involves complex mathematical operations and extensive computer processing - this is why quantitative analysis has become so important in modern finance. Next, we will illustrate how to optimize with specific Python examples.

Calculating the Markowitz optimal portfolio is a multi-step process, involving several key steps, such as data preparation, portfolio simulation, and indicator calculation. Please refer to: https://plotly.com/python/v3/ipython-notebooks/markowitz-portfolio-optimization/

**Obtain market data**: the `get_data` function fetches historical price data for the selected digital currencies. This is the raw material for computing returns and risks, which are used to build portfolios and calculate Sharpe ratios.

**Calculate Return Rate and Risk**: the `calculate_returns_risk` function computes the annualized return and annualized risk (standard deviation) of each digital currency, quantifying each asset's historical performance for use in the optimization.

**Calculate Markowitz Optimal Portfolio**: the `calculate_optimal_portfolio` function simulates many portfolios. In each simulation, asset weights are drawn at random and the portfolio's expected return and risk are computed from those weights.

By randomly generating combinations with different weights, it is possible to explore multiple potential investment portfolios in order to find the optimal one. This is one of the core ideas of Markowitz's portfolio theory.

The purpose of the entire process is to find the investment portfolio that yields the best expected returns at a given level of risk. By simulating multiple possible combinations, investors can better understand the performance of different configurations and choose the combination that best suits their investment goals and risk tolerance. This method helps optimize investment decisions, making investments more effective.

```python
import numpy as np
import pandas as pd
import requests
import matplotlib.pyplot as plt

# Obtain market data (daily closes from Binance)
def get_data(symbols):
    data = []
    for symbol in symbols:
        url = 'https://api.binance.com/api/v3/klines?symbol=%s&interval=%s&limit=1000' % (symbol, '1d')
        res = requests.get(url)
        data.append([float(line[4]) for line in res.json()])
    return data

# Annualized return and risk of each asset, plus the daily-return matrix
def calculate_returns_risk(data):
    daily = np.array([np.diff(d) / np.array(d[:-1]) for d in data])
    returns = np.mean(daily, axis=1) * 365
    risks = np.std(daily, axis=1) * np.sqrt(365)
    return returns, risks, daily

# Calculate Markowitz optimal portfolio by random simulation
def calculate_optimal_portfolio(returns, daily):
    n_assets = len(returns)
    num_portfolios = 3000
    cov = np.cov(daily) * 365  # annualized covariance of the daily returns
    results = np.zeros((4, num_portfolios), dtype=object)
    for i in range(num_portfolios):
        weights = np.random.random(n_assets)
        weights /= np.sum(weights)
        portfolio_return = np.sum(returns * weights)
        portfolio_risk = np.sqrt(weights.T @ cov @ weights)
        results[0, i] = portfolio_return
        results[1, i] = portfolio_risk
        results[2, i] = portfolio_return / portfolio_risk
        results[3, i] = list(weights)  # convert weights to a list
    return results

symbols = ['BTCUSDT', 'ETHUSDT', 'BNBUSDT', 'LINKUSDT', 'BCHUSDT', 'LTCUSDT']
data = get_data(symbols)
returns, risks, daily = calculate_returns_risk(data)
optimal_portfolios = calculate_optimal_portfolio(returns, daily)
max_sharpe_idx = np.argmax(optimal_portfolios[2])
optimal_return = optimal_portfolios[0, max_sharpe_idx]
optimal_risk = optimal_portfolios[1, max_sharpe_idx]
optimal_weights = optimal_portfolios[3, max_sharpe_idx]

# Output results
print("Optimal combination:")
for i in range(len(symbols)):
    print(f"{symbols[i]} Weight: {optimal_weights[i]:.4f}")
print(f"Expected return rate: {optimal_return:.4f}")
print(f"Expected risk (standard deviation): {optimal_risk:.4f}")
print(f"Sharpe ratio: {optimal_return / optimal_risk:.4f}")

# Visualize the simulated portfolios
plt.figure(figsize=(10, 5))
plt.scatter(optimal_portfolios[1], optimal_portfolios[0], c=optimal_portfolios[2], marker='o', s=3)
plt.title('portfolio')
plt.xlabel('std')
plt.ylabel('return')
plt.colorbar(label='sharp')
plt.show()
```

Final output result:

Optimal combination:

Weight of BTCUSDT: 0.0721

Weight of ETHUSDT: 0.2704

Weight of BNBUSDT: 0.3646

Weight of LINKUSDT: 0.1892

Weight of BCHUSDT: 0.0829

Weight of LTCUSDT: 0.0209

Expected return rate: 0.4195

Expected risk (standard deviation): 0.1219

Sharpe ratio: 3.4403

In programmatic trading, we often need to compute averages and variances, for example for moving averages and volatility indicators. For high-frequency, long-running calculations, keeping a long history of raw data is both unnecessary and resource-consuming. This article introduces an online-update algorithm for calculating weighted averages and variances, which is particularly important for processing real-time data streams and dynamically adjusting trading strategies, especially high-frequency ones. Corresponding Python code is provided so traders can quickly deploy and apply the algorithm in live trading.

Let $\mu_n$ denote the average of the first n data points. Suppose we have already computed the average $\mu_{n-1}$ of the first n-1 points and now receive a new data point $x_n$; we want the new average $\mu_n$ that includes it. Since $n\,\mu_n = (n-1)\,\mu_{n-1} + x_n$, dividing by n gives the update:

$$\mu_n = \mu_{n-1} + \frac{x_n - \mu_{n-1}}{n}$$

The variance update can be broken down similarly. Keeping the running sum of squared deviations $S_n$:

$$S_n = S_{n-1} + (x_n - \mu_{n-1})(x_n - \mu_n), \qquad \sigma_n^2 = \frac{S_n}{n}$$

As the two formulas above show, we can update the mean and variance on each new data point while retaining only the previous mean, variance and count, without storing historical data, which makes the computation efficient. The problem is that these are the mean and variance of all samples, whereas a real strategy usually cares about a fixed lookback period. Note that in the mean update, the increment is the deviation of the new data from the old mean multiplied by a ratio; if that ratio is held fixed instead of shrinking as 1/n, we obtain an exponentially weighted average, which we discuss next.

The exponentially weighted average is defined by the recursion:

$$\mu_t = \mu_{t-1} + \alpha\,(x_t - \mu_{t-1}) = \alpha\,x_t + (1-\alpha)\,\mu_{t-1}$$

where $\mu_t$ is the exponentially weighted average at time t, $x_t$ is the observation at time t, α is the weight factor, and $\mu_{t-1}$ is the exponentially weighted average at the previous time point.

For the variance, we need the exponentially weighted average of squared deviations, which can be maintained with the recursion:

$$\sigma_t^2 = (1-\alpha)\left(\sigma_{t-1}^2 + \alpha\,(x_t - \mu_{t-1})^2\right)$$

where $\sigma_t^2$ is the exponentially weighted variance at time t and $\sigma_{t-1}^2$ is the exponentially weighted variance at the previous time point.

Observe that the incremental updates of the exponentially weighted average and variance are intuitive: keep a fraction of the past value and add the new change. The detailed derivation can be found in this paper: stats.pdf

The SMA (the arithmetic mean) and the EMA are two common statistical measures with different characteristics and uses. The former assigns equal weight to every observation and reflects the central position of the data set. The latter is computed recursively and gives higher weight to recent observations, with weights decaying exponentially as observations age.

- **Weight distribution**: The SMA assigns the same weight to each data point, while the EMA gives higher weight to the most recent points.
- **Sensitivity to new information**: The SMA reacts slowly to new data, since every point in the window counts equally; the EMA reflects the latest changes more quickly.
- **Computational complexity**: The SMA is straightforward but requires keeping the whole window of data; the EMA, thanks to its recursive form, handles continuous data streams more efficiently.

Although SMA and EMA are conceptually different, we can make the EMA approximate to a SMA containing a specific number of observations by choosing an appropriate α value. This approximate relationship can be described by the effective sample size, which is a function of the weight factor α in the EMA.

SMA is the arithmetic average of all prices within a given time window. For a window of N points, the centroid of the SMA (the average age of the data it contains) is:

$$\text{centroid}_{SMA} = \frac{0 + 1 + \dots + (N-1)}{N} = \frac{N-1}{2}$$

EMA is a weighted average in which the most recent data points have the greatest weight, decreasing exponentially with age. Its centroid is obtained by summing the series:

$$\text{centroid}_{EMA} = \sum_{k=0}^{\infty} k\,\alpha(1-\alpha)^k = \frac{1-\alpha}{\alpha}$$

Assuming the SMA and EMA have the same centroid, we obtain:

$$\frac{1-\alpha}{\alpha} = \frac{N-1}{2}$$

Solving this equation gives the relationship between α and N:

$$\alpha = \frac{2}{N+1}$$

This means that for an N-day SMA, the corresponding α value yields an "equivalent" EMA with the same centroid, and the two produce very similar results.
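The α = 2/(N+1) equivalence is easy to check numerically with pandas; the price series below is a synthetic random walk, used only to compare the two averages:

```python
import numpy as np
import pandas as pd

# Compare an N-period SMA with the "equivalent" EMA, alpha = 2 / (N + 1).
N = 20
alpha = 2 / (N + 1)
rng = np.random.default_rng(1)
prices = pd.Series(100 + rng.normal(0, 1, 1000).cumsum())

sma = prices.rolling(N).mean()
ema = prices.ewm(alpha=alpha, adjust=False).mean()

# Mean absolute gap between the two averages: small relative to price moves.
gap = (sma - ema).abs().dropna().mean()
print(gap)
```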

Suppose we have an EMA that updates every second with weight factor $\alpha_1$. Each second, the new data point enters the EMA with weight $\alpha_1$, while the influence of the old data is multiplied by $(1-\alpha_1)$.

If we change the update frequency, for example updating once every f seconds, we want a new weight factor $\alpha_f$ such that the overall impact of the data within f seconds matches that of the per-second EMA.

Within f seconds, the old data decays f times, each time by a factor of $(1-\alpha_1)$, so the total decay factor after f seconds is $(1-\alpha_1)^f$.

To make the EMA updated every f seconds decay the same way over one of its update periods as the per-second EMA does over f seconds, we set its decay factor equal to that total:

$$1 - \alpha_f = (1 - \alpha_1)^f$$

Solving this equation, we obtain the new weight factor:

$$\alpha_f = 1 - (1 - \alpha_1)^f \approx f\,\alpha_1 \quad (\text{for small } \alpha_1)$$

This formula keeps the EMA's smoothing behavior unchanged when the update frequency changes. For example, a per-second EMA with α = 0.001 is roughly equivalent to an EMA updated every 10 seconds with α ≈ 0.01.
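The conversion is a one-line function; the sketch below checks the 10-second example numerically:

```python
# Convert a per-period EMA weight factor to a new update frequency:
# alpha_f = 1 - (1 - alpha_1)**f, approximately f * alpha_1 for small alpha_1.
def convert_alpha(alpha_1, f):
    return 1 - (1 - alpha_1) ** f

alpha_per_second = 0.001
alpha_per_10s = convert_alpha(alpha_per_second, 10)
print(alpha_per_10s)  # close to 0.01, as in the example above
```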

```python
class ExponentialWeightedStats:
    def __init__(self, alpha):
        self.alpha = alpha
        self.mu = 0.0
        self.S = 0.0
        self.initialized = False

    def update(self, x):
        if not self.initialized:
            self.mu = x
            self.S = 0.0
            self.initialized = True
        else:
            diff = x - self.mu
            # incremental updates of the exponentially weighted mean and variance
            self.mu += self.alpha * diff
            self.S = (1 - self.alpha) * (self.S + self.alpha * diff * diff)

    @property
    def mean(self):
        return self.mu

    @property
    def variance(self):
        return self.S

# Usage example
alpha = 0.05  # weight factor
stats = ExponentialWeightedStats(alpha)
data_stream = []  # real-time data stream
for data_point in data_stream:
    stats.update(data_point)
```

In high-frequency programmatic trading, the rapid processing of real-time data is crucial. To improve computational efficiency and reduce resource consumption, this article introduces an online update algorithm for continuously calculating the weighted average and variance of a data stream. Real-time incremental updates can also be used for various statistical data and indicator calculations, such as the correlation between two asset prices, linear fitting, etc., with great potential. Incremental updating treats data as a signal system, which is an evolution in thinking compared to fixed-period calculations. If your strategy still includes parts that calculate using historical data, consider transforming it according to this approach: only record estimates of system status and update the system status when new data arrives; repeat this cycle going forward.

Thanks to the FMZ Platform, I will share more content related to quantitative development and work together with all traders to maintain the prosperity of the quant community.

- Do you often struggle to distinguish between trends and fluctuations?
- Have you been stopped out by the back-and-forth disorderly market?
- Are you having difficulty understanding the current market situation?
- Do you do trend trading and hope to filter out fluctuations?

Haha, you've come to the right place. Today, I will bring you the construction and application of market noise! As we all know, financial markets are full of noise. How to quantitatively model and depict market noise is very important. The depiction of noise can better help us distinguish the current state of the market and predict future possibilities!

PART1 Noise discrimination is very important for financial market trading.

Time series in financial markets are characterized by a low signal-to-noise ratio: most of the time the market's direction is unclear, and even during trending markets, price often takes four steps forward and three steps back. Therefore, defining, identifying and classifying market noise is important and has practical significance. Kaufman's book gives a comprehensive explanation and modeling of this characteristic of noise.

PART2 Construction of Noise - ER Efficiency Coefficient

The efficiency ratio is the net price change between the start and end of a period divided by the sum of the absolute bar-to-bar price changes over that period:

$$ER = \frac{|P_t - P_{t-n}|}{\sum_{i=t-n+1}^{t} |P_i - P_{i-1}|}$$

In the illustration, it is the distance between point A and point B divided by the sum of the 7 intermediate moves.

It demonstrates the different noise levels exhibited by various price operation modes under the same price movement range. A straight line indicates no noise, minor fluctuations around the straight line represent medium noise, and large swings symbolize high noise.
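The efficiency ratio is straightforward to compute; the sketch below uses two toy price paths (a straight line and a back-and-forth chop) to show the two extremes:

```python
# Kaufman efficiency ratio over the last n moves: net change divided by
# the sum of absolute bar-to-bar changes.
def efficiency_ratio(prices, n):
    net = abs(prices[-1] - prices[-n - 1])
    total = sum(abs(prices[i] - prices[i - 1]) for i in range(len(prices) - n, len(prices)))
    return net / total if total else 0.0

trend = [1, 2, 3, 4, 5, 6, 7, 8]    # straight line: ER = 1, no noise
choppy = [1, 2, 1, 2, 1, 2, 1, 2]   # back and forth: ER close to 0, high noise
print(efficiency_ratio(trend, 7), efficiency_ratio(choppy, 7))
```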

PART3 Construction of Noise - Price Density

The definition here is: plot the highs and lows of price over a period, and stretch a box from the highest high to the lowest low of that period. Price density refers to how densely the price action fills that box.

Compared to the ER efficiency coefficient, the measurement method of price density takes more into account the highest and lowest prices of each K-line.
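As a sketch of price density, I assume Kaufman's form here: the sum of each bar's high-low range divided by the overall box, n × (max high − min low). A clean trend fills little of the box (low density, low noise), while overlapping choppy bars fill it completely:

```python
# Price density (assumed Kaufman form): sum of bar ranges over the box area.
def price_density(highs, lows):
    n = len(highs)
    box = max(highs) - min(lows)
    return sum(h - l for h, l in zip(highs, lows)) / (n * box) if box else 0.0

# Trending bars: each bar covers its own slice of the box -> low density.
trend_density = price_density([2, 3, 4, 5], [1, 2, 3, 4])
# Choppy bars: every bar spans the whole box -> density of 1.
chop_density = price_density([5, 5, 5, 5], [1, 1, 1, 1])
print(trend_density, chop_density)
```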

PART4 Construction of Noise - Fractal Dimension

The fractal dimension cannot be measured accurately, but it can be estimated using the following steps within the past n terms:

PART5 Construction of Noise - Other Methods

CMI = (close[0] - open[n-1]) / (Max high(n) - Min low(n));

When noise is lower, the net change between the start and end of the period approaches the range between the highest and lowest prices, and CMI approaches 1.
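The CMI formula above can be sketched in Python; note that the formula indexes bars newest-first (`close[0]` is the latest bar, `open[n-1]` the oldest), while the lists here are oldest-first, so the indexing is reversed:

```python
# Choppy Market Index over the last n bars, following the formula above.
def cmi(opens, closes, highs, lows, n):
    rng = max(highs[-n:]) - min(lows[-n:])
    return abs(closes[-1] - opens[-n]) / rng if rng else 0.0

# Trending bars: net change fills the whole range -> CMI near 1.
trending = cmi([1, 2, 3, 4], [2, 3, 4, 5], [2, 3, 4, 5], [1, 2, 3, 4], 4)
# Choppy bars: start and end nearly equal -> CMI near 0.
choppy = cmi([1, 2, 1, 2], [2, 1, 2, 1], [2, 2, 2, 2], [1, 1, 1, 1], 4)
print(trending, choppy)
```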

The results obtained from the construction methods of various noise measurements are highly similar. The core is to compare the net changes and change processes or extreme values of a period of movement, and choose the construction method that you prefer or think is more reasonable.

PART6 Dividing market styles from the perspectives of noise and volatility.

Volatility and noise characterize the market along different dimensions. In the two price paths discussed above, the sum of the absolute price changes is the same, so their volatility is the same; but one has a much larger net change, and therefore lower noise.

Therefore, noise and volatility are two different perspectives that can be used to classify market styles. If we take persistence and volatility of trends as the x-axis and y-axis respectively to construct a Cartesian coordinate system, we can divide the fluctuation status of market prices into four categories:

- Good persistence, high volatility: smooth trend.
- Good persistence, low volatility: bumpy trend.
- Poor persistence, low volatility: narrow-range consolidation.
- Poor persistence, high volatility: wide-range oscillation.

It should be pointed out that there are no absolute standards for what is called wide range and narrow range, it has to be relative to the level and system of one's own trading, just like the setting of the trading period, which is extremely personalized. Moreover, we can only determine the current state of the market by examining a period in the past. However, we cannot predict what state the market will enter next.

Of course, the four types of fluctuations are not completely random during conversion. In the most ideal state, a smooth trend is often followed by wide-range oscillations, slowly unloading momentum; then it enters narrow-range consolidation, the market is very inactive, and bulls and bears are stuck in a stalemate; when the market is compressed to a critical point, it explodes again and the trend begins; this is an oversimplified ideal model - reality is much more complex. For example, after narrow-range consolidation there may not necessarily be a trend - it could also be wide-ranging oscillation. After a smooth trend there might not necessarily be wide-ranging oscillation - it could continue to reach new highs or lows. Moreover, it's difficult to develop four strategies that excel at handling four different market conditions and can adapt as needed. So for now, I still think we can only develop strategies that make money in certain markets while minimizing losses in unfavorable ones.

PART7 Impact of Noise on Related Transactions

The profit factor of the 40-day moving average strategy (going long above the 40-day line and short below, total profit/total loss) is regressed with the 40-day noise (ER efficiency coefficient). It can be seen that the higher the noise, the lower the profit factor of trend strategies. And we can conclude: low noise is beneficial for trend trading, high noise is beneficial for mean reversion trading.

The concept of market noise is very important in determining trading styles. Before developing corresponding trading strategies, we need to outline the contours of the market.

PART8 Market Maturity and Noise

Over the past 20 years, the noise attribute of the North American stock index market has experienced a steady rise.

Financial markets in various regions mature gradually, their noise levels rising as they do, and this maturation has been happening quickly.

A study was conducted on the stock index markets of various countries. The market on the far right is the most mature and also has higher noise, while the one on the far left is immature with lower noise. It can be observed that Japan has the most mature market, followed by economies like Hong Kong, China, Singapore, and South Korea. On the far left are relatively immature markets, such as Vietnam and Sri Lanka.

The noise in the Bitcoin market for each quarter is approximately 0.2-0.3, and it's in a cyclical state.

Thanks to the FMZ platform for providing such a great place for traders to communicate, sparing us from working behind closed doors and reinventing the wheel. The road of trading is full of ups and downs, but with the warmth of fellow traders and the experience generously shared by seniors on FMZ, we can keep growing. Wishing FMZ all the best, and may all traders enjoy long-lasting profits.
