Algorithm of emotional states
This library processes the signal from the BrainBit headband. It estimates a person's emotional state (the degree of relaxation or concentration) and calculates the brain rhythms: alpha, beta, gamma, theta and delta. It also reports how suitable the signal is for calculation by detecting artifacts; in most cases, artifacts indicate a poor fit of the device to the skin.
The algorithm processes the data with a sliding window of a given length at a given frequency. If artifacts are detected on one of the bipolar channels, the second bipolar channel is checked, and if it is clean the algorithm switches to it. If both channels are artifacted, the spectral values and mental-level values are filled with the previous valid values, and the counter of consecutive artifact windows is incremented.
When the maximum number of consecutive artifact windows is reached, `MathLibIsArtifactedSequence()` returns true, which lets you tell the user to check the position of the device. This flag is typically raised about 4 seconds after continuous artifacts begin. If you do not need to report momentary artifacts, use this function as the primary artifact notification. Otherwise, use `MathLibIsBothSidesArtifacted()`, which returns true when both bipolar channels are artifacted in the current window.
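The consecutive-artifact-window logic described above can be sketched in pure Python. This is only an illustrative model of the behaviour, not the library implementation; the window frequency of 25 Hz and the 4-second threshold are assumptions taken from the default settings shown later in this document:

```python
# Illustrative model of the consecutive-artifact-window counter.
# The real logic lives inside the library; this only mirrors the
# behaviour described above.

class ArtifactSequenceTracker:
    def __init__(self, process_win_freq=25, global_artwin_sec=4):
        # maximum number of consecutive artifacted windows before the
        # "check device position" flag is raised (4 s at 25 windows/s)
        self.max_windows = process_win_freq * global_artwin_sec
        self.counter = 0

    def push_window(self, left_artifacted, right_artifacted):
        if left_artifacted and right_artifacted:
            # both bipolar channels artifacted: previous valid values are
            # kept, and the consecutive-artifact counter grows
            self.counter += 1
        else:
            self.counter = 0

    def is_artifacted_sequence(self):
        return self.counter >= self.max_windows

tracker = ArtifactSequenceTracker()
for _ in range(100):          # 4 s of continuously artifacted windows
    tracker.push_window(True, True)
print(tracker.is_artifacted_sequence())  # True
```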
Rhythm indices can be calculated in two variants: absolute and relative. Absolute indices are output either as raw spectrum power in microvolts within each rhythm's frequency range, or as a percentage of the total power of the whole range at a given moment. Relative indices express the deviation from an initial state determined during the calibration phase of the algorithm; they can also be represented as power or as a percentage. The advantage of relative values is that they change more noticeably as a person's state changes. However, if the person is already in a pronounced state during calibration (for example, deep relaxation), it will be very difficult to reach even higher values afterwards. Absolute values, by contrast, change less noticeably, and the changes can be very small.
During the calibration phase, you should sit still with your eyes open and try not to think about anything.
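The two representations can be illustrated with simple arithmetic. The exact formulas are internal to the library, so the band powers and the baseline value below are made-up numbers and the relative-index formula is an assumption for illustration only:

```python
# Illustrative arithmetic only: the exact formulas are internal to the
# library. Assume band powers (in arbitrary power units) for one window.
band_power = {"alpha": 12.0, "beta": 6.0, "delta": 18.0, "theta": 9.0, "gamma": 3.0}
total = sum(band_power.values())  # 48.0

# Absolute index as a percentage of the total power of the whole range
alpha_percent = band_power["alpha"] / total * 100       # 25.0

# Relative index: deviation from a baseline captured during calibration
alpha_baseline_percent = 20.0                           # assumed calibration value
alpha_relative = alpha_percent / alpha_baseline_percent # 1.25, i.e. +25% vs baseline
```

If the baseline is captured while the person is already in a pronounced state (a high `alpha_baseline_percent`), the relative value has little room to grow, which is the drawback described above.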
To get the emotional state of a person, follow these steps:
Download all folders from [GitHub](https://github.com/BrainbitLLC/EmotionsStateArtifacts-cpp) and add the .dll to your project in your preferred way.
#### First way
1. Download `libem_st_artifacts.deb` from [GitHub](https://github.com/BrainbitLLC/linux_em_st_artifacts/tree/main/package).
2. Install package by `apt`:
```
sudo apt install ./libem_st_artifacts.deb
```
#### Second way
Download the library from [GitHub](https://github.com/BrainbitLLC/linux_em_st_artifacts/tree/main/raw) and add the .so to your project in your preferred way.
The library was built on Astra Linux CE 2.12.46, kernel 5.15.0-70-generic, x86_64 GNU/Linux:
```
user@astra:~$ ldd --version
ldd (Debian GLIBC 2.28-10+deb10u1) 2.28
Copyright (C) 2018 Free Software Foundation, Inc.
This is free software; see the source for copying conditions. There is NO
warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
Written by Roland McGrath and Ulrich Drepper.
```
The library depends on the following libraries:
```
linux-vdso.so.1
libblkid.so.1
libc++.so.1
libc++abi.so.1
libc.so.6
libdl.so.2
libffi.so.6
libgcc_s.so.1
libgio-2.0.so.0
libglib-2.0.so.0
libgmodule-2.0.so.0
libgobject-2.0.so.0
libm.so.6
libmount.so.1
libpcre.so.3
libpthread.so.0
libresolv.so.2
librt.so.1
libselinux.so.1
libstdc++.so.6
libudev.so.1
libuuid.so.1
libz.so.1
ld-linux-x86-64.so.2
```
If you are using an OS other than ours, these dependencies are required by the library. You can download them from [here](https://github.com/BrainbitLLC/linux_neurosdk2/tree/main/dependencies).
The Android version is designed for API level >= 21.
The library for Android is distributed via JitPack as an aar library. Here is an example of adding it to an Android Studio project using Gradle:
Add to `build.gradle` of project:
###### Groovy:
```groovy
dependencyResolutionManagement {
    repositoriesMode.set(RepositoriesMode.FAIL_ON_PROJECT_REPOS)
    repositories {
        ...
        maven { url 'https://jitpack.io' }
    }
}
```
###### Kotlin:
```kotlin
dependencyResolutionManagement {
    repositoriesMode.set(RepositoriesMode.FAIL_ON_PROJECT_REPOS)
    repositories {
        ...
        maven { setUrl("https://jitpack.io") }
    }
}
```
and to `build.gradle` of app:
###### Groovy:
```groovy
dependencies {
    implementation 'com.github.BrainbitLLC:Emotional-state-artifacts:1.0.1'
}
```
###### Kotlin:
```kotlin
dependencies {
    implementation("com.github.BrainbitLLC:Emotional-state-artifacts:1.0.1")
}
```
To prevent build errors, add these settings to `build.gradle`:
###### Groovy:
```groovy
android {
    packagingOptions {
        pickFirst 'lib/x86_64/libc++_shared.so'
        pickFirst 'lib/x86/libc++_shared.so'
        pickFirst 'lib/arm64-v8a/libc++_shared.so'
        pickFirst 'lib/armeabi-v7a/libc++_shared.so'
    }
    ...
}
```
Latest version: `1.0.1`
The iOS version is designed for iOS >= 12.0.
Using CocoaPods, add to your Podfile:
```
pod 'em-st-artifacts', '1.0.2'
```
And run `pod install` command.
Alternatively, you can install the framework manually:
1. download `em-st-artifacts.xcframework` from [GitHub](https://github.com/BrainbitLLC/apple_em-st-artifacts)
2. add `em-st-artifacts.xcframework` to `Frameworks, Libraries, and Embedded Content` section of your project
3. set `Embedded` to `Embed & Sign`
Latest version: `1.0.2`
> Python package available for Windows, MacOS and Astra Linux
By pip:
```
pip install pyem-st-artifacts
```
Before using the package on Linux, you must install the native emotion library:
```
sudo apt install ./libem_st_artifacts.deb
```
It can be downloaded [here](https://github.com/BrainbitLLC/linux_em_st_artifacts/tree/main/package).
The package has the following structure:
- `em_st_artifacts` - the main package with the implementation of the methods
- `sample` - the file `sample.py`, a usage example for the library
- `libs` - contains the native library files
`EmotionalMath` is the main module and contains all the library methods:
```python
from em_st_artifacts.emotional_math import EmotionalMath
```
Latest version: `1.0.4`
> Available for iOS, Android, Windows, MacOS platforms
1. Open the Package Manager.
2. Click the "Add" menu and choose "Add package from GIT url...". A text box and an Add button appear.
3. Enter https://github.com/BrainbitLLC/unity_em_st_artifacts.git in the text box and click Add.
If Unity installs the package successfully, it appears in the package list with its version tag. If Unity could not install the package, the Unity Console displays an error message.
Latest version: `1.0.0`
> Available for iOS, Android and UWP platforms
Install the latest version of the `EmStArtifacts` package from the NuGet Gallery into the common project in your preferred way.
[NugetPackage](https://www.nuget.org/packages/EmStArtifacts)
Latest version: `1.0.6`
Install the latest version of the `EmStArtifacts` package from the NuGet Gallery in your preferred way.
[NugetPackage](https://www.nuget.org/packages/EmStArtifacts)
Latest version: `1.0.6`
The estimate of emotional states (mental levels - relaxation and concentration) is available in two variants:
1. immediate assessment through alpha and beta wave intensity (and theta, in the case of independent assessment);
2. assessment relative to the baseline calibration values of alpha and beta wave intensity.
In both cases, the current wave intensity is defined as the average over the last N windows.
The algorithm starts processing data N seconds after the device is connected, once the minimum number of points for the spectrum calculation has been accumulated.
When reading spectral and mental values, an array of the appropriate structures (`SpectralDataPercents` and `MindData`) is returned; its length is determined by the number of newly recorded points, the signal frequency and the analysis frequency.
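As a rough sanity check (a sketch only; the exact bookkeeping is internal to the library), the expected number of returned structures can be estimated from the ratio of the analysis frequency to the sampling frequency:

```python
sampling_rate = 250      # Hz, raw signal frequency
process_win_freq = 25    # Hz, analysis frequency

# every sampling_rate / process_win_freq = 10 new raw points produce
# one new analysis window, hence one structure in the returned array
new_points = 50
expected_structs = new_points * process_win_freq // sampling_rate
print(expected_structs)  # 5
```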
In this version the filters are built in and fixed:
`BandStop_45_55`, `BandStop_55_65`, `BandStop_62`, `HighPass_10`, `LowPass_30`
Based on the results of calibration, the average base values of alpha and beta wave expression are determined in percent; these are then used to calculate the relative mental levels.
The library can operate in two modes - bipolar and multichannel. In bipolar mode only two channels are processed - left and right bipolar. In multichannel mode, any number of channels can be processed using the same algorithms.
Structure `MathLibSetting` with fields:
1. sampling_rate - raw signal sampling frequency, Hz, integer value
2. process_win_freq - frequency of spectrum and emotional-level analysis, Hz, integer value
3. fft_window - spectrum calculation window length in points, integer value
4. n_first_sec_skipped - number of first seconds skipped after connecting to the device, integer value
5. bipolar_mode - enables bipolar mode, boolean value
6. squared_spectrum - mode of calculating spectral frequency values: if true, values are calculated as the sum of squares of the FFT bins in the corresponding frequency interval (e.g. alpha); if false, as the sum of the FFT bins, boolean value
7. channels_number - number of channels for multichannel mode, integer value
8. channel_for_analysis - default channel for computing spectral values and emotional levels in multichannel mode, integer value
`channels_number` and `channel_for_analysis` are not used explicitly in bipolar mode; you can leave the default values.
The sampling rate parameter must match the sampling frequency of the device. For the BrainBit, this is 250 Hz.
Separate parameters:
1. MentalEstimationMode - type of evaluation of instant mental levels - disabled by default, boolean value
2. SpectNormalizationByBandsWidth - spectrum normalization by bandwidth - disabled by default, boolean value
Structure `ArtifactDetectSetting` with fields:
1. art_bord - threshold for the long amplitude artifact, µV, integer value
2. allowed_percent_artpoints - percentage of allowed artifact points in the window, integer value
3. raw_betap_limit - boundary for the spectral artifact (raw beta power): spectral artifact detection checks whether the absolute raw beta wave power exceeds this value, integer value
4. total_pow_border - boundary for the spectral artifact (when assessed by total power) and for channel signal quality estimation, integer value
5. global_artwin_sec - length of an artifact sequence in seconds: determines the maximum number of consecutive artifact windows (on both channels) before a prolonged-artifact / device-position notification is issued, integer value
6. spect_art_by_totalp - assess spectral artifacts by total power, boolean value
7. hanning_win_spectrum - smooth the spectrum calculation with a Hanning window, boolean value
8. hamming_win_spectrum - smooth the spectrum calculation with a Hamming window, boolean value
9. num_wins_for_quality_avg - number of windows for signal quality estimation, by default 100, which, for example, with process_win_freq = 25 Hz equals 4 seconds, integer value
Structure `ShortArtifactDetectSetting` with fields:
1. ampl_art_detect_win_size - length of the sliding window segments for detecting short-term amplitude artifacts, ms, integer value
2. ampl_art_zerod_area - area to the left and right of the extremum point where the signal is replaced with the previous non-artifact value, ms, integer value
3. ampl_art_extremum_border - threshold above which an extremum is considered artifactual, µV, integer value
Structure `MentalAndSpectralSetting` with fields:
1. n_sec_for_instant_estimation - the number of seconds to calculate the values of mental levels, integer value
2. n_sec_for_averaging - number of seconds for spectrum averaging, integer value
A separate setting is the number of windows after an artifact during which the previous valid value is kept, to smooth the transition after artifacts (`SkipWinsAfterArtifact`).
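The example settings used below can be translated into time units with simple arithmetic on the documented fields (this is plain arithmetic, not a library call):

```python
# Converting example settings into time units.
sampling_rate = 250             # Hz, raw signal
fft_window = 1000               # points -> 1000 / 250 = 4 s spectral window
process_win_freq = 25           # analysis windows per second
global_artwin_sec = 4           # -> 4 * 25 = 100 consecutive artifact windows
num_wins_for_quality_avg = 125  # -> 125 / 25 = 5 s of quality averaging

spectral_window_sec = fft_window / sampling_rate            # 4.0
artifact_sequence_windows = global_artwin_sec * process_win_freq  # 100
quality_avg_sec = num_wins_for_quality_avg / process_win_freq     # 5.0
```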
Create the library instance with the settings:
###### C++:
```cpp
MathLibSetting mathLibSetting;
mathLibSetting.sampling_rate = 250;
mathLibSetting.process_win_freq = 25;
mathLibSetting.n_first_sec_skipped = 4;
mathLibSetting.fft_window = 1000;
mathLibSetting.bipolar_mode = true;
mathLibSetting.channels_number = 4;
mathLibSetting.channel_for_analysis = 0;

ArtifactDetectSetting artifactDetectSetting;
artifactDetectSetting.art_bord = 110;
artifactDetectSetting.allowed_percent_artpoints = 70;
artifactDetectSetting.total_pow_border = 40 * 1e7;
artifactDetectSetting.raw_betap_limit = 800000;
artifactDetectSetting.spect_art_by_totalp = false;
artifactDetectSetting.global_artwin_sec = 4;
artifactDetectSetting.num_wins_for_quality_avg = 125;
artifactDetectSetting.hanning_win_spectrum = true;
artifactDetectSetting.hamming_win_spectrum = false;

ShortArtifactDetectSetting shortArtifactDetectSetting;
shortArtifactDetectSetting.ampl_art_detect_win_size = 200;
shortArtifactDetectSetting.ampl_art_zerod_area = 200;
shortArtifactDetectSetting.ampl_art_extremum_border = 25;

MentalAndSpectralSetting mentalAndSpectralSetting;
mentalAndSpectralSetting.n_sec_for_averaging = 2;
mentalAndSpectralSetting.n_sec_for_instant_estimation = 4;

OpStatus opSt;
MathLib* tMathPtr = createMathLib(mathLibSetting, artifactDetectSetting, shortArtifactDetectSetting, mentalAndSpectralSetting, &opSt);
```
###### C#:
```csharp
int samplingFrequency = 250;
var mls = new MathLibSetting
{
    sampling_rate = samplingFrequency,
    process_win_freq = 25,
    n_first_sec_skipped = 4,
    fft_window = samplingFrequency * 4,
    bipolar_mode = true,
    channels_number = 4,
    channel_for_analysis = 0
};

var ads = new ArtifactDetectSetting
{
    art_bord = 110,
    allowed_percent_artpoints = 70,
    raw_betap_limit = 800_000,
    total_pow_border = 400_000_000,
    global_artwin_sec = 4,
    spect_art_by_totalp = true,
    num_wins_for_quality_avg = 125,
    hanning_win_spectrum = false,
    hamming_win_spectrum = true
};

var sads = new ShortArtifactDetectSetting
{
    ampl_art_detect_win_size = 200,
    ampl_art_zerod_area = 200,
    ampl_art_extremum_border = 25
};

var mss = new MentalAndSpectralSetting
{
    n_sec_for_averaging = 2,
    n_sec_for_instant_estimation = 4
};

var math = new EegEmotionalMath(mls, ads, sads, mss);
```
###### Java:
```java
int samplingFrequency = 250;
MathLibSetting mls = new MathLibSetting(samplingFrequency, 25, samplingFrequency * 4, 4, true, 4, 0);
ArtifactDetectSetting ads = new ArtifactDetectSetting(110, 70, 800000, (int) (40 * 1e7), 4, true, true, false, 125);
ShortArtifactDetectSetting sads = new ShortArtifactDetectSetting(200, 200, 25);
MentalAndSpectralSetting mss = new MentalAndSpectralSetting(2, 4);
EmotionalMath tMathPtr = new EmotionalMath(mls, ads, sads, mss);
```
###### Python:
```python
mls = lib_settings.MathLibSetting(sampling_rate=250,
                                  process_win_freq=25,
                                  n_first_sec_skipped=4,
                                  fft_window=1000,
                                  bipolar_mode=True,
                                  squared_spectrum=True,
                                  channels_number=4,
                                  channel_for_analysis=0)
ads = lib_settings.ArtifactDetectSetting(art_bord=110,
                                         allowed_percent_artpoints=70,
                                         raw_betap_limit=800_000,
                                         global_artwin_sec=4,
                                         num_wins_for_quality_avg=125,
                                         hamming_win_spectrum=True,
                                         hanning_win_spectrum=False,
                                         total_pow_border=400_000_000,
                                         spect_art_by_totalp=True)
sads = lib_settings.ShortArtifactDetectSetting(ampl_art_detect_win_size=200,
                                               ampl_art_zerod_area=200,
                                               ampl_art_extremum_border=25)
mss = lib_settings.MentalAndSpectralSetting(n_sec_for_averaging=2,
                                            n_sec_for_instant_estimation=4)
math = EmotionalMath(mls, ads, sads, mss)
```
###### Swift:
```swift
let mathLibSetting = EMMathLibSetting(samplingRate: 250,
                                      andProcessWinFreq: 25,
                                      andFftWindow: 1000,
                                      andNFirstSecSkipped: 4,
                                      andBipolarMode: true,
                                      andSquaredSpectrum: true,
                                      andChannelsNumber: 1,
                                      andChannelForAnalysis: 0)
let artifactDetectSetting = EMArtifactDetectSetting(artBord: 110,
                                                    andAllowedPercentArtpoints: 70,
                                                    andRawBetapLimit: 800000,
                                                    andTotalPowBorder: 80000000,
                                                    andGlobalArtwinSec: 4,
                                                    andSpectArtByTotalp: true,
                                                    andHanningWinSpectrum: false,
                                                    andHammingWinSpectrum: true,
                                                    andNumWinsForQualityAvg: 125)
let shortArtifactDetectSetting = ShortArtifactDetectSetting(ampl_art_detect_win_size: 200,
                                                            ampl_art_zerod_area: 200,
                                                            ampl_art_extremum_border: 25)
let mentalAndSpectralSetting = MentalAndSpectralSetting(n_sec_for_instant_estimation: 4,
                                                        n_sec_for_averaging: 2)
let emotionalMath = EMEmotionalMath(libSettings: mathLibSetting, andArtifactDetetectSettings: artifactDetectSetting, andShortArtifactDetectSettigns: shortArtifactDetectSetting, andMentalAndSpectralSettings: mentalAndSpectralSetting)
```
###### Objective-C:
```objc
EMMathLibSetting* mathLibSetting = [[EMMathLibSetting alloc] initWithSamplingRate:250
                                                                andProcessWinFreq:25
                                                                     andFftWindow:1000
                                                              andNFirstSecSkipped:4
                                                                   andBipolarMode:true
                                                               andSquaredSpectrum:true
                                                                andChannelsNumber:1
                                                            andChannelForAnalysis:0];

EMArtifactDetectSetting* artifactDetectSetting = [[EMArtifactDetectSetting alloc] initWithArtBord:110
                                                                       andAllowedPercentArtpoints:70
                                                                                 andRawBetapLimit:800000
                                                                                andTotalPowBorder:80000000
                                                                                andGlobalArtwinSec:4
                                                                              andSpectArtByTotalp:true
                                                                            andHanningWinSpectrum:false
                                                                            andHammingWinSpectrum:true
                                                                          andNumWinsForQualityAvg:125];

ShortArtifactDetectSetting shortArtifactDetectSetting;
shortArtifactDetectSetting.ampl_art_detect_win_size = 200;
shortArtifactDetectSetting.ampl_art_zerod_area = 200;
shortArtifactDetectSetting.ampl_art_extremum_border = 25;

MentalAndSpectralSetting mentalAndSpectralSetting;
mentalAndSpectralSetting.n_sec_for_averaging = 2;
mentalAndSpectralSetting.n_sec_for_instant_estimation = 4;

EMEmotionalMath* math = [[EMEmotionalMath alloc] initWithLibSettings:mathLibSetting
                                         andArtifactDetetectSettings:artifactDetectSetting
                                      andShortArtifactDetectSettigns:shortArtifactDetectSetting
                                        andMentalAndSpectralSettings:mentalAndSpectralSetting];
```
#### Optional parameters
###### C++:
```cpp
OpStatus opSt;

// setting calibration length
int calibration_length = 6;
MathLibSetCallibrationLength(tMathPtr, calibration_length, &opSt);

// type of evaluation of instant mental levels
bool independent_mental_levels = false;
MathLibSetMentalEstimationMode(tMathPtr, independent_mental_levels, &opSt);

// number of windows after the artifact with the previous actual value
int nwins_skip_after_artifact = 10;
MathLibSetSkipWinsAfterArtifact(tMathPtr, nwins_skip_after_artifact, &opSt);

// calculation of mental levels relative to calibration values
MathLibSetZeroSpectWaves(tMathPtr, true, 0, 1, 1, 1, 0, &opSt);

// spectrum normalization by bandwidth
MathLibSetSpectNormalizationByBandsWidth(tMathPtr, true);
```
###### C#:
```csharp
// setting calibration length
int calibrationLength = 6;
math.SetCallibrationLength(calibrationLength);

// type of evaluation of instant mental levels
bool independentMentalLevels = false;
math.SetMentalEstimationMode(independentMentalLevels);

// number of windows after the artifact with the previous actual value
int nwinsSkipAfterArtifact = 10;
math.SetSkipWinsAfterArtifact(nwinsSkipAfterArtifact);

// calculation of mental levels relative to calibration values
math.SetZeroSpectWaves(true, 0, 1, 1, 1, 0);

// spectrum normalization by bandwidth
math.SetSpectNormalizationByBandsWidth(true);
```
###### Java:
```java
// setting calibration length
int calibrationLength = 6;
math.setCallibrationLength(calibrationLength);

// type of evaluation of instant mental levels
boolean independentMentalLevels = false;
math.setMentalEstimationMode(independentMentalLevels);

// number of windows after the artifact with the previous actual value
int nwinsSkipAfterArtifact = 10;
math.setSkipWinsAfterArtifact(nwinsSkipAfterArtifact);

// calculation of mental levels relative to calibration values
math.setZeroSpectWaves(true, 0, 1, 1, 1, 0);

// spectrum normalization by bandwidth
math.setSpectNormalizationByBandsWidth(true);
```
###### Python:
```python
# setting calibration length
calibration_length = 6
math.set_calibration_length(calibration_length)

# type of evaluation of instant mental levels
independent_mental_levels = False
math.set_mental_estimation_mode(independent_mental_levels)

# number of windows after the artifact with the previous actual value
nwins_skip_after_artifact = 10
math.set_skip_wins_after_artifact(nwins_skip_after_artifact)

# calculation of mental levels relative to calibration values
math.set_zero_spect_waves(True, 0, 1, 1, 1, 0)

# spectrum normalization by bandwidth
math.set_spect_normalization_by_bands_width(True)
```
###### Swift:
```swift
// setting calibration length
let calibrationLength: Int32 = 6
math.setCallibrationLength(calibrationLength)

// type of evaluation of instant mental levels
let independentMentalLevels = false
math.setMentalEstimationMode(independentMentalLevels)

// number of windows after the artifact with the previous actual value
let nwinsSkipAfterArtifact: Int32 = 10
math.setSkipWinsAfterArtifact(nwinsSkipAfterArtifact)

// calculation of mental levels relative to calibration values
math.setZeroSpectWavesWithActive(true, andDelta: 0, andTheta: 1, andAlpha: 1, andBeta: 1, andGamma: 0)

// spectrum normalization by bandwidth
math.setSpectNormalizationByCoeffs(true)
```
###### Objective-C:
```objc
// setting calibration length
int calibration_length = 6;
[math setCallibrationLength:calibration_length];

// type of evaluation of instant mental levels
bool independent_mental_levels = false;
[math setMentalEstimationMode:independent_mental_levels];

// number of windows after the artifact with the previous actual value
int nwins_skip_after_artifact = 10;
[math setSkipWinsAfterArtifact:nwins_skip_after_artifact];

// calculation of mental levels relative to calibration values
[math setZeroSpectWavesWithActive:true andDelta:0 andTheta:1 andAlpha:1 andBeta:1 andGamma:0];

// spectrum normalization by bandwidth
[math setSpectNormalizationByBandsWidth:true];
```
Structure `RawChannels` contains the left and right bipolar values for bipolar mode, with fields:
1. LeftBipolar - left bipolar value, double value
2. RightBipolar - right bipolar value, double value
Structure `RawChannelsArray` contains an array of channel values for multichannel mode, with field:
1. channels - double array
Structure `MindData` describes the mental levels, with fields:
1. Rel_Attention - relative attention value
2. Rel_Relaxation - relative relaxation value
3. Inst_Attention - instant attention value
4. Inst_Relaxation - instant relaxation value
Structure `SpectralDataPercents` contains the relative spectral values, with double fields:
1. Delta
2. Theta
3. Alpha
4. Beta
5. Gamma
Side of current artifact. Enum with values:
1. LEFT
2. RIGHT
3. NONE
Start the calibration:
###### C++:
```cpp
OpStatus opSt;
MathLibStartCalibration(tMathPtr, &opSt);
```
###### C#:
```csharp
math.StartCalibration();
```
###### Java:
```java
math.startCalibration();
```
###### Python:
```python
math.start_calibration()
```
###### Swift:
```swift
emotionalMath.startCalibration()
```
###### Objective-C:
```objc
[math startCalibration];
```
Push the bipolar signal data into the library and process it:
###### C++:
```cpp
void onBrainBitSignalDataReceived(Sensor *pSensor, BrainBitSignalData *pData, int32_t size, void *userData)
{
    RawChannels* bipolars = new RawChannels[size];
    for(int i = 0; i < size; i++){
        bipolars[i].left_bipolar = pData[i].T3 - pData[i].O1;
        bipolars[i].right_bipolar = pData[i].T4 - pData[i].O2;
    }
    MathLibPushData(tMathPtr, bipolars, size);
    MathLibProcessDataArr(tMathPtr);
    delete[] bipolars; // free the temporary buffer (assuming the library copies the data)
}

OpStatus outStatus;
BrainBitSignalDataListenerHandle lHandle = nullptr;
addSignalDataCallbackBrainBit(_sensor, onBrainBitSignalDataReceived, &lHandle, nullptr, &outStatus);
```
###### Java:
```java
sensor.brainBitSignalDataReceived = data -> {
    RawChannels[] bipolars = new RawChannels[data.length];
    for (int i = 0; i < data.length; i++) {
        bipolars[i] = new RawChannels(data[i].T3 - data[i].O1, data[i].T4 - data[i].O2);
    }
    math.pushData(bipolars);
    math.processDataArr();
};
```
###### Kotlin:
```kotlin
sensor.brainBitSignalDataReceived = Sensor.BrainBitSignalDataReceived { data ->
    val bipolars = Array(data.size) {
        RawChannels(data[it].t3 - data[it].o1, data[it].t4 - data[it].o2)
    }
    math.pushData(bipolars)
    math.processDataArr()
}
```
###### C#:
```csharp
private void onBrainBitSignalDataRecived(ISensor sensor, BrainBitSignalData[] data)
{
    RawChannels[] bipolars = new RawChannels[data.Length];
    for (var i = 0; i < data.Length; i++)
    {
        bipolars[i].LeftBipolar = data[i].T3 - data[i].O1;
        bipolars[i].RightBipolar = data[i].T4 - data[i].O2;
    }
    math.pushData(bipolars);
    math.processDataArr();
}
// ...
sensor.EventBrainBitSignalDataRecived += onBrainBitSignalDataRecived;
```
###### Python:
```python
def on_brain_bit_signal_data_received(sensor, data):
    raw_channels = []
    for sample in data:
        left_bipolar = sample.T3 - sample.O1
        right_bipolar = sample.T4 - sample.O2
        raw_channels.append(support_classes.RawChannels(left_bipolar, right_bipolar))
    math.push_data(raw_channels)
    math.process_data_arr()

# ...
sensor.signalDataReceived = on_brain_bit_signal_data_received
```
###### Swift:
```swift
sensor?.setSignalDataCallbackBrainBit({ data in
    var bipolars: [EMRawChannels] = []
    for sample in data {
        let bipolarElement = EMRawChannels(leftBipolar: sample.t3.doubleValue - sample.o1.doubleValue, andRightBipolar: sample.t4.doubleValue - sample.o2.doubleValue)
        bipolars.append(bipolarElement!)
    }
    math.pushData(bipolars)
    math.processDataArr()
})
```
###### Objective-C:
```objc
[sensor setSignalDataCallbackBrainBit:^(NSArray<NTBrainBitSignalData *> * _Nonnull data) {
    NSMutableArray<EMRawChannels*>* bipolars = [NSMutableArray new];
    for(int i = 0; i < data.count; i++){
        double leftBipolar = [data[i].T3 doubleValue] - [data[i].O1 doubleValue];
        double rightBipolar = [data[i].T4 doubleValue] - [data[i].O2 doubleValue];
        EMRawChannels* bipolarElement = [[EMRawChannels alloc] initWithLeftBipolar:leftBipolar andRightBipolar:rightBipolar];
        [bipolars addObject:bipolarElement];
    }
    [math pushData:bipolars];
    [math processDataArr];
}];
```
In multichannel mode, push an array of channel samples:
###### C++:
```cpp
RawChannelsArray* samples = new RawChannelsArray[SAMPLES_COUNT];
// ...
MathLibPushDataArr(tMathPtr, samples, SAMPLES_COUNT);
MathLibProcessDataArr(tMathPtr);
```
###### C#:
```csharp
var samples = new RawChannelsArray[SAMPLES_COUNT];
math.PushDataArr(samples);
```
###### Java:
```java
var samples = new RawChannelsArray[SAMPLES_COUNT];
math.pushDataArr(samples);
math.processDataArr();
```
###### Python:
```python
samples = []
math.push_data_arr(samples)
math.process_data_arr()
```
###### Swift:
```swift
var samples: [[NSNumber]] = [[]]
math.pushDataArr(samples)
math.processDataArr()
```
###### Objective-C:
```objc
NSArray<NSArray<NSNumber*>*>* samples = [NSArray new];
[math pushDataArr:samples];
[math processDataArr];
```
Check whether the calibration has finished, and its progress:
###### C++:
```cpp
OpStatus os;
bool calibrationFinished = false;
MathLibCalibrationFinished(tMathPtr, &calibrationFinished, &os);
// and calibration progress
int calibrationProgress = 0;
MathLibGetCallibrationPercents(tMathPtr, &calibrationProgress, &os);
```
###### C#:
```csharp
bool calibrationFinished = math.CalibrationFinished();
// and calibration progress
int calibrationProgress = math.GetCallibrationPercents();
```
###### Java:
```java
boolean calibrationFinished = math.calibrationFinished();
// and calibration progress
int calibrationProgress = math.getCallibrationPercents();
```
###### Python:
```python
calibration_finished = math.calibration_finished()
# and calibration progress
calibration_progress = math.get_calibration_percents()
```
###### Swift:
```swift
let calibrationFinished = math.calibrationFinished()
// and calibration progress
let calibrationProgress = math.getCallibrationPercents()
```
###### Objective-C:
```objc
bool calibrationFinished = [math calibrationFinished];
// and calibration progress
UInt32 calibrationProgress = [math getCallibrationPercents];
```
Read the computed values:
###### C++:
```cpp
OpStatus opSt;
int size = 0;

// Reading mental levels in percent
MathLibReadMentalDataArrSize(tMathPtr, &size, &opSt);
MindData* mental_data = new MindData[size];
MathLibReadMentalDataArr(tMathPtr, mental_data, &size, &opSt);

// Reading relative spectral values in percent
MathLibReadSpectralDataPercentsArrSize(tMathPtr, &size, &opSt);
SpectralDataPercents* sp_data = new SpectralDataPercents[size];
MathLibReadSpectralDataPercentsArr(tMathPtr, sp_data, &size, &opSt);
```
###### C#:
```csharp
// Reading mental levels in percent
MindData[] mentalData = math.ReadMentalDataArr();
// Reading relative spectral values in percent
SpectralDataPercents[] spData = math.ReadSpectralDataPercentsArr();
```
###### Java:
```java
// Reading mental levels in percent
MindData[] mentalData = math.readMentalDataArr();
// Reading relative spectral values in percent
SpectralDataPercents[] spData = math.readSpectralDataPercentsArr();
```
###### Python:
```python
# Reading mental levels in percent
mental_data = math.read_mental_data_arr()
# Reading relative spectral values in percent
sp_data = math.read_spectral_data_percents_arr()
```
###### Swift:
```swift
// Reading mental levels in percent
let mindData = math.readMentalDataArr()
// Reading relative spectral values in percent
let spData = math.mathLibReadSpectralDataPercentsArr()
```
###### Objective-C:
```objc
// Reading mental levels in percent
NSArray<EMMindData*>* mindData = [math readMentalDataArr];
// Reading relative spectral values in percent
NSArray<EMSpectralDataPercents*>* spData = [math MathLibReadSpectralDataPercentsArr];
```
Check for momentary artifacts on both channels:
###### C++:
```cpp
if(MathLibIsBothSidesArtifacted(tMathPtr)){
    // signal corruption
}
```
###### C#:
```csharp
if(math.IsBothSidesArtifacted()){
    // signal corruption
}
```
###### Java:
```java
if(math.isBothSidesArtifacted()){
    // signal corruption
}
```
###### Python:
```python
if math.is_both_sides_artifacted():
    # signal corruption
    pass
```
###### Swift:
```swift
if(math.isBothSidesArtifacted()){
    // signal corruption
}
```
###### Objective-C:
```objc
if([math isBothSidesArtifacted]){
    // signal corruption
}
```
Check for a prolonged artifact sequence:
###### C++:
```cpp
if(MathLibIsArtifactedSequence(tMathPtr)){
    // signal corruption
}
```
###### C#:
```csharp
if(math.IsArtifactedSequence()){
    // signal corruption
}
```
###### Java:
```java
if(math.isArtifactedSequence()){
    // signal corruption
}
```
###### Python:
```python
if math.is_artifacted_sequence():
    # signal corruption
    pass
```
###### Swift:
```swift
if(math.isArtifactedSequence()){
    // signal corruption
}
```
###### Objective-C:
```objc
if([math isArtifactedSequence]){
    // signal corruption
}
```
Finally, release the library instance:
###### C++:
```cpp
freeMathLib(tMathPtr);
```
###### C#:
```csharp
math.Dispose();
math = null;
// native finalizer is run
```
###### Python:
```python
del math
```
###### Swift:
```swift
math = nil
```
###### Objective-C:
```objc
math = nil;
```