Compare commits

...

No commits in common. "v2.0" and "master" have entirely different histories.
v2.0 ... master

99 changed files with 2763 additions and 2724 deletions

40
.gitignore vendored
View file

@@ -1,10 +1,30 @@
*.iml
.gradle
/local.properties
/.idea
.DS_Store
/build
/captures
.externalNativeBuild
.cxx
local.properties
encode
decode
debug
*.o
11025.ppm
11025.wav
11025.dat
11025.gnu
16000.ppm
16000.wav
16000.dat
16000.gnu
40000.ppm
40000.wav
40000.dat
40000.gnu
44100.ppm
44100.wav
44100.dat
44100.gnu
48000.ppm
48000.wav
48000.dat
48000.gnu
8000.ppm
8000.wav
8000.dat
8000.gnu
????-??-??_??:??:??.ppm
.*.swp

121
COPYING 100644
View file

@@ -0,0 +1,121 @@
Creative Commons Legal Code
CC0 1.0 Universal
CREATIVE COMMONS CORPORATION IS NOT A LAW FIRM AND DOES NOT PROVIDE
LEGAL SERVICES. DISTRIBUTION OF THIS DOCUMENT DOES NOT CREATE AN
ATTORNEY-CLIENT RELATIONSHIP. CREATIVE COMMONS PROVIDES THIS
INFORMATION ON AN "AS-IS" BASIS. CREATIVE COMMONS MAKES NO WARRANTIES
REGARDING THE USE OF THIS DOCUMENT OR THE INFORMATION OR WORKS
PROVIDED HEREUNDER, AND DISCLAIMS LIABILITY FOR DAMAGES RESULTING FROM
THE USE OF THIS DOCUMENT OR THE INFORMATION OR WORKS PROVIDED
HEREUNDER.
Statement of Purpose
The laws of most jurisdictions throughout the world automatically confer
exclusive Copyright and Related Rights (defined below) upon the creator
and subsequent owner(s) (each and all, an "owner") of an original work of
authorship and/or a database (each, a "Work").
Certain owners wish to permanently relinquish those rights to a Work for
the purpose of contributing to a commons of creative, cultural and
scientific works ("Commons") that the public can reliably and without fear
of later claims of infringement build upon, modify, incorporate in other
works, reuse and redistribute as freely as possible in any form whatsoever
and for any purposes, including without limitation commercial purposes.
These owners may contribute to the Commons to promote the ideal of a free
culture and the further production of creative, cultural and scientific
works, or to gain reputation or greater distribution for their Work in
part through the use and efforts of others.
For these and/or other purposes and motivations, and without any
expectation of additional consideration or compensation, the person
associating CC0 with a Work (the "Affirmer"), to the extent that he or she
is an owner of Copyright and Related Rights in the Work, voluntarily
elects to apply CC0 to the Work and publicly distribute the Work under its
terms, with knowledge of his or her Copyright and Related Rights in the
Work and the meaning and intended legal effect of CC0 on those rights.
1. Copyright and Related Rights. A Work made available under CC0 may be
protected by copyright and related or neighboring rights ("Copyright and
Related Rights"). Copyright and Related Rights include, but are not
limited to, the following:
i. the right to reproduce, adapt, distribute, perform, display,
communicate, and translate a Work;
ii. moral rights retained by the original author(s) and/or performer(s);
iii. publicity and privacy rights pertaining to a person's image or
likeness depicted in a Work;
iv. rights protecting against unfair competition in regards to a Work,
subject to the limitations in paragraph 4(a), below;
v. rights protecting the extraction, dissemination, use and reuse of data
in a Work;
vi. database rights (such as those arising under Directive 96/9/EC of the
European Parliament and of the Council of 11 March 1996 on the legal
protection of databases, and under any national implementation
thereof, including any amended or successor version of such
directive); and
vii. other similar, equivalent or corresponding rights throughout the
world based on applicable law or treaty, and any national
implementations thereof.
2. Waiver. To the greatest extent permitted by, but not in contravention
of, applicable law, Affirmer hereby overtly, fully, permanently,
irrevocably and unconditionally waives, abandons, and surrenders all of
Affirmer's Copyright and Related Rights and associated claims and causes
of action, whether now known or unknown (including existing as well as
future claims and causes of action), in the Work (i) in all territories
worldwide, (ii) for the maximum duration provided by applicable law or
treaty (including future time extensions), (iii) in any current or future
medium and for any number of copies, and (iv) for any purpose whatsoever,
including without limitation commercial, advertising or promotional
purposes (the "Waiver"). Affirmer makes the Waiver for the benefit of each
member of the public at large and to the detriment of Affirmer's heirs and
successors, fully intending that such Waiver shall not be subject to
revocation, rescission, cancellation, termination, or any other legal or
equitable action to disrupt the quiet enjoyment of the Work by the public
as contemplated by Affirmer's express Statement of Purpose.
3. Public License Fallback. Should any part of the Waiver for any reason
be judged legally invalid or ineffective under applicable law, then the
Waiver shall be preserved to the maximum extent permitted taking into
account Affirmer's express Statement of Purpose. In addition, to the
extent the Waiver is so judged Affirmer hereby grants to each affected
person a royalty-free, non transferable, non sublicensable, non exclusive,
irrevocable and unconditional license to exercise Affirmer's Copyright and
Related Rights in the Work (i) in all territories worldwide, (ii) for the
maximum duration provided by applicable law or treaty (including future
time extensions), (iii) in any current or future medium and for any number
of copies, and (iv) for any purpose whatsoever, including without
limitation commercial, advertising or promotional purposes (the
"License"). The License shall be deemed effective as of the date CC0 was
applied by Affirmer to the Work. Should any part of the License for any
reason be judged legally invalid or ineffective under applicable law, such
partial invalidity or ineffectiveness shall not invalidate the remainder
of the License, and in such case Affirmer hereby affirms that he or she
will not (i) exercise any of his or her remaining Copyright and Related
Rights in the Work or (ii) assert any associated claims and causes of
action with respect to the Work, in either case contrary to Affirmer's
express Statement of Purpose.
4. Limitations and Disclaimers.
a. No trademark or patent rights held by Affirmer are waived, abandoned,
surrendered, licensed or otherwise affected by this document.
b. Affirmer offers the Work as-is and makes no representations or
warranties of any kind concerning the Work, express, implied,
statutory or otherwise, including without limitation warranties of
title, merchantability, fitness for a particular purpose, non
infringement, or the absence of latent or other defects, accuracy, or
the present or absence of errors, whether or not discoverable, all to
the greatest extent permissible under applicable law.
c. Affirmer disclaims responsibility for clearing rights of other persons
that may apply to the Work or any use thereof, including without
limitation any person's Copyright and Related Rights in the Work.
Further, Affirmer disclaims responsibility for obtaining any necessary
consents, permissions or other rights required for any use of the
Work.
d. Affirmer understands and acknowledges that Creative Commons is not a
party to this document and has no duty or obligation with respect to
this CC0 or use of the Work.

View file

@@ -1,5 +0,0 @@
Copyright (C) 2024 by Ahmet Inan <xdsopl@gmail.com>
Permission to use, copy, modify, and/or distribute this software for any purpose with or without fee is hereby granted.
THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.

34
Makefile 100644
View file

@@ -0,0 +1,34 @@
CFLAGS = -g -DUP=0 -DDN=1 -D_GNU_SOURCE=1 -W -Wall -O3 -std=c99 -fno-math-errno -ffinite-math-only -fno-rounding-math -fno-signaling-nans -fno-trapping-math -fcx-limited-range -fsingle-precision-constant $(shell sdl-config --cflags)
LDLIBS = -lm -lasound $(shell sdl-config --libs)
all: encode decode debug
test: 8000.ppm 11025.ppm 16000.ppm 40000.ppm 44100.ppm 48000.ppm
fun: 8000.gnu 11025.gnu 16000.gnu 40000.gnu 44100.gnu 48000.gnu
clean:
rm -f encode decode debug *.o {8000,11025,16000,40000,44100,48000}.{ppm,wav,dat,gnu}
.PRECIOUS: %.wav %.dat %.ppm %.gnu
%.wav: encode
./encode smpte.ppm $@ $(basename $@)
%.dat: %.wav debug
./debug $< $(basename $<).ppm > $@
%.ppm: %.wav decode
./decode $< $@
%.gnu: %.dat
echo 'plot "$<" u 1:2 w l t "data", "$<" u 1:3 w l t "control", "$<" u 1:4 w l t "sync", "$<" u 1:5 w l t "leader", "$<" u 1:6 w l t "break", "$<" u 1:7 w l t "vis ss", "$<" u 1:8 w l t "vis lo", "$<" u 1:9 w l t "vis hi", "$<" u 1:10 w l t "even", "$<" u 1:11 w l t "odd"' > $@
encode: encode.o mmap_file.o pcm.o wav.o alsa.o yuv.o img.o ppm.o sdl.o
decode: decode.o mmap_file.o pcm.o wav.o alsa.o window.o ddc.o buffer.o yuv.o img.o ppm.o sdl.o
debug: debug.o mmap_file.o pcm.o wav.o alsa.o window.o ddc.o buffer.o yuv.o img.o ppm.o sdl.o

69
README 100644
View file

@@ -0,0 +1,69 @@
robot36 - encode and decode images using SSTV in Robot 36 mode
Written in 2011 by <Ahmet Inan> <xdsopl@googlemail.com>
To the extent possible under law, the author(s) have dedicated all copyright and related and neighboring rights to this software to the public domain worldwide. This software is distributed without any warranty.
You should have received a copy of the CC0 Public Domain Dedication along with this software. If not, see <http://creativecommons.org/publicdomain/zero/1.0/>.
robot36 is written from scratch just for the fun of it.
I have no ham radio and started this project out of curiosity about how people are able to encode and decode FM in very good quality using just a DSP with a DAC or ADC.
Theory of operation:
Robot 36 is one of many SSTV modes; it transfers images using the luminance / chrominance information of the image.
As with other SSTV modes, the information is sent using FM and needs only 800 Hz of bandwidth for data (1500 Hz - 2300 Hz) and 200 Hz for control (1100 Hz - 1300 Hz).
Robot 36 transfers 320x240 color images in around 36 seconds, hence the name Robot 36.
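(Editor's note, a rough check of that figure: assuming the nominal 150 ms per Robot 36 scan line, which is also the default the decoder in this diff falls back to, 240 lines x 0.150 s = 36 s.)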
More information about Robot 36 mode and SSTV can be found on the Internet.
I suggest finding and reading the wonderful "Dayton Paper" by JL Barber (N7CXI).
encode:
Here we simply change the rate of a complex oscillator according to the Y, U and V values we get from the input image and use only the real part of the oscillator as output.
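Editor's sketch of that idea, not the repository's encoder: an oscillator whose instantaneous frequency tracks the pixel level, with only its real part written out. The sample rate and pixel levels below are placeholder values.

public final class FmEncodeSketch {
	public static void main(String[] args) {
		int sampleRate = 8000;                          // placeholder output rate
		float[] levels = {0.0f, 0.25f, 0.5f, 1.0f};     // placeholder pixel levels in [0, 1]
		double phase = 0;
		for (float level : levels) {
			double freq = 1500 + 800 * level;           // map the level into the 1500..2300 Hz data range
			phase += 2 * Math.PI * freq / sampleRate;   // advance the oscillator
			if (phase > Math.PI)
				phase -= 2 * Math.PI;                   // keep the phase bounded
			short sample = (short) Math.round(Short.MAX_VALUE * Math.cos(phase));
			System.out.println(sample);                 // stand-in for writing a PCM sample
		}
	}
}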
decode:
FM demodulation is not so easy. After many frustrating attempts to emulate hardware and playing around with zero-crossing detection and phase-locked loop detectors, I finally found a very nice way to do it:
Using the Hilbert transform we get a complex-valued function from a real-valued function, which we differentiate in time using polar coordinates, taking the instantaneous frequency from the argument.
Doing the Hilbert transform in discrete space for this purpose is also known as digital down conversion.
My DDC consists of a complex-valued decimating ideal FIR filter using a Kaiser window at its input and a complex oscillator mixer at its output.
You can find a lot more about DDCs and FM demodulation on the Internet.
I suggest finding and reading the enlightening "Virtual Radios" paper by Vanu Bose, Michael Ismert, Matt Welborn, and John Guttag.
You should also look at GNU Radio: http://gnuradio.org/ and at the invaluable information at dspGuru: http://www.dspguru.com/
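Editor's sketch of the demodulation core described above, not the repository's decoder: it assumes the signal has already been mixed down and low-pass filtered into a complex baseband, then differentiates and unwraps the phase of that analytic signal to get the instantaneous frequency.

public final class FmDemodSketch {
	static double[] demodulate(double[] re, double[] im, double sampleRate) {
		double[] hertz = new double[re.length];
		double prev = 0;
		for (int i = 0; i < re.length; i++) {
			double phase = Math.atan2(im[i], re[i]);    // argument of the analytic signal
			double delta = phase - prev;                // discrete derivative of the phase
			prev = phase;
			if (delta > Math.PI)                        // unwrap across the -pi/+pi seam
				delta -= 2 * Math.PI;
			else if (delta < -Math.PI)
				delta += 2 * Math.PI;
			hertz[i] = delta * sampleRate / (2 * Math.PI);
		}
		return hertz;
	}

	public static void main(String[] args) {
		int rate = 8000;
		double[] re = new double[rate], im = new double[rate];
		for (int i = 0; i < rate; i++) {                // synthetic 1200 Hz analytic tone
			re[i] = Math.cos(2 * Math.PI * 1200 * i / rate);
			im[i] = Math.sin(2 * Math.PI * 1200 * i / rate);
		}
		System.out.println(demodulate(re, im, rate)[100]); // prints approximately 1200.0
	}
}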
smpte.ppm is converted from http://en.wikipedia.org/wiki/File:SMPTE_Color_Bars.svg
compile everything:
# make
test encode and decode using smpte.ppm and various rates:
# make test
remove generated files:
# make clean
listen to default alsa device and write out ppm images with %F-%T.ppm as file name:
# ./decode
listen to alsa device plughw:0,0 and write out ppm images with %F-%T.ppm as file name:
# ./decode plughw:0,0
listen to default alsa device and show image in sdl window:
# ./decode default sdl:
read from wav file input.wav and write out to ppm image output.ppm:
# ./decode input.wav output.ppm
encode ppm image input.ppm to output.wav using a rate of 40000 Hz:
# ./encode input.ppm output.wav 40000
encode ppm image input.ppm and write out to the default alsa device:
# ./encode input.ppm
last but not least, have fun with a debugging session:
# make fun
now ppm files have debugging pixels and show raw data like this:
https://sites.google.com/site/xdsopl/home/robot36_raw_image.png
you can look at the signal analysis using gnuplot:
# gnuplot
gnuplot> load "8000.gnu"
this should give you an output like this:
https://sites.google.com/site/xdsopl/home/robot36_signal_analysis.png

View file

@@ -1,9 +0,0 @@
### Robot36: The Java Cut
This is not a drill!
Get ready to beam in a whole new Robot36, rebuilt from the ground up in pure Java! Just like that nostalgic reboot of your favorite childhood show (but hopefully better written!), things might be a little different this time around. There will be glitches, there will be bugs, but hey, that's the beauty of live transmission, right? Hold on to your spacesuits, folks, it's gonna be a bumpy ride!
Stay tuned for further transmissions...

223
alsa.c 100644
View file

@@ -0,0 +1,223 @@
/*
robot36 - encode and decode images using SSTV in Robot 36 mode
Written in 2011 by <Ahmet Inan> <xdsopl@googlemail.com>
To the extent possible under law, the author(s) have dedicated all copyright and related and neighboring rights to this software to the public domain worldwide. This software is distributed without any warranty.
You should have received a copy of the CC0 Public Domain Dedication along with this software. If not, see <http://creativecommons.org/publicdomain/zero/1.0/>.
*/
#include <stdio.h>
#include <stdlib.h>
#include <alsa/asoundlib.h>
#include "alsa.h"
struct alsa {
struct pcm base;
snd_pcm_t *pcm;
int index;
int frames;
int r;
int c;
};
void close_alsa(struct pcm *pcm)
{
struct alsa *alsa = (struct alsa *)(pcm->data);
snd_pcm_drain(alsa->pcm);
snd_pcm_close(alsa->pcm);
free(alsa);
}
void info_alsa(struct pcm *pcm)
{
struct alsa *alsa = (struct alsa *)(pcm->data);
if (alsa->frames)
fprintf(stderr, "%d channel(s), %d rate, %.2f seconds\n", alsa->c, alsa->r, (float)alsa->frames / (float)alsa->r);
else
fprintf(stderr, "%d channel(s), %d rate\n", alsa->c, alsa->r);
}
int rate_alsa(struct pcm *pcm)
{
struct alsa *alsa = (struct alsa *)(pcm->data);
return alsa->r;
}
int channels_alsa(struct pcm *pcm)
{
struct alsa *alsa = (struct alsa *)(pcm->data);
return alsa->c;
}
int read_alsa(struct pcm *pcm, short *buff, int frames)
{
struct alsa *alsa = (struct alsa *)(pcm->data);
int got = 0;
while (0 < frames) {
while ((got = snd_pcm_readi(alsa->pcm, buff, frames)) < 0)
if (snd_pcm_prepare(alsa->pcm) < 0)
return 0;
buff += got * alsa->c;
frames -= got;
}
return 1;
}
int write_alsa(struct pcm *pcm, short *buff, int frames)
{
struct alsa *alsa = (struct alsa *)(pcm->data);
if (alsa->frames && (alsa->index + frames) > alsa->frames)
return 0;
alsa->index += frames;
int got = 0;
while (0 < frames) {
while ((got = snd_pcm_writei(alsa->pcm, buff, frames)) < 0)
if (snd_pcm_prepare(alsa->pcm) < 0)
return 0;
buff += got * alsa->c;
frames -= got;
}
return 1;
}
int open_alsa_read(struct pcm **p, char *name)
{
snd_pcm_t *pcm;
if (snd_pcm_open(&pcm, name, SND_PCM_STREAM_CAPTURE, 0) < 0) {
fprintf(stderr, "Error opening PCM device %s\n", name);
return 0;
}
snd_pcm_hw_params_t *params;
snd_pcm_hw_params_alloca(&params);
if (snd_pcm_hw_params_any(pcm, params) < 0) {
fprintf(stderr, "Can not configure this PCM device.\n");
snd_pcm_close(pcm);
return 0;
}
if (snd_pcm_hw_params_set_access(pcm, params, SND_PCM_ACCESS_RW_INTERLEAVED) < 0) {
fprintf(stderr, "Error setting access.\n");
snd_pcm_close(pcm);
return 0;
}
if (snd_pcm_hw_params_set_format(pcm, params, SND_PCM_FORMAT_S16_LE) < 0) {
fprintf(stderr, "Error setting S16_LE format.\n");
snd_pcm_close(pcm);
return 0;
}
if (snd_pcm_hw_params_set_rate_resample(pcm, params, 0) < 0) {
fprintf(stderr, "Error disabling resampling.\n");
snd_pcm_close(pcm);
return 0;
}
unsigned rate_min = 8000;
int dir_min = 0;
if (snd_pcm_hw_params_set_rate_min(pcm, params, &rate_min, &dir_min) < 0 || rate_min < 8000) {
fprintf(stderr, "Error setting min rate.\n");
snd_pcm_close(pcm);
return 0;
}
if (snd_pcm_hw_params(pcm, params) < 0) {
fprintf(stderr, "Error setting HW params.\n");
snd_pcm_close(pcm);
return 0;
}
unsigned int rate = 0;
if (snd_pcm_hw_params_get_rate(params, &rate, 0) < 0) {
fprintf(stderr, "Error getting rate.\n");
snd_pcm_close(pcm);
return 0;
}
unsigned int channels = 0;
if (snd_pcm_hw_params_get_channels(params, &channels) < 0) {
fprintf(stderr, "Error getting channels.\n");
snd_pcm_close(pcm);
return 0;
}
struct alsa *alsa = (struct alsa *)malloc(sizeof(struct alsa));
alsa->base.close = close_alsa;
alsa->base.info = info_alsa;
alsa->base.rate = rate_alsa;
alsa->base.channels = channels_alsa;
alsa->base.rw = read_alsa;
alsa->base.data = (void *)alsa;
alsa->pcm = pcm;
alsa->r = rate;
alsa->c = channels;
alsa->frames = 0;
*p = &(alsa->base);
return 1;
}
int open_alsa_write(struct pcm **p, char *name, int rate, int channels, float seconds)
{
snd_pcm_t *pcm;
if (snd_pcm_open(&pcm, name, SND_PCM_STREAM_PLAYBACK, 0) < 0) {
fprintf(stderr, "Error opening PCM device %s\n", name);
return 0;
}
snd_pcm_hw_params_t *params;
snd_pcm_hw_params_alloca(&params);
if (snd_pcm_hw_params_any(pcm, params) < 0) {
fprintf(stderr, "Can not configure this PCM device.\n");
snd_pcm_close(pcm);
return 0;
}
if (snd_pcm_hw_params_set_access(pcm, params, SND_PCM_ACCESS_RW_INTERLEAVED) < 0) {
fprintf(stderr, "Error setting access.\n");
snd_pcm_close(pcm);
return 0;
}
if (snd_pcm_hw_params_set_format(pcm, params, SND_PCM_FORMAT_S16_LE) < 0) {
fprintf(stderr, "Error setting S16_LE format.\n");
snd_pcm_close(pcm);
return 0;
}
if (snd_pcm_hw_params_set_rate_resample(pcm, params, 0) < 0) {
fprintf(stderr, "Error disabling resampling.\n");
snd_pcm_close(pcm);
return 0;
}
if (snd_pcm_hw_params_set_rate_near(pcm, params, (unsigned int *)&rate, 0) < 0) {
fprintf(stderr, "Error setting rate.\n");
snd_pcm_close(pcm);
return 0;
}
if (snd_pcm_hw_params_set_channels_near(pcm, params, (unsigned int *)&channels) < 0) {
fprintf(stderr, "Error setting channels.\n");
snd_pcm_close(pcm);
return 0;
}
if (snd_pcm_hw_params(pcm, params) < 0) {
fprintf(stderr, "Error setting HW params.\n");
snd_pcm_close(pcm);
return 0;
}
struct alsa *alsa = (struct alsa *)malloc(sizeof(struct alsa));
alsa->base.close = close_alsa;
alsa->base.info = info_alsa;
alsa->base.rate = rate_alsa;
alsa->base.channels = channels_alsa;
alsa->base.rw = write_alsa;
alsa->base.data = (void *)alsa;
alsa->pcm = pcm;
alsa->r = rate;
alsa->c = channels;
alsa->frames = seconds * rate;
alsa->index = 0;
*p = &(alsa->base);
return 1;
}

15
alsa.h 100644
View file

@@ -0,0 +1,15 @@
/*
robot36 - encode and decode images using SSTV in Robot 36 mode
Written in 2011 by <Ahmet Inan> <xdsopl@googlemail.com>
To the extent possible under law, the author(s) have dedicated all copyright and related and neighboring rights to this software to the public domain worldwide. This software is distributed without any warranty.
You should have received a copy of the CC0 Public Domain Dedication along with this software. If not, see <http://creativecommons.org/publicdomain/zero/1.0/>.
*/
#ifndef ALSA_H
#define ALSA_H
#include "pcm.h"
int open_alsa_read(struct pcm **, char *);
int open_alsa_write(struct pcm **, char *, int, int, float);
#endif

1
app/.gitignore vendored
View file

@@ -1 +0,0 @@
/build

View file

@@ -1,44 +0,0 @@
plugins {
alias(libs.plugins.androidApplication)
}
android {
namespace 'xdsopl.robot36'
compileSdk 34
defaultConfig {
applicationId "xdsopl.robot36"
minSdk 24
targetSdk 34
versionCode 50
versionName "2.0"
testInstrumentationRunner "androidx.test.runner.AndroidJUnitRunner"
}
buildFeatures {
buildConfig true
}
buildTypes {
release {
minifyEnabled false
proguardFiles getDefaultProguardFile('proguard-android-optimize.txt'), 'proguard-rules.pro'
}
}
compileOptions {
sourceCompatibility JavaVersion.VERSION_1_8
targetCompatibility JavaVersion.VERSION_1_8
}
}
dependencies {
implementation libs.appcompat
implementation libs.material
implementation libs.activity
implementation libs.constraintlayout
testImplementation libs.junit
androidTestImplementation libs.ext.junit
androidTestImplementation libs.espresso.core
}

View file

@@ -1,21 +0,0 @@
# Add project specific ProGuard rules here.
# You can control the set of applied configuration files using the
# proguardFiles setting in build.gradle.
#
# For more details, see
# http://developer.android.com/guide/developing/tools/proguard.html
# If your project uses WebView with JS, uncomment the following
# and specify the fully qualified class name to the JavaScript interface
# class:
#-keepclassmembers class fqcn.of.javascript.interface.for.webview {
# public *;
#}
# Uncomment this to preserve the line number information for
# debugging stack traces.
#-keepattributes SourceFile,LineNumberTable
# If you keep the line number information, uncomment this to
# hide the original source file name.
#-renamesourcefileattribute SourceFile

View file

@@ -1,26 +0,0 @@
package xdsopl.robot36;
import android.content.Context;
import androidx.test.platform.app.InstrumentationRegistry;
import androidx.test.ext.junit.runners.AndroidJUnit4;
import org.junit.Test;
import org.junit.runner.RunWith;
import static org.junit.Assert.*;
/**
* Instrumented test, which will execute on an Android device.
*
* @see <a href="http://d.android.com/tools/testing">Testing documentation</a>
*/
@RunWith(AndroidJUnit4.class)
public class ExampleInstrumentedTest {
@Test
public void useAppContext() {
// Context of the app under test.
Context appContext = InstrumentationRegistry.getInstrumentation().getTargetContext();
assertEquals("xdsopl.robot36", appContext.getPackageName());
}
}

View file

@@ -1,29 +0,0 @@
<?xml version="1.0" encoding="utf-8"?>
<manifest xmlns:android="http://schemas.android.com/apk/res/android"
xmlns:tools="http://schemas.android.com/tools">
<uses-permission android:name="android.permission.RECORD_AUDIO" />
<application
android:allowBackup="true"
android:dataExtractionRules="@xml/data_extraction_rules"
android:fullBackupContent="@xml/backup_rules"
android:icon="@mipmap/ic_launcher"
android:label="@string/app_name"
android:roundIcon="@mipmap/ic_launcher_round"
android:supportsRtl="true"
android:theme="@style/Theme.Robot36"
tools:targetApi="31">
<activity
android:name=".MainActivity"
android:configChanges="orientation|screenSize|screenLayout|keyboardHidden"
android:exported="true"
android:launchMode="singleTask">
<intent-filter>
<action android:name="android.intent.action.MAIN" />
<category android:name="android.intent.category.LAUNCHER" />
</intent-filter>
</activity>
</application>
</manifest>

View file

@@ -1,54 +0,0 @@
/*
Color converter
Copyright 2024 Ahmet Inan <xdsopl@gmail.com>
*/
package xdsopl.robot36;
public final class ColorConverter {
private static int clamp(int value) {
return Math.min(Math.max(value, 0), 255);
}
private static float clamp(float value) {
return Math.min(Math.max(value, 0), 1);
}
private static int float2int(float level) {
int intensity = Math.round(255 * level);
return clamp(intensity);
}
private static int compress(float level) {
float compressed = (float) Math.sqrt(clamp(level));
return float2int(compressed);
}
private static int YUV2RGB(int Y, int U, int V) {
Y -= 16;
U -= 128;
V -= 128;
int R = clamp((298 * Y + 409 * V + 128) >> 8);
int G = clamp((298 * Y - 100 * U - 208 * V + 128) >> 8);
int B = clamp((298 * Y + 516 * U + 128) >> 8);
return 0xff000000 | (R << 16) | (G << 8) | B;
}
public static int GRAY(float level) {
return 0xff000000 | 0x00010101 * compress(level);
}
public static int RGB(float red, float green, float blue) {
return 0xff000000 | (float2int(red) << 16) | (float2int(green) << 8) | float2int(blue);
}
public static int YUV2RGB(float Y, float U, float V) {
return YUV2RGB(float2int(Y), float2int(U), float2int(V));
}
public static int YUV2RGB(int YUV) {
return YUV2RGB((YUV & 0x00ff0000) >> 16, (YUV & 0x0000ff00) >> 8, YUV & 0x000000ff);
}
}

View file

@@ -1,101 +0,0 @@
/*
Complex math
Copyright 2024 Ahmet Inan <xdsopl@gmail.com>
*/
package xdsopl.robot36;
@SuppressWarnings("unused")
public class Complex {
public float real, imag;
Complex() {
real = 0;
imag = 0;
}
Complex(float real, float imag) {
this.real = real;
this.imag = imag;
}
Complex set(Complex other) {
real = other.real;
imag = other.imag;
return this;
}
@SuppressWarnings("SameParameterValue")
Complex set(float real, float imag) {
this.real = real;
this.imag = imag;
return this;
}
Complex set(float real) {
return set(real, 0);
}
float norm() {
return real * real + imag * imag;
}
float abs() {
return (float) Math.sqrt(norm());
}
float arg() {
return (float) Math.atan2(imag, real);
}
Complex polar(float a, float b) {
real = a * (float) Math.cos(b);
imag = a * (float) Math.sin(b);
return this;
}
Complex conj() {
imag = -imag;
return this;
}
Complex add(Complex other) {
real += other.real;
imag += other.imag;
return this;
}
Complex sub(Complex other) {
real -= other.real;
imag -= other.imag;
return this;
}
Complex mul(float value) {
real *= value;
imag *= value;
return this;
}
Complex mul(Complex other) {
float tmp = real * other.real - imag * other.imag;
imag = real * other.imag + imag * other.real;
real = tmp;
return this;
}
Complex div(float value) {
real /= value;
imag /= value;
return this;
}
Complex div(Complex other) {
float den = other.norm();
float tmp = (real * other.real + imag * other.imag) / den;
imag = (imag * other.real - real * other.imag) / den;
real = tmp;
return this;
}
}

View file

@@ -1,41 +0,0 @@
/*
Complex Convolution
Copyright 2024 Ahmet Inan <xdsopl@gmail.com>
*/
package xdsopl.robot36;
public class ComplexConvolution {
public final int length;
public final float[] taps;
private final float[] real;
private final float[] imag;
private final Complex sum;
private int pos;
ComplexConvolution(int length) {
this.length = length;
this.taps = new float[length];
this.real = new float[length];
this.imag = new float[length];
this.sum = new Complex();
this.pos = 0;
}
Complex push(Complex input) {
real[pos] = input.real;
imag[pos] = input.imag;
if (++pos >= length)
pos = 0;
sum.real = 0;
sum.imag = 0;
for (float tap : taps) {
sum.real += tap * real[pos];
sum.imag += tap * imag[pos];
if (++pos >= length)
pos = 0;
}
return sum;
}
}

View file

@@ -1,255 +0,0 @@
/*
SSTV Decoder
Copyright 2024 Ahmet Inan <xdsopl@gmail.com>
*/
package xdsopl.robot36;
import java.util.ArrayList;
import java.util.Arrays;
public class Decoder {
private final Demodulator demodulator;
private final PixelBuffer pixelBuffer;
private final PixelBuffer scopeBuffer;
private final float[] scanLineBuffer;
private final float[] scratchBuffer;
private final int[] last5msSyncPulses;
private final int[] last9msSyncPulses;
private final int[] last20msSyncPulses;
private final int[] last5msScanLines;
private final int[] last9msScanLines;
private final int[] last20msScanLines;
private final float[] last5msFrequencyOffsets;
private final float[] last9msFrequencyOffsets;
private final float[] last20msFrequencyOffsets;
private final int scanLineReserveSamples;
private final int syncPulseToleranceSamples;
private final int scanLineToleranceSamples;
private final Mode rawMode;
private final ArrayList<Mode> syncPulse5msModes;
private final ArrayList<Mode> syncPulse9msModes;
private final ArrayList<Mode> syncPulse20msModes;
public Mode lastMode;
private int curSample;
private int lastSyncPulseIndex;
private int lastScanLineSamples;
private float lastFrequencyOffset;
Decoder(PixelBuffer scopeBuffer, int sampleRate) {
this.scopeBuffer = scopeBuffer;
pixelBuffer = new PixelBuffer(scopeBuffer.width, 2);
demodulator = new Demodulator(sampleRate);
double scanLineMaxSeconds = 7;
int scanLineMaxSamples = (int) Math.round(scanLineMaxSeconds * sampleRate);
scanLineBuffer = new float[scanLineMaxSamples];
double scratchBufferSeconds = 1.1;
int scratchBufferSamples = (int) Math.round(scratchBufferSeconds * sampleRate);
scratchBuffer = new float[scratchBufferSamples];
int scanLineCount = 4;
last5msScanLines = new int[scanLineCount];
last9msScanLines = new int[scanLineCount];
last20msScanLines = new int[scanLineCount];
int syncPulseCount = scanLineCount + 1;
last5msSyncPulses = new int[syncPulseCount];
last9msSyncPulses = new int[syncPulseCount];
last20msSyncPulses = new int[syncPulseCount];
last5msFrequencyOffsets = new float[syncPulseCount];
last9msFrequencyOffsets = new float[syncPulseCount];
last20msFrequencyOffsets = new float[syncPulseCount];
double syncPulseToleranceSeconds = 0.03;
syncPulseToleranceSamples = (int) Math.round(syncPulseToleranceSeconds * sampleRate);
double scanLineToleranceSeconds = 0.001;
scanLineToleranceSamples = (int) Math.round(scanLineToleranceSeconds * sampleRate);
scanLineReserveSamples = sampleRate;
rawMode = new RawDecoder(sampleRate);
lastMode = rawMode;
lastScanLineSamples = (int) Math.round(0.150 * sampleRate);
syncPulse5msModes = new ArrayList<>();
syncPulse5msModes.add(RGBModes.Wraase_SC2_180(sampleRate));
syncPulse5msModes.add(RGBModes.Martin("1", 0.146432, sampleRate));
syncPulse5msModes.add(RGBModes.Martin("2", 0.073216, sampleRate));
syncPulse9msModes = new ArrayList<>();
syncPulse9msModes.add(new Robot_36_Color(sampleRate));
syncPulse9msModes.add(new Robot_72_Color(sampleRate));
syncPulse9msModes.add(RGBModes.Scottie("1", 0.138240, sampleRate));
syncPulse9msModes.add(RGBModes.Scottie("2", 0.088064, sampleRate));
syncPulse9msModes.add(RGBModes.Scottie("DX", 0.3456, sampleRate));
syncPulse20msModes = new ArrayList<>();
syncPulse20msModes.add(new PaulDon("50", 320, 0.09152, sampleRate));
syncPulse20msModes.add(new PaulDon("90", 320, 0.17024, sampleRate));
syncPulse20msModes.add(new PaulDon("120", 640, 0.1216, sampleRate));
syncPulse20msModes.add(new PaulDon("160", 512, 0.195584, sampleRate));
syncPulse20msModes.add(new PaulDon("180", 640, 0.18304, sampleRate));
syncPulse20msModes.add(new PaulDon("240", 640, 0.24448, sampleRate));
syncPulse20msModes.add(new PaulDon("290", 640, 0.2288, sampleRate));
}
private void adjustSyncPulses(int[] pulses, int shift) {
for (int i = 0; i < pulses.length; ++i)
pulses[i] -= shift;
}
private double scanLineMean(int[] lines) {
double mean = 0;
for (int diff : lines)
mean += diff;
mean /= lines.length;
return mean;
}
private double scanLineStdDev(int[] lines, double mean) {
double stdDev = 0;
for (int diff : lines)
stdDev += (diff - mean) * (diff - mean);
stdDev = Math.sqrt(stdDev / lines.length);
return stdDev;
}
private double frequencyOffsetMean(float[] offsets) {
double mean = 0;
for (float diff : offsets)
mean += diff;
mean /= offsets.length;
return mean;
}
private Mode detectMode(ArrayList<Mode> modes, int line) {
Mode bestMode = rawMode;
int bestDist = Integer.MAX_VALUE;
for (Mode mode : modes) {
int dist = Math.abs(line - mode.getScanLineSamples());
if (dist <= scanLineToleranceSamples && dist < bestDist) {
bestDist = dist;
bestMode = mode;
}
}
return bestMode;
}
private void copyUnscaled() {
for (int row = 0; row < pixelBuffer.height; ++row) {
int line = scopeBuffer.width * scopeBuffer.line;
System.arraycopy(pixelBuffer.pixels, row * pixelBuffer.width, scopeBuffer.pixels, line, pixelBuffer.width);
Arrays.fill(scopeBuffer.pixels, line + pixelBuffer.width, line + scopeBuffer.width, 0);
System.arraycopy(scopeBuffer.pixels, line, scopeBuffer.pixels, scopeBuffer.width * (scopeBuffer.line + scopeBuffer.height / 2), scopeBuffer.width);
scopeBuffer.line = (scopeBuffer.line + 1) % (scopeBuffer.height / 2);
}
}
private void copyScaled(int scale) {
for (int row = 0; row < pixelBuffer.height; ++row) {
int line = scopeBuffer.width * scopeBuffer.line;
for (int col = 0; col < pixelBuffer.width; ++col)
for (int i = 0; i < scale; ++i)
scopeBuffer.pixels[line + col * scale + i] = pixelBuffer.pixels[pixelBuffer.width * row + col];
Arrays.fill(scopeBuffer.pixels, line + pixelBuffer.width * scale, line + scopeBuffer.width, 0);
System.arraycopy(scopeBuffer.pixels, line, scopeBuffer.pixels, scopeBuffer.width * (scopeBuffer.line + scopeBuffer.height / 2), scopeBuffer.width);
scopeBuffer.line = (scopeBuffer.line + 1) % (scopeBuffer.height / 2);
for (int i = 0; i < scale; ++i) {
System.arraycopy(scopeBuffer.pixels, line, scopeBuffer.pixels, scopeBuffer.width * scopeBuffer.line, scopeBuffer.width);
System.arraycopy(scopeBuffer.pixels, line, scopeBuffer.pixels, scopeBuffer.width * (scopeBuffer.line + scopeBuffer.height / 2), scopeBuffer.width);
scopeBuffer.line = (scopeBuffer.line + 1) % (scopeBuffer.height / 2);
}
}
}
private void copyLines(boolean okay) {
if (!okay || pixelBuffer.width <= 0)
return;
int scale = scopeBuffer.width / pixelBuffer.width;
if (scale == 1)
copyUnscaled();
else
copyScaled(scale);
}
private boolean processSyncPulse(ArrayList<Mode> modes, float[] freqOffs, int[] pulses, int[] lines, int index) {
for (int i = 1; i < lines.length; ++i)
lines[i - 1] = lines[i];
lines[lines.length - 1] = index - pulses[pulses.length - 1];
for (int i = 1; i < pulses.length; ++i)
pulses[i - 1] = pulses[i];
pulses[pulses.length - 1] = index;
for (int i = 1; i < freqOffs.length; ++i)
freqOffs[i - 1] = freqOffs[i];
freqOffs[pulses.length - 1] = demodulator.frequencyOffset;
if (lines[0] == 0)
return false;
double mean = scanLineMean(lines);
int scanLineSamples = (int) Math.round(mean);
if (scanLineSamples > scratchBuffer.length)
return false;
if (scanLineStdDev(lines, mean) > scanLineToleranceSamples)
return false;
float frequencyOffset = (float) frequencyOffsetMean(freqOffs);
Mode mode = detectMode(modes, scanLineSamples);
boolean pictureChanged = lastMode != mode
|| Math.abs(lastScanLineSamples - scanLineSamples) > scanLineToleranceSamples
|| Math.abs(lastSyncPulseIndex + scanLineSamples - pulses[pulses.length - 1]) > syncPulseToleranceSamples;
pixelBuffer.width = scopeBuffer.width;
if (pulses[0] >= scanLineSamples && pictureChanged) {
int endPulse = pulses[0];
int extrapolate = endPulse / scanLineSamples;
int firstPulse = endPulse - extrapolate * scanLineSamples;
for (int pulseIndex = firstPulse; pulseIndex < endPulse; pulseIndex += scanLineSamples)
copyLines(mode.decodeScanLine(pixelBuffer, scratchBuffer, scanLineBuffer, pulseIndex, scanLineSamples, frequencyOffset));
}
for (int i = pictureChanged ? 0 : lines.length - 1; i < lines.length; ++i)
copyLines(mode.decodeScanLine(pixelBuffer, scratchBuffer, scanLineBuffer, pulses[i], lines[i], frequencyOffset));
int shift = pulses[pulses.length - 1] - scanLineReserveSamples;
if (shift > scanLineReserveSamples) {
adjustSyncPulses(last5msSyncPulses, shift);
adjustSyncPulses(last9msSyncPulses, shift);
adjustSyncPulses(last20msSyncPulses, shift);
int endSample = curSample;
curSample = 0;
for (int i = shift; i < endSample; ++i)
scanLineBuffer[curSample++] = scanLineBuffer[i];
}
lastMode = mode;
lastSyncPulseIndex = pulses[pulses.length - 1];
lastScanLineSamples = scanLineSamples;
lastFrequencyOffset = frequencyOffset;
return true;
}
public boolean process(float[] recordBuffer, int channelSelect) {
boolean syncPulseDetected = demodulator.process(recordBuffer, channelSelect);
int syncPulseIndex = curSample + demodulator.syncPulseOffset;
int channels = channelSelect > 0 ? 2 : 1;
for (int j = 0; j < recordBuffer.length / channels; ++j) {
scanLineBuffer[curSample++] = recordBuffer[j];
if (curSample >= scanLineBuffer.length) {
int shift = scanLineReserveSamples;
syncPulseIndex -= shift;
lastSyncPulseIndex -= shift;
adjustSyncPulses(last5msSyncPulses, shift);
adjustSyncPulses(last9msSyncPulses, shift);
adjustSyncPulses(last20msSyncPulses, shift);
curSample = 0;
for (int i = shift; i < scanLineBuffer.length; ++i)
scanLineBuffer[curSample++] = scanLineBuffer[i];
}
}
if (syncPulseDetected) {
switch (demodulator.syncPulseWidth) {
case FiveMilliSeconds:
return processSyncPulse(syncPulse5msModes, last5msFrequencyOffsets, last5msSyncPulses, last5msScanLines, syncPulseIndex);
case NineMilliSeconds:
return processSyncPulse(syncPulse9msModes, last9msFrequencyOffsets, last9msSyncPulses, last9msScanLines, syncPulseIndex);
case TwentyMilliSeconds:
return processSyncPulse(syncPulse20msModes, last20msFrequencyOffsets, last20msSyncPulses, last20msScanLines, syncPulseIndex);
}
} else if (lastSyncPulseIndex >= scanLineReserveSamples && curSample > lastSyncPulseIndex + (lastScanLineSamples * 5) / 4) {
pixelBuffer.width = scopeBuffer.width;
copyLines(lastMode.decodeScanLine(pixelBuffer, scratchBuffer, scanLineBuffer, lastSyncPulseIndex, lastScanLineSamples, lastFrequencyOffset));
lastSyncPulseIndex += lastScanLineSamples;
return true;
}
return false;
}
}

View file

@@ -1,27 +0,0 @@
/*
Digital delay line
Copyright 2024 Ahmet Inan <xdsopl@gmail.com>
*/
package xdsopl.robot36;
public class Delay {
public final int length;
private final float[] buf;
private int pos;
Delay(int length) {
this.length = length;
this.buf = new float[length];
this.pos = 0;
}
float push(float input) {
float tmp = buf[pos];
buf[pos] = input;
if (++pos >= length)
pos = 0;
return tmp;
}
}

View file

@@ -1,124 +0,0 @@
/*
SSTV Demodulator
Copyright 2024 Ahmet Inan <xdsopl@gmail.com>
*/
package xdsopl.robot36;
public class Demodulator {
private final SimpleMovingAverage syncPulseFilter;
private final ComplexConvolution baseBandLowPass;
private final FrequencyModulation frequencyModulation;
private final SchmittTrigger syncPulseTrigger;
private final Phasor baseBandOscillator;
private final Delay syncPulseValueDelay;
private final float syncPulseFrequencyValue;
private final float syncPulseFrequencyTolerance;
private final int syncPulse5msMinSamples;
private final int syncPulse5msMaxSamples;
private final int syncPulse9msMaxSamples;
private final int syncPulse20msMaxSamples;
private final int syncPulseFilterDelay;
private int syncPulseCounter;
private Complex baseBand;
public enum SyncPulseWidth {
FiveMilliSeconds,
NineMilliSeconds,
TwentyMilliSeconds
}
public SyncPulseWidth syncPulseWidth;
public int syncPulseOffset;
public float frequencyOffset;
Demodulator(int sampleRate) {
double blackFrequency = 1500;
double whiteFrequency = 2300;
double scanLineBandwidth = whiteFrequency - blackFrequency;
frequencyModulation = new FrequencyModulation(scanLineBandwidth, sampleRate);
double syncPulse5msSeconds = 0.005;
double syncPulse9msSeconds = 0.009;
double syncPulse20msSeconds = 0.020;
double syncPulse5msMinSeconds = syncPulse5msSeconds / 2;
double syncPulse5msMaxSeconds = (syncPulse5msSeconds + syncPulse9msSeconds) / 2;
double syncPulse9msMaxSeconds = (syncPulse9msSeconds + syncPulse20msSeconds) / 2;
double syncPulse20msMaxSeconds = syncPulse20msSeconds + syncPulse5msSeconds;
syncPulse5msMinSamples = (int) Math.round(syncPulse5msMinSeconds * sampleRate);
syncPulse5msMaxSamples = (int) Math.round(syncPulse5msMaxSeconds * sampleRate);
syncPulse9msMaxSamples = (int) Math.round(syncPulse9msMaxSeconds * sampleRate);
syncPulse20msMaxSamples = (int) Math.round(syncPulse20msMaxSeconds * sampleRate);
double syncPulseFilterSeconds = syncPulse5msSeconds / 2;
int syncPulseFilterSamples = (int) Math.round(syncPulseFilterSeconds * sampleRate) | 1;
syncPulseFilterDelay = (syncPulseFilterSamples - 1) / 2;
syncPulseFilter = new SimpleMovingAverage(syncPulseFilterSamples);
syncPulseValueDelay = new Delay(syncPulseFilterSamples);
double lowestFrequency = 1000;
double highestFrequency = 2800;
double cutoffFrequency = (highestFrequency - lowestFrequency) / 2;
double baseBandLowPassSeconds = 0.002;
int baseBandLowPassSamples = (int) Math.round(baseBandLowPassSeconds * sampleRate) | 1;
baseBandLowPass = new ComplexConvolution(baseBandLowPassSamples);
Kaiser kaiser = new Kaiser();
for (int i = 0; i < baseBandLowPass.length; ++i)
baseBandLowPass.taps[i] = (float) (kaiser.window(2.0, i, baseBandLowPass.length) * Filter.lowPass(cutoffFrequency, sampleRate, i, baseBandLowPass.length));
double centerFrequency = (lowestFrequency + highestFrequency) / 2;
baseBandOscillator = new Phasor(-centerFrequency, sampleRate);
double syncPulseFrequency = 1200;
syncPulseFrequencyValue = (float) ((syncPulseFrequency - centerFrequency) * 2 / scanLineBandwidth);
syncPulseFrequencyTolerance = (float) (50 * 2 / scanLineBandwidth);
double syncPorchFrequency = 1500;
double syncHighFrequency = (syncPulseFrequency + syncPorchFrequency) / 2;
double syncLowFrequency = (syncPulseFrequency + syncHighFrequency) / 2;
double syncLowValue = (syncLowFrequency - centerFrequency) * 2 / scanLineBandwidth;
double syncHighValue = (syncHighFrequency - centerFrequency) * 2 / scanLineBandwidth;
syncPulseTrigger = new SchmittTrigger((float) syncLowValue, (float) syncHighValue);
baseBand = new Complex();
}
public boolean process(float[] buffer, int channelSelect) {
boolean syncPulseDetected = false;
int channels = channelSelect > 0 ? 2 : 1;
for (int i = 0; i < buffer.length / channels; ++i) {
switch (channelSelect) {
case 1:
baseBand.set(buffer[2 * i]);
break;
case 2:
baseBand.set(buffer[2 * i + 1]);
break;
case 3:
baseBand.set(buffer[2 * i] + buffer[2 * i + 1]);
break;
case 4:
baseBand.set(buffer[2 * i], buffer[2 * i + 1]);
break;
default:
baseBand.set(buffer[i]);
}
baseBand = baseBandLowPass.push(baseBand.mul(baseBandOscillator.rotate()));
float frequencyValue = frequencyModulation.demod(baseBand);
float syncPulseValue = syncPulseFilter.avg(frequencyValue);
float syncPulseDelayedValue = syncPulseValueDelay.push(syncPulseValue);
buffer[i] = frequencyValue;
if (!syncPulseTrigger.latch(syncPulseValue)) {
++syncPulseCounter;
} else if (syncPulseCounter < syncPulse5msMinSamples || syncPulseCounter > syncPulse20msMaxSamples || Math.abs(syncPulseDelayedValue - syncPulseFrequencyValue) > syncPulseFrequencyTolerance) {
syncPulseCounter = 0;
} else {
if (syncPulseCounter < syncPulse5msMaxSamples)
syncPulseWidth = SyncPulseWidth.FiveMilliSeconds;
else if (syncPulseCounter < syncPulse9msMaxSamples)
syncPulseWidth = SyncPulseWidth.NineMilliSeconds;
else
syncPulseWidth = SyncPulseWidth.TwentyMilliSeconds;
syncPulseOffset = i - syncPulseFilterDelay;
frequencyOffset = syncPulseDelayedValue - syncPulseFrequencyValue;
syncPulseDetected = true;
syncPulseCounter = 0;
}
}
return syncPulseDetected;
}
}

View file

@@ -1,42 +0,0 @@
/*
Exponential Moving Average
Copyright 2024 Ahmet Inan <xdsopl@gmail.com>
*/
package xdsopl.robot36;
@SuppressWarnings("unused")
public class ExponentialMovingAverage {
private float alpha;
private float prev;
ExponentialMovingAverage() {
this.alpha = 1;
}
public float avg(float input) {
return prev = prev * (1 - alpha) + alpha * input;
}
public void alpha(double alpha) {
this.alpha = (float) alpha;
}
public void alpha(double alpha, int order) {
alpha(Math.pow(alpha, 1.0 / order));
}
public void cutoff(double freq, double rate, int order) {
double x = Math.cos(2 * Math.PI * freq / rate);
alpha(x - 1 + Math.sqrt(x * (x - 4) + 3), order);
}
public void cutoff(double freq, double rate) {
cutoff(freq, rate, 1);
}
public void reset() {
prev = 0;
}
}

View file

@@ -1,22 +0,0 @@
/*
FIR Filter
Copyright 2024 Ahmet Inan <xdsopl@gmail.com>
*/
package xdsopl.robot36;
public final class Filter {
public static double sinc(double x) {
if (x == 0)
return 1;
x *= Math.PI;
return Math.sin(x) / x;
}
public static double lowPass(double cutoff, double rate, int n, int N) {
double f = 2 * cutoff / rate;
double x = n - (N - 1) / 2.0;
return f * sinc(f * x);
}
}

View file

@@ -1,31 +0,0 @@
/*
Frequency Modulation
Copyright 2024 Ahmet Inan <xdsopl@gmail.com>
*/
package xdsopl.robot36;
public class FrequencyModulation {
private float prev;
private final float scale;
private final float Pi, TwoPi;
FrequencyModulation(double bandwidth, double sampleRate) {
this.Pi = (float) Math.PI;
this.TwoPi = 2 * this.Pi;
this.scale = (float) (sampleRate / (bandwidth * Math.PI));
}
private float wrap(float value) {
if (value < -Pi)
return value + TwoPi;
if (value > Pi)
return value - TwoPi;
return value;
}
float demod(Complex input) {
float phase = input.arg();
float delta = wrap(phase - prev);
prev = phase;
return scale * delta;
}
}

View file

@@ -1,46 +0,0 @@
/*
Kaiser window
Copyright 2024 Ahmet Inan <xdsopl@gmail.com>
*/
package xdsopl.robot36;
import java.util.Arrays;
public class Kaiser {
double[] summands;
Kaiser() {
// i0(x) converges for x inside -3*Pi:3*Pi in less than 35 iterations
summands = new double[35];
}
private double square(double value) {
return value * value;
}
/*
i0() implements the zero-th order modified Bessel function of the first kind:
https://en.wikipedia.org/wiki/Bessel_function#Modified_Bessel_functions:_I%CE%B1,_K%CE%B1
$I_\alpha(x) = i^{-\alpha} J_\alpha(ix) = \sum_{m=0}^\infty \frac{1}{m!\, \Gamma(m+\alpha+1)}\left(\frac{x}{2}\right)^{2m+\alpha}$
$I_0(x) = J_0(ix) = \sum_{m=0}^\infty \frac{1}{m!\, \Gamma(m+1)}\left(\frac{x}{2}\right)^{2m} = \sum_{m=0}^\infty \left(\frac{x^m}{2^m\,m!}\right)^{2}$
We obviously can't use the factorial here, so let's get rid of it:
$= 1 + \left(\frac{x}{2 \cdot 1}\right)^2 + \left(\frac{x}{2 \cdot 1}\cdot \frac{x}{2 \cdot 2}\right)^2 + \left(\frac{x}{2 \cdot 1}\cdot \frac{x}{2 \cdot 2}\cdot \frac{x}{2 \cdot 3}\right)^2 + .. = 1 + \sum_{m=1}^\infty \left(\prod_{n=1}^m \frac{x}{2 n}\right)^2$
*/
private double i0(double x) {
summands[0] = 1;
double val = 1;
for (int n = 1; n < summands.length; ++n)
summands[n] = square(val *= x / (2 * n));
Arrays.sort(summands);
double sum = 0;
for (int n = summands.length - 1; n >= 0; --n)
sum += summands[n];
return sum;
}
public double window(double a, int n, int N) {
return i0(Math.PI * a * Math.sqrt(1 - square((2.0 * n) / (N - 1) - 1))) / i0(Math.PI * a);
}
}

View file

@@ -1,402 +0,0 @@
/*
Robot36
Copyright 2024 Ahmet Inan <xdsopl@gmail.com>
*/
package xdsopl.robot36;
import android.Manifest;
import android.content.Context;
import android.content.SharedPreferences;
import android.content.pm.PackageManager;
import android.graphics.Bitmap;
import android.media.AudioFormat;
import android.media.AudioRecord;
import android.media.MediaRecorder;
import android.os.Bundle;
import android.view.Menu;
import android.view.MenuItem;
import android.widget.ImageView;
import androidx.activity.EdgeToEdge;
import androidx.annotation.NonNull;
import androidx.appcompat.app.AlertDialog;
import androidx.appcompat.app.AppCompatActivity;
import androidx.appcompat.app.AppCompatDelegate;
import androidx.core.app.ActivityCompat;
import androidx.core.content.ContextCompat;
import androidx.core.graphics.Insets;
import androidx.core.view.ViewCompat;
import androidx.core.view.WindowInsetsCompat;
import java.util.ArrayList;
import java.util.List;
public class MainActivity extends AppCompatActivity {
private Bitmap scopeBitmap;
private PixelBuffer scopeBuffer;
private ImageView scopeView;
private float[] recordBuffer;
private AudioRecord audioRecord;
private Decoder decoder;
private Menu menu;
private int recordRate;
private int recordChannel;
private int audioSource;
private void setStatus(int id) {
setTitle(id);
}
private void setStatus(String str) {
setTitle(str);
}
private final AudioRecord.OnRecordPositionUpdateListener recordListener = new AudioRecord.OnRecordPositionUpdateListener() {
@Override
public void onMarkerReached(AudioRecord ignore) {
}
@Override
public void onPeriodicNotification(AudioRecord audioRecord) {
audioRecord.read(recordBuffer, 0, recordBuffer.length, AudioRecord.READ_BLOCKING);
if (decoder.process(recordBuffer, recordChannel)) {
scopeBitmap.setPixels(scopeBuffer.pixels, scopeBuffer.width * scopeBuffer.line, scopeBuffer.width, 0, 0, scopeBuffer.width, scopeBuffer.height / 2);
scopeView.invalidate();
setStatus(decoder.lastMode.getName());
}
}
};
private void initAudioRecord() {
boolean rateChanged = true;
if (audioRecord != null) {
rateChanged = audioRecord.getSampleRate() != recordRate;
boolean channelChanged = audioRecord.getChannelCount() != (recordChannel == 0 ? 1 : 2);
boolean sourceChanged = audioRecord.getAudioSource() != audioSource;
if (!rateChanged && !channelChanged && !sourceChanged)
return;
stopListening();
audioRecord.release();
audioRecord = null;
}
int channelConfig = AudioFormat.CHANNEL_IN_MONO;
int channelCount = 1;
if (recordChannel != 0) {
channelCount = 2;
channelConfig = AudioFormat.CHANNEL_IN_STEREO;
}
int sampleSize = 4;
int frameSize = sampleSize * channelCount;
int audioFormat = AudioFormat.ENCODING_PCM_FLOAT;
int readsPerSecond = 50;
int bufferSize = Integer.highestOneBit(recordRate) * frameSize;
int frameCount = recordRate / readsPerSecond;
recordBuffer = new float[frameCount * channelCount];
try {
audioRecord = new AudioRecord(audioSource, recordRate, channelConfig, audioFormat, bufferSize);
if (audioRecord.getState() == AudioRecord.STATE_INITIALIZED) {
audioRecord.setRecordPositionUpdateListener(recordListener);
audioRecord.setPositionNotificationPeriod(frameCount);
if (rateChanged)
decoder = new Decoder(scopeBuffer, recordRate);
startListening();
} else {
setStatus(R.string.audio_init_failed);
}
} catch (IllegalArgumentException e) {
setStatus(R.string.audio_setup_failed);
} catch (SecurityException e) {
setStatus(R.string.audio_permission_denied);
}
}
private void startListening() {
if (audioRecord != null) {
audioRecord.startRecording();
if (audioRecord.getRecordingState() == AudioRecord.RECORDSTATE_RECORDING) {
audioRecord.read(recordBuffer, 0, recordBuffer.length, AudioRecord.READ_BLOCKING);
setStatus(R.string.listening);
} else {
setStatus(R.string.audio_recording_error);
}
}
}
private void stopListening() {
if (audioRecord != null)
audioRecord.stop();
}
private void setRecordRate(int newSampleRate) {
if (recordRate == newSampleRate)
return;
recordRate = newSampleRate;
updateRecordRateMenu();
initAudioRecord();
}
private void setRecordChannel(int newChannelSelect) {
if (recordChannel == newChannelSelect)
return;
recordChannel = newChannelSelect;
updateRecordChannelMenu();
initAudioRecord();
}
private void setAudioSource(int newAudioSource) {
if (audioSource == newAudioSource)
return;
audioSource = newAudioSource;
updateAudioSourceMenu();
initAudioRecord();
}
private void updateRecordRateMenu() {
switch (recordRate) {
case 8000:
menu.findItem(R.id.action_set_record_rate_8000).setChecked(true);
break;
case 16000:
menu.findItem(R.id.action_set_record_rate_16000).setChecked(true);
break;
case 32000:
menu.findItem(R.id.action_set_record_rate_32000).setChecked(true);
break;
case 44100:
menu.findItem(R.id.action_set_record_rate_44100).setChecked(true);
break;
case 48000:
menu.findItem(R.id.action_set_record_rate_48000).setChecked(true);
break;
}
}
private void updateRecordChannelMenu() {
switch (recordChannel) {
case 0:
menu.findItem(R.id.action_set_record_channel_default).setChecked(true);
break;
case 1:
menu.findItem(R.id.action_set_record_channel_first).setChecked(true);
break;
case 2:
menu.findItem(R.id.action_set_record_channel_second).setChecked(true);
break;
case 3:
menu.findItem(R.id.action_set_record_channel_summation).setChecked(true);
break;
case 4:
menu.findItem(R.id.action_set_record_channel_analytic).setChecked(true);
break;
}
}
private void updateAudioSourceMenu() {
switch (audioSource) {
case MediaRecorder.AudioSource.DEFAULT:
menu.findItem(R.id.action_set_source_default).setChecked(true);
break;
case MediaRecorder.AudioSource.MIC:
menu.findItem(R.id.action_set_source_microphone).setChecked(true);
break;
case MediaRecorder.AudioSource.CAMCORDER:
menu.findItem(R.id.action_set_source_camcorder).setChecked(true);
break;
case MediaRecorder.AudioSource.VOICE_RECOGNITION:
menu.findItem(R.id.action_set_source_voice_recognition).setChecked(true);
break;
case MediaRecorder.AudioSource.UNPROCESSED:
menu.findItem(R.id.action_set_source_unprocessed).setChecked(true);
break;
}
}
private final int permissionID = 1;
@Override
public void onRequestPermissionsResult(int requestCode, @NonNull String[] permissions, @NonNull int[] grantResults) {
super.onRequestPermissionsResult(requestCode, permissions, grantResults);
if (requestCode != permissionID)
return;
for (int i = 0; i < permissions.length; ++i)
if (permissions[i].equals(Manifest.permission.RECORD_AUDIO) && grantResults[i] == PackageManager.PERMISSION_GRANTED)
initAudioRecord();
}
@Override
protected void onSaveInstanceState(@NonNull Bundle state) {
state.putInt("nightMode", AppCompatDelegate.getDefaultNightMode());
state.putInt("recordRate", recordRate);
state.putInt("recordChannel", recordChannel);
state.putInt("audioSource", audioSource);
super.onSaveInstanceState(state);
}
private void storeSettings() {
SharedPreferences pref = getPreferences(Context.MODE_PRIVATE);
SharedPreferences.Editor edit = pref.edit();
edit.putInt("nightMode", AppCompatDelegate.getDefaultNightMode());
edit.putInt("recordRate", recordRate);
edit.putInt("recordChannel", recordChannel);
edit.putInt("audioSource", audioSource);
edit.apply();
}
@Override
protected void onCreate(Bundle state) {
final int defaultSampleRate = 8000;
final int defaultChannelSelect = 0;
final int defaultAudioSource = MediaRecorder.AudioSource.DEFAULT;
if (state == null) {
SharedPreferences pref = getPreferences(Context.MODE_PRIVATE);
AppCompatDelegate.setDefaultNightMode(pref.getInt("nightMode", AppCompatDelegate.getDefaultNightMode()));
recordRate = pref.getInt("recordRate", defaultSampleRate);
recordChannel = pref.getInt("recordChannel", defaultChannelSelect);
audioSource = pref.getInt("audioSource", defaultAudioSource);
} else {
AppCompatDelegate.setDefaultNightMode(state.getInt("nightMode", AppCompatDelegate.getDefaultNightMode()));
recordRate = state.getInt("recordRate", defaultSampleRate);
recordChannel = state.getInt("recordChannel", defaultChannelSelect);
audioSource = state.getInt("audioSource", defaultAudioSource);
}
super.onCreate(state);
EdgeToEdge.enable(this);
setContentView(R.layout.activity_main);
ViewCompat.setOnApplyWindowInsetsListener(findViewById(R.id.main), (v, insets) -> {
Insets systemBars = insets.getInsets(WindowInsetsCompat.Type.systemBars());
v.setPadding(systemBars.left, systemBars.top, systemBars.right, systemBars.bottom);
return insets;
});
scopeView = findViewById(R.id.scope);
int scopeWidth = 640;
int scopeHeight = 1280;
scopeBitmap = Bitmap.createBitmap(scopeWidth, scopeHeight, Bitmap.Config.ARGB_8888);
scopeView.setImageBitmap(scopeBitmap);
scopeBuffer = new PixelBuffer(scopeWidth, 2 * scopeHeight);
List<String> permissions = new ArrayList<>();
if (ContextCompat.checkSelfPermission(this, Manifest.permission.RECORD_AUDIO) != PackageManager.PERMISSION_GRANTED) {
permissions.add(Manifest.permission.RECORD_AUDIO);
setStatus(R.string.audio_permission_denied);
} else {
initAudioRecord();
}
if (!permissions.isEmpty())
ActivityCompat.requestPermissions(this, permissions.toArray(new String[0]), permissionID);
}
@Override
public boolean onCreateOptionsMenu(Menu menu) {
getMenuInflater().inflate(R.menu.menu_main, menu);
this.menu = menu;
updateRecordRateMenu();
updateRecordChannelMenu();
updateAudioSourceMenu();
return true;
}
@Override
public boolean onOptionsItemSelected(MenuItem item) {
int id = item.getItemId();
if (id == R.id.action_set_record_rate_8000) {
setRecordRate(8000);
return true;
}
if (id == R.id.action_set_record_rate_16000) {
setRecordRate(16000);
return true;
}
if (id == R.id.action_set_record_rate_32000) {
setRecordRate(32000);
return true;
}
if (id == R.id.action_set_record_rate_44100) {
setRecordRate(44100);
return true;
}
if (id == R.id.action_set_record_rate_48000) {
setRecordRate(48000);
return true;
}
if (id == R.id.action_set_record_channel_default) {
setRecordChannel(0);
return true;
}
if (id == R.id.action_set_record_channel_first) {
setRecordChannel(1);
return true;
}
if (id == R.id.action_set_record_channel_second) {
setRecordChannel(2);
return true;
}
if (id == R.id.action_set_record_channel_summation) {
setRecordChannel(3);
return true;
}
if (id == R.id.action_set_record_channel_analytic) {
setRecordChannel(4);
return true;
}
if (id == R.id.action_set_source_default) {
setAudioSource(MediaRecorder.AudioSource.DEFAULT);
return true;
}
if (id == R.id.action_set_source_microphone) {
setAudioSource(MediaRecorder.AudioSource.MIC);
return true;
}
if (id == R.id.action_set_source_camcorder) {
setAudioSource(MediaRecorder.AudioSource.CAMCORDER);
return true;
}
if (id == R.id.action_set_source_voice_recognition) {
setAudioSource(MediaRecorder.AudioSource.VOICE_RECOGNITION);
return true;
}
if (id == R.id.action_set_source_unprocessed) {
setAudioSource(MediaRecorder.AudioSource.UNPROCESSED);
return true;
}
if (id == R.id.action_enable_night_mode) {
AppCompatDelegate.setDefaultNightMode(AppCompatDelegate.MODE_NIGHT_YES);
return true;
}
if (id == R.id.action_disable_night_mode) {
AppCompatDelegate.setDefaultNightMode(AppCompatDelegate.MODE_NIGHT_NO);
return true;
}
if (id == R.id.action_privacy_policy) {
showTextPage(getString(R.string.privacy_policy), getString(R.string.privacy_policy_text));
return true;
}
if (id == R.id.action_about) {
showTextPage(getString(R.string.about), getString(R.string.about_text, BuildConfig.VERSION_NAME));
return true;
}
return super.onOptionsItemSelected(item);
}
private void showTextPage(String title, String message) {
AlertDialog.Builder builder = new AlertDialog.Builder(this, R.style.Theme_AlertDialog);
builder.setNeutralButton(R.string.close, null);
builder.setTitle(title);
builder.setMessage(message);
builder.show();
}
@Override
protected void onResume() {
startListening();
super.onResume();
}
@Override
protected void onPause() {
stopListening();
storeSettings();
super.onPause();
}
}

Wyświetl plik

@ -1,15 +0,0 @@
/*
Mode interface
Copyright 2024 Ahmet Inan <xdsopl@gmail.com>
*/
package xdsopl.robot36;
public interface Mode {
String getName();
int getScanLineSamples();
boolean decodeScanLine(PixelBuffer pixelBuffer, float[] scratchBuffer, float[] scanLineBuffer, int syncPulseIndex, int scanLineSamples, float frequencyOffset);
}

Wyświetl plik

@ -1,85 +0,0 @@
/*
PD modes
Copyright 2024 Ahmet Inan <xdsopl@gmail.com>
*/
package xdsopl.robot36;
public class PaulDon implements Mode {
private final ExponentialMovingAverage lowPassFilter;
private final int horizontalPixels;
private final int scanLineSamples;
private final int channelSamples;
private final int beginSamples;
private final int yEvenBeginSamples;
private final int vAvgBeginSamples;
private final int uAvgBeginSamples;
private final int yOddBeginSamples;
private final int endSamples;
private final String name;
@SuppressWarnings("UnnecessaryLocalVariable")
PaulDon(String name, int horizontalPixels, double channelSeconds, int sampleRate) {
this.name = "PD " + name;
this.horizontalPixels = horizontalPixels;
double syncPulseSeconds = 0.02;
double syncPorchSeconds = 0.00208;
double scanLineSeconds = syncPulseSeconds + syncPorchSeconds + 4 * (channelSeconds);
scanLineSamples = (int) Math.round(scanLineSeconds * sampleRate);
channelSamples = (int) Math.round(channelSeconds * sampleRate);
double yEvenBeginSeconds = syncPorchSeconds;
yEvenBeginSamples = (int) Math.round(yEvenBeginSeconds * sampleRate);
beginSamples = yEvenBeginSamples;
double vAvgBeginSeconds = yEvenBeginSeconds + channelSeconds;
vAvgBeginSamples = (int) Math.round(vAvgBeginSeconds * sampleRate);
double uAvgBeginSeconds = vAvgBeginSeconds + channelSeconds;
uAvgBeginSamples = (int) Math.round(uAvgBeginSeconds * sampleRate);
double yOddBeginSeconds = uAvgBeginSeconds + channelSeconds;
yOddBeginSamples = (int) Math.round(yOddBeginSeconds * sampleRate);
double yOddEndSeconds = yOddBeginSeconds + channelSeconds;
endSamples = (int) Math.round(yOddEndSeconds * sampleRate);
lowPassFilter = new ExponentialMovingAverage();
}
private float freqToLevel(float frequency, float offset) {
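// The frequency passed in is expected to be normalized so that the nominal
// 1500..2300 Hz range spans -1..+1 around 1900 Hz; removing the measured carrier
// offset and rescaling then maps black..white onto the 0..1 pixel level.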
return 0.5f * (frequency - offset + 1.f);
}
@Override
public String getName() {
return name;
}
@Override
public int getScanLineSamples() {
return scanLineSamples;
}
@Override
public boolean decodeScanLine(PixelBuffer pixelBuffer, float[] scratchBuffer, float[] scanLineBuffer, int syncPulseIndex, int scanLineSamples, float frequencyOffset) {
if (syncPulseIndex + beginSamples < 0 || syncPulseIndex + endSamples > scanLineBuffer.length)
return false;
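// The single-pole low-pass is run forward and then backward over the line, so the
// smoothing is effectively zero-phase and pixels are not shifted by the filter delay.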
lowPassFilter.cutoff(horizontalPixels, 2 * channelSamples, 2);
lowPassFilter.reset();
for (int i = beginSamples; i < endSamples; ++i)
scratchBuffer[i] = lowPassFilter.avg(scanLineBuffer[syncPulseIndex + i]);
lowPassFilter.reset();
for (int i = endSamples - 1; i >= beginSamples; --i)
scratchBuffer[i] = freqToLevel(lowPassFilter.avg(scratchBuffer[i]), frequencyOffset);
for (int i = 0; i < horizontalPixels; ++i) {
int position = (i * channelSamples) / horizontalPixels;
int yEvenPos = position + yEvenBeginSamples;
int vAvgPos = position + vAvgBeginSamples;
int uAvgPos = position + uAvgBeginSamples;
int yOddPos = position + yOddBeginSamples;
pixelBuffer.pixels[i] =
ColorConverter.YUV2RGB(scratchBuffer[yEvenPos], scratchBuffer[uAvgPos], scratchBuffer[vAvgPos]);
pixelBuffer.pixels[i + horizontalPixels] =
ColorConverter.YUV2RGB(scratchBuffer[yOddPos], scratchBuffer[uAvgPos], scratchBuffer[vAvgPos]);
}
pixelBuffer.width = horizontalPixels;
pixelBuffer.height = 2;
return true;
}
}
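A rough usage sketch (hypothetical; the code that actually registers the PD modes is not shown in this hunk, and the 170.240 ms channel time is taken from the published PD-90 specification rather than from this diff):
// Example: a PD-90 decoder at a 48 kHz sample rate.
Mode pd90 = new PaulDon("90", 320, 0.170240, 48000);
// scan line = 0.020 + 0.00208 + 4 * 0.170240 = 0.70304 s,
// so pd90.getScanLineSamples() == Math.round(0.70304 * 48000) == 33746,
// and each successful decodeScanLine() call fills two rows of 320 pixels
// that share the averaged U and V channels.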

Wyświetl plik

@ -1,20 +0,0 @@
/*
Numerically controlled oscillator
Copyright 2024 Ahmet Inan <xdsopl@gmail.com>
*/
package xdsopl.robot36;
public class Phasor {
private final Complex value;
private final Complex delta;
Phasor(double freq, double rate) {
value = new Complex(1, 0);
double omega = 2 * Math.PI * freq / rate;
delta = new Complex((float) Math.cos(omega), (float) Math.sin(omega));
}
Complex rotate() {
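// Advance the oscillator by one sample: multiply by e^(j*omega) and renormalize to
// unit magnitude so rounding errors cannot accumulate; this relies on the project's
// Complex operations working in place, which is not shown in this hunk.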
return value.div(value.mul(delta).abs());
}
}

Wyświetl plik

@ -1,21 +0,0 @@
/*
Pixel buffer
Copyright 2024 Ahmet Inan <xdsopl@gmail.com>
*/
package xdsopl.robot36;
public class PixelBuffer {
public int[] pixels;
public int width;
public int height;
public int line;
PixelBuffer(int width, int height) {
this.width = width;
this.height = height;
this.line = 0;
this.pixels = new int[width * height];
}
}

Wyświetl plik

@ -1,73 +0,0 @@
/*
Decoder for RGB modes
Copyright 2024 Ahmet Inan <xdsopl@gmail.com>
*/
package xdsopl.robot36;
public class RGBDecoder implements Mode {
private final ExponentialMovingAverage lowPassFilter;
private final int horizontalPixels;
private final int scanLineSamples;
private final int beginSamples;
private final int redBeginSamples;
private final int redSamples;
private final int greenBeginSamples;
private final int greenSamples;
private final int blueBeginSamples;
private final int blueSamples;
private final int endSamples;
private final String name;
RGBDecoder(String name, int horizontalPixels, double scanLineSeconds, double beginSeconds, double redBeginSeconds, double redEndSeconds, double greenBeginSeconds, double greenEndSeconds, double blueBeginSeconds, double blueEndSeconds, double endSeconds, int sampleRate) {
this.name = name;
this.horizontalPixels = horizontalPixels;
scanLineSamples = (int) Math.round(scanLineSeconds * sampleRate);
beginSamples = (int) Math.round(beginSeconds * sampleRate);
redBeginSamples = (int) Math.round(redBeginSeconds * sampleRate) - beginSamples;
redSamples = (int) Math.round((redEndSeconds - redBeginSeconds) * sampleRate);
greenBeginSamples = (int) Math.round(greenBeginSeconds * sampleRate) - beginSamples;
greenSamples = (int) Math.round((greenEndSeconds - greenBeginSeconds) * sampleRate);
blueBeginSamples = (int) Math.round(blueBeginSeconds * sampleRate) - beginSamples;
blueSamples = (int) Math.round((blueEndSeconds - blueBeginSeconds) * sampleRate);
endSamples = (int) Math.round(endSeconds * sampleRate);
lowPassFilter = new ExponentialMovingAverage();
}
private float freqToLevel(float frequency, float offset) {
return 0.5f * (frequency - offset + 1.f);
}
@Override
public String getName() {
return name;
}
@Override
public int getScanLineSamples() {
return scanLineSamples;
}
@Override
public boolean decodeScanLine(PixelBuffer pixelBuffer, float[] scratchBuffer, float[] scanLineBuffer, int syncPulseIndex, int scanLineSamples, float frequencyOffset) {
if (syncPulseIndex + beginSamples < 0 || syncPulseIndex + endSamples > scanLineBuffer.length)
return false;
lowPassFilter.cutoff(horizontalPixels, 2 * greenSamples, 2);
lowPassFilter.reset();
for (int i = 0; i < endSamples - beginSamples; ++i)
scratchBuffer[i] = lowPassFilter.avg(scanLineBuffer[syncPulseIndex + beginSamples + i]);
lowPassFilter.reset();
for (int i = endSamples - beginSamples - 1; i >= 0; --i)
scratchBuffer[i] = freqToLevel(lowPassFilter.avg(scratchBuffer[i]), frequencyOffset);
for (int i = 0; i < horizontalPixels; ++i) {
int redPos = redBeginSamples + (i * redSamples) / horizontalPixels;
int greenPos = greenBeginSamples + (i * greenSamples) / horizontalPixels;
int bluePos = blueBeginSamples + (i * blueSamples) / horizontalPixels;
pixelBuffer.pixels[i] = ColorConverter.RGB(scratchBuffer[redPos], scratchBuffer[greenPos], scratchBuffer[bluePos]);
}
pixelBuffer.width = horizontalPixels;
pixelBuffer.height = 1;
return true;
}
}

Wyświetl plik

@ -1,51 +0,0 @@
/*
RGB modes
Copyright 2024 Ahmet Inan <xdsopl@gmail.com>
*/
package xdsopl.robot36;
@SuppressWarnings("UnnecessaryLocalVariable")
public final class RGBModes {
public static RGBDecoder Martin(String name, double channelSeconds, int sampleRate) {
double syncPulseSeconds = 0.004862;
double separatorSeconds = 0.000572;
double scanLineSeconds = syncPulseSeconds + separatorSeconds + 3 * (channelSeconds + separatorSeconds);
double greenBeginSeconds = separatorSeconds;
double greenEndSeconds = greenBeginSeconds + channelSeconds;
double blueBeginSeconds = greenEndSeconds + separatorSeconds;
double blueEndSeconds = blueBeginSeconds + channelSeconds;
double redBeginSeconds = blueEndSeconds + separatorSeconds;
double redEndSeconds = redBeginSeconds + channelSeconds;
return new RGBDecoder("Martin " + name, 320, scanLineSeconds, greenBeginSeconds, redBeginSeconds, redEndSeconds, greenBeginSeconds, greenEndSeconds, blueBeginSeconds, blueEndSeconds, redEndSeconds, sampleRate);
}
public static RGBDecoder Scottie(String name, double channelSeconds, int sampleRate) {
double syncPulseSeconds = 0.009;
double separatorSeconds = 0.0015;
double scanLineSeconds = syncPulseSeconds + 3 * (channelSeconds + separatorSeconds);
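// Scottie transmits green and blue before the horizontal sync pulse and red after it,
// so the green and blue offsets below are negative relative to the detected sync.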
double blueEndSeconds = -syncPulseSeconds;
double blueBeginSeconds = blueEndSeconds - channelSeconds;
double greenEndSeconds = blueBeginSeconds - separatorSeconds;
double greenBeginSeconds = greenEndSeconds - channelSeconds;
double redBeginSeconds = separatorSeconds;
double redEndSeconds = redBeginSeconds + channelSeconds;
return new RGBDecoder("Scottie " + name, 320, scanLineSeconds, greenBeginSeconds, redBeginSeconds, redEndSeconds, greenBeginSeconds, greenEndSeconds, blueBeginSeconds, blueEndSeconds, redEndSeconds, sampleRate);
}
public static RGBDecoder Wraase_SC2_180(int sampleRate) {
double syncPulseSeconds = 0.0055225;
double syncPorchSeconds = 0.0005;
double channelSeconds = 0.235;
double scanLineSeconds = syncPulseSeconds + syncPorchSeconds + 3 * channelSeconds;
double redBeginSeconds = syncPorchSeconds;
double redEndSeconds = redBeginSeconds + channelSeconds;
double greenBeginSeconds = redEndSeconds;
double greenEndSeconds = greenBeginSeconds + channelSeconds;
double blueBeginSeconds = greenEndSeconds;
double blueEndSeconds = blueBeginSeconds + channelSeconds;
return new RGBDecoder("Wraase SC2-180", 320, scanLineSeconds, redBeginSeconds, redBeginSeconds, redEndSeconds, greenBeginSeconds, greenEndSeconds, blueBeginSeconds, blueEndSeconds, blueEndSeconds, sampleRate);
}
}

Wyświetl plik

@ -1,58 +0,0 @@
/*
Raw decoder
Copyright 2024 Ahmet Inan <xdsopl@gmail.com>
*/
package xdsopl.robot36;
public class RawDecoder implements Mode {
private final ExponentialMovingAverage lowPassFilter;
private final int smallPictureMaxSamples;
private final int mediumPictureMaxSamples;
RawDecoder(int sampleRate) {
smallPictureMaxSamples = (int) Math.round(0.125 * sampleRate);
mediumPictureMaxSamples = (int) Math.round(0.175 * sampleRate);
lowPassFilter = new ExponentialMovingAverage();
}
private float freqToLevel(float frequency, float offset) {
return 0.5f * (frequency - offset + 1.f);
}
@Override
public String getName() {
return "Raw";
}
@Override
public int getScanLineSamples() {
return -1;
}
@Override
public boolean decodeScanLine(PixelBuffer pixelBuffer, float[] scratchBuffer, float[] scanLineBuffer, int syncPulseIndex, int scanLineSamples, float frequencyOffset) {
if (syncPulseIndex < 0 || syncPulseIndex + scanLineSamples > scanLineBuffer.length)
return false;
int horizontalPixels = pixelBuffer.width;
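// The raw decoder renders the demodulated level directly as grayscale; scan lines
// shorter than 175 ms get half the buffer width and lines shorter than 125 ms a
// quarter, to limit horizontal stretching.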
if (scanLineSamples < smallPictureMaxSamples)
horizontalPixels /= 2;
if (scanLineSamples < mediumPictureMaxSamples)
horizontalPixels /= 2;
lowPassFilter.cutoff(horizontalPixels, 2 * scanLineSamples, 2);
lowPassFilter.reset();
for (int i = 0; i < scanLineSamples; ++i)
scratchBuffer[i] = lowPassFilter.avg(scanLineBuffer[syncPulseIndex + i]);
lowPassFilter.reset();
for (int i = scanLineSamples - 1; i >= 0; --i)
scratchBuffer[i] = freqToLevel(lowPassFilter.avg(scratchBuffer[i]), frequencyOffset);
for (int i = 0; i < horizontalPixels; ++i) {
int position = (i * scanLineSamples) / horizontalPixels;
pixelBuffer.pixels[i] = ColorConverter.GRAY(scratchBuffer[position]);
}
pixelBuffer.width = horizontalPixels;
pixelBuffer.height = 1;
return true;
}
}

Wyświetl plik

@ -1,102 +0,0 @@
/*
Robot 36 Color
Copyright 2024 Ahmet Inan <xdsopl@gmail.com>
*/
package xdsopl.robot36;
public class Robot_36_Color implements Mode {
private final ExponentialMovingAverage lowPassFilter;
private final int horizontalPixels;
private final int scanLineSamples;
private final int luminanceSamples;
private final int separatorSamples;
private final int chrominanceSamples;
private final int beginSamples;
private final int luminanceBeginSamples;
private final int separatorBeginSamples;
private final int chrominanceBeginSamples;
private final int endSamples;
private boolean lastEven;
@SuppressWarnings("UnnecessaryLocalVariable")
Robot_36_Color(int sampleRate) {
horizontalPixels = 320;
double syncPulseSeconds = 0.009;
double syncPorchSeconds = 0.003;
double luminanceSeconds = 0.088;
double separatorSeconds = 0.0045;
double porchSeconds = 0.0015;
double chrominanceSeconds = 0.044;
double scanLineSeconds = syncPulseSeconds + syncPorchSeconds + luminanceSeconds + separatorSeconds + porchSeconds + chrominanceSeconds;
scanLineSamples = (int) Math.round(scanLineSeconds * sampleRate);
luminanceSamples = (int) Math.round(luminanceSeconds * sampleRate);
separatorSamples = (int) Math.round(separatorSeconds * sampleRate);
chrominanceSamples = (int) Math.round(chrominanceSeconds * sampleRate);
double luminanceBeginSeconds = syncPorchSeconds;
luminanceBeginSamples = (int) Math.round(luminanceBeginSeconds * sampleRate);
beginSamples = luminanceBeginSamples;
double separatorBeginSeconds = luminanceBeginSeconds + luminanceSeconds;
separatorBeginSamples = (int) Math.round(separatorBeginSeconds * sampleRate);
double separatorEndSeconds = separatorBeginSeconds + separatorSeconds;
double chrominanceBeginSeconds = separatorEndSeconds + porchSeconds;
chrominanceBeginSamples = (int) Math.round(chrominanceBeginSeconds * sampleRate);
double chrominanceEndSeconds = chrominanceBeginSeconds + chrominanceSeconds;
endSamples = (int) Math.round(chrominanceEndSeconds * sampleRate);
lowPassFilter = new ExponentialMovingAverage();
}
private float freqToLevel(float frequency, float offset) {
return 0.5f * (frequency - offset + 1.f);
}
@Override
public String getName() {
return "Robot 36 Color";
}
@Override
public int getScanLineSamples() {
return scanLineSamples;
}
@Override
public boolean decodeScanLine(PixelBuffer pixelBuffer, float[] scratchBuffer, float[] scanLineBuffer, int syncPulseIndex, int scanLineSamples, float frequencyOffset) {
if (syncPulseIndex + beginSamples < 0 || syncPulseIndex + endSamples > scanLineBuffer.length)
return false;
float separator = 0;
for (int i = 0; i < separatorSamples; ++i)
separator += scanLineBuffer[syncPulseIndex + separatorBeginSamples + i];
separator /= separatorSamples;
separator -= frequencyOffset;
boolean even = separator < 0;
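// The separator tone distinguishes even lines (about 1500 Hz, i.e. -1 after
// normalization) from odd lines (about 2300 Hz, +1); values outside the 0.1-wide
// windows around -1 and +1 are treated as unreliable and the decoder just alternates.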
if (separator < -1.1 || separator > -0.9 && separator < 0.9 || separator > 1.1)
even = !lastEven;
lastEven = even;
lowPassFilter.cutoff(horizontalPixels, 2 * luminanceSamples, 2);
lowPassFilter.reset();
for (int i = beginSamples; i < endSamples; ++i)
scratchBuffer[i] = lowPassFilter.avg(scanLineBuffer[syncPulseIndex + i]);
lowPassFilter.reset();
for (int i = endSamples - 1; i >= beginSamples; --i)
scratchBuffer[i] = freqToLevel(lowPassFilter.avg(scratchBuffer[i]), frequencyOffset);
for (int i = 0; i < horizontalPixels; ++i) {
int luminancePos = luminanceBeginSamples + (i * luminanceSamples) / horizontalPixels;
int chrominancePos = chrominanceBeginSamples + (i * chrominanceSamples) / horizontalPixels;
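// Even lines carry Y plus one chroma channel and are parked in the pixel buffer as a
// packed intermediate value; when the matching odd line arrives with Y plus the other
// chroma channel, the byte masks below recombine them so both rows are converted to
// RGB with the shared chroma pair.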
if (even) {
pixelBuffer.pixels[i] = ColorConverter.RGB(scratchBuffer[luminancePos], 0, scratchBuffer[chrominancePos]);
} else {
int evenYUV = pixelBuffer.pixels[i];
int oddYUV = ColorConverter.RGB(scratchBuffer[luminancePos], scratchBuffer[chrominancePos], 0);
pixelBuffer.pixels[i] =
ColorConverter.YUV2RGB((evenYUV & 0x00ff00ff) | (oddYUV & 0x0000ff00));
pixelBuffer.pixels[i + horizontalPixels] =
ColorConverter.YUV2RGB((oddYUV & 0x00ffff00) | (evenYUV & 0x000000ff));
}
}
pixelBuffer.width = horizontalPixels;
pixelBuffer.height = 2;
return !even;
}
}

Wyświetl plik

@ -1,83 +0,0 @@
/*
Robot 72 Color
Copyright 2024 Ahmet Inan <xdsopl@gmail.com>
*/
package xdsopl.robot36;
public class Robot_72_Color implements Mode {
private final ExponentialMovingAverage lowPassFilter;
private final int horizontalPixels;
private final int scanLineSamples;
private final int luminanceSamples;
private final int chrominanceSamples;
private final int beginSamples;
private final int yBeginSamples;
private final int vBeginSamples;
private final int uBeginSamples;
private final int endSamples;
@SuppressWarnings("UnnecessaryLocalVariable")
Robot_72_Color(int sampleRate) {
horizontalPixels = 320;
double syncPulseSeconds = 0.009;
double syncPorchSeconds = 0.003;
double luminanceSeconds = 0.138;
double separatorSeconds = 0.0045;
double porchSeconds = 0.0015;
double chrominanceSeconds = 0.069;
double scanLineSeconds = syncPulseSeconds + syncPorchSeconds + luminanceSeconds + 2 * (separatorSeconds + porchSeconds + chrominanceSeconds);
scanLineSamples = (int) Math.round(scanLineSeconds * sampleRate);
luminanceSamples = (int) Math.round(luminanceSeconds * sampleRate);
chrominanceSamples = (int) Math.round(chrominanceSeconds * sampleRate);
double yBeginSeconds = syncPorchSeconds;
yBeginSamples = (int) Math.round(yBeginSeconds * sampleRate);
beginSamples = yBeginSamples;
double yEndSeconds = yBeginSeconds + luminanceSeconds;
double vBeginSeconds = yEndSeconds + separatorSeconds + porchSeconds;
vBeginSamples = (int) Math.round(vBeginSeconds * sampleRate);
double vEndSeconds = vBeginSeconds + chrominanceSeconds;
double uBeginSeconds = vEndSeconds + separatorSeconds + porchSeconds;
uBeginSamples = (int) Math.round(uBeginSeconds * sampleRate);
double uEndSeconds = uBeginSeconds + chrominanceSeconds;
endSamples = (int) Math.round(uEndSeconds * sampleRate);
lowPassFilter = new ExponentialMovingAverage();
}
private float freqToLevel(float frequency, float offset) {
return 0.5f * (frequency - offset + 1.f);
}
@Override
public String getName() {
return "Robot 72 Color";
}
@Override
public int getScanLineSamples() {
return scanLineSamples;
}
@Override
public boolean decodeScanLine(PixelBuffer pixelBuffer, float[] scratchBuffer, float[] scanLineBuffer, int syncPulseIndex, int scanLineSamples, float frequencyOffset) {
if (syncPulseIndex + beginSamples < 0 || syncPulseIndex + endSamples > scanLineBuffer.length)
return false;
lowPassFilter.cutoff(horizontalPixels, 2 * luminanceSamples, 2);
lowPassFilter.reset();
for (int i = beginSamples; i < endSamples; ++i)
scratchBuffer[i] = lowPassFilter.avg(scanLineBuffer[syncPulseIndex + i]);
lowPassFilter.reset();
for (int i = endSamples - 1; i >= beginSamples; --i)
scratchBuffer[i] = freqToLevel(lowPassFilter.avg(scratchBuffer[i]), frequencyOffset);
for (int i = 0; i < horizontalPixels; ++i) {
int yPos = yBeginSamples + (i * luminanceSamples) / horizontalPixels;
int uPos = uBeginSamples + (i * chrominanceSamples) / horizontalPixels;
int vPos = vBeginSamples + (i * chrominanceSamples) / horizontalPixels;
pixelBuffer.pixels[i] = ColorConverter.YUV2RGB(scratchBuffer[yPos], scratchBuffer[uPos], scratchBuffer[vPos]);
}
pixelBuffer.width = horizontalPixels;
pixelBuffer.height = 1;
return true;
}
}

Wyświetl plik

@ -1,28 +0,0 @@
/*
Schmitt Trigger
Copyright 2024 Ahmet Inan <xdsopl@gmail.com>
*/
package xdsopl.robot36;
public class SchmittTrigger {
private final float low, high;
private boolean previous;
SchmittTrigger(float low, float high) {
this.low = low;
this.high = high;
}
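// Classic hysteresis: the output only changes once the input crosses the opposite
// threshold, which keeps a noisy detector from toggling rapidly around a single level.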
boolean latch(float input) {
if (previous) {
if (input < low)
previous = false;
} else {
if (input > high)
previous = true;
}
return previous;
}
}

Wyświetl plik

@ -1,17 +0,0 @@
/*
Simple Moving Average
Copyright 2024 Ahmet Inan <xdsopl@gmail.com>
*/
package xdsopl.robot36;
public class SimpleMovingAverage extends SimpleMovingSum {
public SimpleMovingAverage(int length) {
super(length);
}
public float avg(float input) {
return sum(input) / length;
}
}

Wyświetl plik

@ -1,37 +0,0 @@
/*
Simple Moving Sum
Copyright 2024 Ahmet Inan <xdsopl@gmail.com>
*/
package xdsopl.robot36;
public class SimpleMovingSum {
private final float[] tree;
private int leaf;
public final int length;
public SimpleMovingSum(int length) {
this.length = length;
this.tree = new float[2 * length];
this.leaf = length;
}
public void add(float input) {
tree[leaf] = input;
for (int child = leaf, parent = leaf / 2; parent > 0; child = parent, parent /= 2)
tree[parent] = tree[child] + tree[child ^ 1];
if (++leaf >= tree.length)
leaf = length;
}
public float sum() {
return tree[1];
}
public float sum(float input) {
add(input);
return sum();
}
}
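The tree array is a small segment tree: every new sample overwrites the oldest leaf and the change is propagated up to the root, so tree[1] always holds the exact sum of the most recent window with no drift from a running add/subtract. A self-contained sanity check of that behaviour (a hypothetical test harness, not part of the app, assumed to be compiled in the same package):
package xdsopl.robot36;
public class MovingSumCheck {
	public static void main(String[] args) {
		int length = 5;
		SimpleMovingSum movingSum = new SimpleMovingSum(length);
		float[] window = new float[length];
		for (int i = 0; i < 1000; ++i) {
			float input = (float) Math.sin(0.1 * i);
			window[i % length] = input;
			// naive reference: sum the circular window directly
			float expected = 0;
			for (float value : window)
				expected += value;
			float actual = movingSum.sum(input);
			if (Math.abs(expected - actual) > 1e-4f)
				throw new AssertionError(expected + " != " + actual);
		}
		System.out.println("segment-tree sum matches the naive window sum");
	}
}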

Wyświetl plik

@ -1,170 +0,0 @@
<?xml version="1.0" encoding="utf-8"?>
<vector xmlns:android="http://schemas.android.com/apk/res/android"
android:width="108dp"
android:height="108dp"
android:viewportWidth="108"
android:viewportHeight="108">
<path
android:fillColor="#3DDC84"
android:pathData="M0,0h108v108h-108z" />
<path
android:fillColor="#00000000"
android:pathData="M9,0L9,108"
android:strokeWidth="0.8"
android:strokeColor="#33FFFFFF" />
<path
android:fillColor="#00000000"
android:pathData="M19,0L19,108"
android:strokeWidth="0.8"
android:strokeColor="#33FFFFFF" />
<path
android:fillColor="#00000000"
android:pathData="M29,0L29,108"
android:strokeWidth="0.8"
android:strokeColor="#33FFFFFF" />
<path
android:fillColor="#00000000"
android:pathData="M39,0L39,108"
android:strokeWidth="0.8"
android:strokeColor="#33FFFFFF" />
<path
android:fillColor="#00000000"
android:pathData="M49,0L49,108"
android:strokeWidth="0.8"
android:strokeColor="#33FFFFFF" />
<path
android:fillColor="#00000000"
android:pathData="M59,0L59,108"
android:strokeWidth="0.8"
android:strokeColor="#33FFFFFF" />
<path
android:fillColor="#00000000"
android:pathData="M69,0L69,108"
android:strokeWidth="0.8"
android:strokeColor="#33FFFFFF" />
<path
android:fillColor="#00000000"
android:pathData="M79,0L79,108"
android:strokeWidth="0.8"
android:strokeColor="#33FFFFFF" />
<path
android:fillColor="#00000000"
android:pathData="M89,0L89,108"
android:strokeWidth="0.8"
android:strokeColor="#33FFFFFF" />
<path
android:fillColor="#00000000"
android:pathData="M99,0L99,108"
android:strokeWidth="0.8"
android:strokeColor="#33FFFFFF" />
<path
android:fillColor="#00000000"
android:pathData="M0,9L108,9"
android:strokeWidth="0.8"
android:strokeColor="#33FFFFFF" />
<path
android:fillColor="#00000000"
android:pathData="M0,19L108,19"
android:strokeWidth="0.8"
android:strokeColor="#33FFFFFF" />
<path
android:fillColor="#00000000"
android:pathData="M0,29L108,29"
android:strokeWidth="0.8"
android:strokeColor="#33FFFFFF" />
<path
android:fillColor="#00000000"
android:pathData="M0,39L108,39"
android:strokeWidth="0.8"
android:strokeColor="#33FFFFFF" />
<path
android:fillColor="#00000000"
android:pathData="M0,49L108,49"
android:strokeWidth="0.8"
android:strokeColor="#33FFFFFF" />
<path
android:fillColor="#00000000"
android:pathData="M0,59L108,59"
android:strokeWidth="0.8"
android:strokeColor="#33FFFFFF" />
<path
android:fillColor="#00000000"
android:pathData="M0,69L108,69"
android:strokeWidth="0.8"
android:strokeColor="#33FFFFFF" />
<path
android:fillColor="#00000000"
android:pathData="M0,79L108,79"
android:strokeWidth="0.8"
android:strokeColor="#33FFFFFF" />
<path
android:fillColor="#00000000"
android:pathData="M0,89L108,89"
android:strokeWidth="0.8"
android:strokeColor="#33FFFFFF" />
<path
android:fillColor="#00000000"
android:pathData="M0,99L108,99"
android:strokeWidth="0.8"
android:strokeColor="#33FFFFFF" />
<path
android:fillColor="#00000000"
android:pathData="M19,29L89,29"
android:strokeWidth="0.8"
android:strokeColor="#33FFFFFF" />
<path
android:fillColor="#00000000"
android:pathData="M19,39L89,39"
android:strokeWidth="0.8"
android:strokeColor="#33FFFFFF" />
<path
android:fillColor="#00000000"
android:pathData="M19,49L89,49"
android:strokeWidth="0.8"
android:strokeColor="#33FFFFFF" />
<path
android:fillColor="#00000000"
android:pathData="M19,59L89,59"
android:strokeWidth="0.8"
android:strokeColor="#33FFFFFF" />
<path
android:fillColor="#00000000"
android:pathData="M19,69L89,69"
android:strokeWidth="0.8"
android:strokeColor="#33FFFFFF" />
<path
android:fillColor="#00000000"
android:pathData="M19,79L89,79"
android:strokeWidth="0.8"
android:strokeColor="#33FFFFFF" />
<path
android:fillColor="#00000000"
android:pathData="M29,19L29,89"
android:strokeWidth="0.8"
android:strokeColor="#33FFFFFF" />
<path
android:fillColor="#00000000"
android:pathData="M39,19L39,89"
android:strokeWidth="0.8"
android:strokeColor="#33FFFFFF" />
<path
android:fillColor="#00000000"
android:pathData="M49,19L49,89"
android:strokeWidth="0.8"
android:strokeColor="#33FFFFFF" />
<path
android:fillColor="#00000000"
android:pathData="M59,19L59,89"
android:strokeWidth="0.8"
android:strokeColor="#33FFFFFF" />
<path
android:fillColor="#00000000"
android:pathData="M69,19L69,89"
android:strokeWidth="0.8"
android:strokeColor="#33FFFFFF" />
<path
android:fillColor="#00000000"
android:pathData="M79,19L79,89"
android:strokeWidth="0.8"
android:strokeColor="#33FFFFFF" />
</vector>

Wyświetl plik

@ -1,30 +0,0 @@
<vector xmlns:android="http://schemas.android.com/apk/res/android"
xmlns:aapt="http://schemas.android.com/aapt"
android:width="108dp"
android:height="108dp"
android:viewportWidth="108"
android:viewportHeight="108">
<path android:pathData="M31,63.928c0,0 6.4,-11 12.1,-13.1c7.2,-2.6 26,-1.4 26,-1.4l38.1,38.1L107,108.928l-32,-1L31,63.928z">
<aapt:attr name="android:fillColor">
<gradient
android:endX="85.84757"
android:endY="92.4963"
android:startX="42.9492"
android:startY="49.59793"
android:type="linear">
<item
android:color="#44000000"
android:offset="0.0" />
<item
android:color="#00000000"
android:offset="1.0" />
</gradient>
</aapt:attr>
</path>
<path
android:fillColor="#FFFFFF"
android:fillType="nonZero"
android:pathData="M65.3,45.828l3.8,-6.6c0.2,-0.4 0.1,-0.9 -0.3,-1.1c-0.4,-0.2 -0.9,-0.1 -1.1,0.3l-3.9,6.7c-6.3,-2.8 -13.4,-2.8 -19.7,0l-3.9,-6.7c-0.2,-0.4 -0.7,-0.5 -1.1,-0.3C38.8,38.328 38.7,38.828 38.9,39.228l3.8,6.6C36.2,49.428 31.7,56.028 31,63.928h46C76.3,56.028 71.8,49.428 65.3,45.828zM43.4,57.328c-0.8,0 -1.5,-0.5 -1.8,-1.2c-0.3,-0.7 -0.1,-1.5 0.4,-2.1c0.5,-0.5 1.4,-0.7 2.1,-0.4c0.7,0.3 1.2,1 1.2,1.8C45.3,56.528 44.5,57.328 43.4,57.328L43.4,57.328zM64.6,57.328c-0.8,0 -1.5,-0.5 -1.8,-1.2s-0.1,-1.5 0.4,-2.1c0.5,-0.5 1.4,-0.7 2.1,-0.4c0.7,0.3 1.2,1 1.2,1.8C66.5,56.528 65.6,57.328 64.6,57.328L64.6,57.328z"
android:strokeWidth="1"
android:strokeColor="#00000000" />
</vector>

Wyświetl plik

@ -1,22 +0,0 @@
<?xml version="1.0" encoding="utf-8"?>
<androidx.constraintlayout.widget.ConstraintLayout xmlns:android="http://schemas.android.com/apk/res/android"
xmlns:app="http://schemas.android.com/apk/res-auto"
xmlns:tools="http://schemas.android.com/tools"
android:id="@+id/main"
android:keepScreenOn="true"
android:layout_width="match_parent"
android:layout_height="match_parent"
tools:context=".MainActivity">
<ImageView
android:id="@+id/scope"
android:layout_width="0dp"
android:layout_height="0dp"
android:contentDescription="@string/scope_description"
app:layout_constraintBottom_toBottomOf="parent"
app:layout_constraintEnd_toEndOf="parent"
app:layout_constraintStart_toStartOf="parent"
app:layout_constraintTop_toTopOf="parent" />
</androidx.constraintlayout.widget.ConstraintLayout>

Wyświetl plik

@ -1,88 +0,0 @@
<?xml version="1.0" encoding="utf-8"?>
<menu xmlns:android="http://schemas.android.com/apk/res/android"
xmlns:tools="http://schemas.android.com/tools"
tools:context="xdsopl.robot36.MainActivity">
<item android:title="@string/audio_settings">
<menu>
<item android:title="@string/sample_rate">
<menu>
<group android:checkableBehavior="single">
<item
android:id="@+id/action_set_record_rate_8000"
android:title="@string/rate_8000" />
<item
android:id="@+id/action_set_record_rate_16000"
android:title="@string/rate_16000" />
<item
android:id="@+id/action_set_record_rate_32000"
android:title="@string/rate_32000" />
<item
android:id="@+id/action_set_record_rate_44100"
android:title="@string/rate_44100" />
<item
android:id="@+id/action_set_record_rate_48000"
android:title="@string/rate_48000" />
</group>
</menu>
</item>
<item android:title="@string/channel_select">
<menu>
<group android:checkableBehavior="single">
<item
android:id="@+id/action_set_record_channel_default"
android:title="@string/channel_default" />
<item
android:id="@+id/action_set_record_channel_first"
android:title="@string/channel_first" />
<item
android:id="@+id/action_set_record_channel_second"
android:title="@string/channel_second" />
<item
android:id="@+id/action_set_record_channel_summation"
android:title="@string/channel_summation" />
<item
android:id="@+id/action_set_record_channel_analytic"
android:title="@string/channel_analytic" />
</group>
</menu>
</item>
<item android:title="@string/audio_source">
<menu>
<group android:checkableBehavior="single">
<item
android:id="@+id/action_set_source_default"
android:title="@string/source_default" />
<item
android:id="@+id/action_set_source_microphone"
android:title="@string/source_microphone" />
<item
android:id="@+id/action_set_source_camcorder"
android:title="@string/source_camcorder" />
<item
android:id="@+id/action_set_source_voice_recognition"
android:title="@string/source_voice_recognition" />
<item
android:id="@+id/action_set_source_unprocessed"
android:title="@string/source_unprocessed" />
</group>
</menu>
</item>
</menu>
</item>
<item android:title="@string/night_mode">
<menu>
<item
android:id="@+id/action_enable_night_mode"
android:title="@string/enable" />
<item
android:id="@+id/action_disable_night_mode"
android:title="@string/disable" />
</menu>
</item>
<item
android:id="@+id/action_privacy_policy"
android:title="@string/privacy_policy" />
<item
android:id="@+id/action_about"
android:title="@string/about" />
</menu>

Wyświetl plik

@ -1,5 +0,0 @@
<?xml version="1.0" encoding="utf-8"?>
<adaptive-icon xmlns:android="http://schemas.android.com/apk/res/android">
<background android:drawable="@color/ic_launcher_background"/>
<foreground android:drawable="@mipmap/ic_launcher_foreground"/>
</adaptive-icon>

Wyświetl plik

@ -1,5 +0,0 @@
<?xml version="1.0" encoding="utf-8"?>
<adaptive-icon xmlns:android="http://schemas.android.com/apk/res/android">
<background android:drawable="@color/ic_launcher_background"/>
<foreground android:drawable="@mipmap/ic_launcher_foreground"/>
</adaptive-icon>

15 binary files are not shown (all deleted; sizes before: 3.7, 2.4, 3.7, 2.4, 2.2, 2.4, 5.6, 3.7, 5.6, 7.1, 6.2, 7.1, 9.9, 8.0 and 9.9 KiB).

Wyświetl plik

@ -1,9 +0,0 @@
<?xml version="1.0" encoding="utf-8"?>
<resources>
<color name="black">#FF000000</color>
<color name="dark">#FF010101</color>
<color name="gray">#FFB4B4B4</color>
<color name="thin">#FF2F2F2F</color>
<color name="white">#FFFFFFFF</color>
<color name="tint">#FFFFFFFF</color>
</resources>

Wyświetl plik

@ -1,28 +0,0 @@
<resources>
<!-- Base application theme. -->
<style name="Base.Theme.Robot36" parent="Theme.Material3.DayNight">
<!-- Primary brand color. -->
<item name="colorPrimary">@color/black</item>
<item name="colorPrimaryVariant">@color/gray</item>
<item name="colorOnPrimary">@color/white</item>
<!-- Secondary brand color. -->
<item name="colorSecondary">@color/gray</item>
<item name="colorSecondaryVariant">@color/black</item>
<item name="colorOnSecondary">@color/white</item>
<!-- Status bar color. -->
<item name="android:statusBarColor">@color/black</item>
<!-- Customize your theme here. -->
<item name="android:navigationBarColor">@color/dark</item>
</style>
<style name="Theme.AlertDialog" parent="Theme.Material3.DayNight.Dialog.Alert">
<!-- Primary brand color. -->
<item name="colorPrimary">@color/white</item>
<item name="colorPrimaryVariant">@color/gray</item>
<item name="colorOnPrimary">@color/white</item>
<!-- Secondary brand color. -->
<item name="colorSecondary">@color/gray</item>
<item name="colorSecondaryVariant">@color/black</item>
<item name="colorOnSecondary">@color/white</item>
</style>
</resources>

Wyświetl plik

@ -1,9 +0,0 @@
<?xml version="1.0" encoding="utf-8"?>
<resources>
<color name="black">#FF000000</color>
<color name="dark">#FF010101</color>
<color name="gray">#FF696969</color>
<color name="thin">#FFE0E0E0</color>
<color name="white">#FFFFFFFF</color>
<color name="tint">#FF000000</color>
</resources>

Wyświetl plik

@ -1,4 +0,0 @@
<?xml version="1.0" encoding="utf-8"?>
<resources>
<color name="ic_launcher_background">#FFFFFF</color>
</resources>

Wyświetl plik

@ -1,45 +0,0 @@
<resources>
<string name="app_name">Robot36</string>
<string name="listening">Listening</string>
<string name="audio_settings">Audio Settings</string>
<string name="sample_rate">Sample Rate</string>
<string name="rate_8000">8 kHz</string>
<string name="rate_16000">16 kHz</string>
<string name="rate_32000">32 kHz</string>
<string name="rate_44100">44.1 kHz</string>
<string name="rate_48000">48 kHz</string>
<string name="channel_select">Channel Select</string>
<string name="channel_default">Default</string>
<string name="channel_first">First</string>
<string name="channel_second">Second</string>
<string name="channel_summation">Summation</string>
<string name="channel_analytic">Analytic</string>
<string name="audio_source">Audio Source</string>
<string name="source_default">Default</string>
<string name="source_microphone">Microphone</string>
<string name="source_camcorder">Camcorder</string>
<string name="source_voice_recognition">Voice Recognition</string>
<string name="source_unprocessed">Unprocessed</string>
<string name="audio_init_failed">Audio init failed</string>
<string name="audio_setup_failed">Audio setup failed</string>
<string name="audio_permission_denied">Audio permission denied</string>
<string name="audio_recording_error">Audio recording error</string>
<string name="scope_description">Visualization of audio signal</string>
<string name="night_mode">Night Mode</string>
<string name="enable">Enable</string>
<string name="disable">Disable</string>
<string name="close">Close</string>
<string name="privacy_policy">Privacy Policy</string>
<string name="privacy_policy_text">To be able to decode SSTV encoded images the app needs access to the microphone.
Having access to the microphone is considered to be a sensitive permission and you have the right to know what the app does with that access:
The data recorded from the microphone is only used to feed the SSTV decoder, the VU meter and the spectrum analyzer for visualization of its frequency content.
The app uses a very small temporary buffer in volatile memory and constantly overwrites this buffer with new data from the microphone.
The resulting images from the SSTV decoder are the only data stored in persistent storage on your Android device.</string>
<string name="about">About Robot36</string>
<string name="about_text">Robot36 %1$s\nCopyright 2024 Ahmet Inan
\n\nPlease read DISCLAIMER at the bottom of this page.
\n\nRobot36 decodes SSTV encoded audio signals to images.
\n\nImplementation:\nhttps://github.com/xdsopl/robot36\nBSD Zero Clause License
\n\nMode specifications:\nhttp://www.barberdsp.com/downloads/Dayton%%20Paper.pdf\nby JL Barber - 2000
\n\nDISCLAIMER:\nTHE SOFTWARE IS PROVIDED \"AS IS\" AND THE AUTHOR DISCLAIMS ALL WARRANTIES WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.</string>
</resources>

Wyświetl plik

@ -1,30 +0,0 @@
<resources>
<!-- Base application theme. -->
<style name="Base.Theme.Robot36" parent="Theme.Material3.DayNight">
<!-- Primary brand color. -->
<item name="colorPrimary">@color/white</item>
<item name="colorPrimaryVariant">@color/gray</item>
<item name="colorOnPrimary">@color/black</item>
<!-- Secondary brand color. -->
<item name="colorSecondary">@color/gray</item>
<item name="colorSecondaryVariant">@color/white</item>
<item name="colorOnSecondary">@color/black</item>
<!-- Status bar color. -->
<item name="android:statusBarColor">@color/black</item>
<!-- Customize your theme here. -->
<item name="android:navigationBarColor">@color/dark</item>
</style>
<style name="Theme.AlertDialog" parent="Theme.Material3.DayNight.Dialog.Alert">
<!-- Primary brand color. -->
<item name="colorPrimary">@color/black</item>
<item name="colorPrimaryVariant">@color/gray</item>
<item name="colorOnPrimary">@color/black</item>
<!-- Secondary brand color. -->
<item name="colorSecondary">@color/gray</item>
<item name="colorSecondaryVariant">@color/white</item>
<item name="colorOnSecondary">@color/black</item>
</style>
<style name="Theme.Robot36" parent="Base.Theme.Robot36" />
</resources>

Wyświetl plik

@ -1,13 +0,0 @@
<?xml version="1.0" encoding="utf-8"?><!--
Sample backup rules file; uncomment and customize as necessary.
See https://developer.android.com/guide/topics/data/autobackup
for details.
Note: This file is ignored for devices older than API 31
See https://developer.android.com/about/versions/12/backup-restore
-->
<full-backup-content>
<!--
<include domain="sharedpref" path="."/>
<exclude domain="sharedpref" path="device.xml"/>
-->
</full-backup-content>

Wyświetl plik

@ -1,19 +0,0 @@
<?xml version="1.0" encoding="utf-8"?><!--
Sample data extraction rules file; uncomment and customize as necessary.
See https://developer.android.com/about/versions/12/backup-restore#xml-changes
for details.
-->
<data-extraction-rules>
<cloud-backup>
<!-- TODO: Use <include> and <exclude> to control what is backed up.
<include .../>
<exclude .../>
-->
</cloud-backup>
<!--
<device-transfer>
<include .../>
<exclude .../>
</device-transfer>
-->
</data-extraction-rules>

Wyświetl plik

@ -1,17 +0,0 @@
package xdsopl.robot36;
import org.junit.Test;
import static org.junit.Assert.*;
/**
* Example local unit test, which will execute on the development machine (host).
*
* @see <a href="http://d.android.com/tools/testing">Testing documentation</a>
*/
public class ExampleUnitTest {
@Test
public void addition_isCorrect() {
assertEquals(4, 2 + 2);
}
}

38
buffer.c 100644
Wyświetl plik

@ -0,0 +1,38 @@
/*
robot36 - encode and decode images using SSTV in Robot 36 mode
Written in 2011 by <Ahmet Inan> <xdsopl@googlemail.com>
To the extent possible under law, the author(s) have dedicated all copyright and related and neighboring rights to this software to the public domain worldwide. This software is distributed without any warranty.
You should have received a copy of the CC0 Public Domain Dedication along with this software. If not, see <http://creativecommons.org/publicdomain/zero/1.0/>.
*/
#include "buffer.h"
#include <stdlib.h>
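/*
Each input sample is written at two positions half the allocation apart, so a contiguous
window holding the most recent history (newest first) always exists; do_buffer() returns
a pointer to that window, letting the filters index past samples without any wrap-around
checks in their inner loops.
*/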
float *do_buffer(struct buffer *d, float input)
{
d->s[d->last0] = input;
d->s[d->last1] = input;
int last = d->last0 < d->last1 ? d->last0 : d->last1;
d->last0 = (d->last0 - 1) < 0 ? d->len - 1 : d->last0 - 1;
d->last1 = (d->last1 - 1) < 0 ? d->len - 1 : d->last1 - 1;
return d->s + last;
}
struct buffer *alloc_buffer(int samples)
{
int len = 2 * samples;
struct buffer *d = malloc(sizeof(struct buffer));
d->s = malloc(sizeof(float) * len);
d->last0 = 0;
d->last1 = samples;
d->len = len;
for (int i = 0; i < len; i++)
d->s[i] = 0.0;
return d;
}
void free_buffer(struct buffer *buffer)
{
free(buffer->s);
free(buffer);
}

21
buffer.h 100644
Wyświetl plik

@ -0,0 +1,21 @@
/*
robot36 - encode and decode images using SSTV in Robot 36 mode
Written in 2011 by <Ahmet Inan> <xdsopl@googlemail.com>
To the extent possible under law, the author(s) have dedicated all copyright and related and neighboring rights to this software to the public domain worldwide. This software is distributed without any warranty.
You should have received a copy of the CC0 Public Domain Dedication along with this software. If not, see <http://creativecommons.org/publicdomain/zero/1.0/>.
*/
#ifndef BUFFER_H
#define BUFFER_H
struct buffer {
float *s;
int last0;
int last1;
int len;
};
float *do_buffer(struct buffer *d, float input);
struct buffer *alloc_buffer(int samples);
void free_buffer(struct buffer *buffer);
#endif

Wyświetl plik

@ -1,4 +0,0 @@
// Top-level build file where you can add configuration options common to all sub-projects/modules.
plugins {
alias(libs.plugins.androidApplication) apply false
}

74
ddc.c 100644
Wyświetl plik

@ -0,0 +1,74 @@
/*
robot36 - encode and decode images using SSTV in Robot 36 mode
Written in 2011 by <Ahmet Inan> <xdsopl@googlemail.com>
To the extent possible under law, the author(s) have dedicated all copyright and related and neighboring rights to this software to the public domain worldwide. This software is distributed without any warranty.
You should have received a copy of the CC0 Public Domain Dedication along with this software. If not, see <http://creativecommons.org/publicdomain/zero/1.0/>.
*/
#include <complex.h>
#include <math.h>
#include <stdlib.h>
#include "window.h"
#include "ddc.h"
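/*
Polyphase digital down converter: a complex band-pass FIR (a windowed sinc shifted to the
carrier) is split into L sub-filters, so the stream is resampled by the rational factor
L/M while being mixed down to baseband. do_ddc() emits L output samples for every M input
samples and renormalizes the local oscillator after each step to keep its magnitude at one.
*/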
void do_ddc(struct ddc *ddc, float *input, complex float *output)
{
int N = ddc->N;
int M = ddc->M;
int L = ddc->L;
int offset = 0;
for (int k = 0; k < L; k++) {
float *x = input + (((L * M - 1) - offset) / L);
complex float *b = ddc->b + (N * (offset % L));
offset += M;
complex float sum = 0.0;
for (int i = 0; i < N; i++)
sum += b[i] * x[i];
output[k] = ddc->osc * sum;
ddc->osc *= ddc->d;
ddc->osc /= cabsf(ddc->osc);
}
}
struct ddc *alloc_ddc(int L, int M, float carrier, float bw, float rate, int taps, float (*window)(float, float, float), float a)
{
float lstep = 1.0 / ((float)L * rate);
float ostep = (float)M * lstep;
struct ddc *ddc = malloc(sizeof(struct ddc));
ddc->N = (taps + L - 1) / L;
ddc->b = malloc(sizeof(complex float) * ddc->N * L);
ddc->osc = I;
ddc->d = cexpf(-I * 2.0 * M_PI * carrier * ostep);
ddc->L = L;
ddc->M = M;
complex float *b = malloc(sizeof(complex float) * taps);
float sum = 0.0;
for (int i = 0; i < taps; i++) {
float N = taps;
float n = i;
float x = n - (N - 1.0) / 2.0;
float l = 2.0 * bw * lstep;
float h = l * sinc(l * x);
float w = window(n, N, a);
float t = w * h;
sum += t;
complex float o = cexpf(I * 2.0 * M_PI * carrier * lstep * n);
b[i] = t * o * (float)L;
}
for (int i = 0; i < taps; i++)
b[i] /= sum;
for (int i = 0; i < ddc->N * L; i++)
ddc->b[i] = 0.0;
for (int i = 0; i < L; i++)
for (int j = i, k = 0; j < taps; j += L, k++)
ddc->b[i * ddc->N + k] = b[j];
free(b);
return ddc;
}
void free_ddc(struct ddc *ddc)
{
free(ddc->b);
free(ddc);
}

26
ddc.h 100644
Wyświetl plik

@ -0,0 +1,26 @@
/*
robot36 - encode and decode images using SSTV in Robot 36 mode
Written in 2011 by <Ahmet Inan> <xdsopl@googlemail.com>
To the extent possible under law, the author(s) have dedicated all copyright and related and neighboring rights to this software to the public domain worldwide. This software is distributed without any warranty.
You should have received a copy of the CC0 Public Domain Dedication along with this software. If not, see <http://creativecommons.org/publicdomain/zero/1.0/>.
*/
#ifndef DDC_H
#define DDC_H
#include "window.h"
struct ddc {
complex float *b;
complex float osc;
complex float d;
int N;
int L;
int M;
};
void do_ddc(struct ddc *ddc, float *input, complex float *output);
struct ddc *alloc_ddc(int L, int M, float carrier, float bw, float rate, int taps, float (*window)(float, float, float), float a);
void free_ddc(struct ddc *ddc);
#endif

442
debug.c 100644
Wyświetl plik

@ -0,0 +1,442 @@
/*
robot36 - encode and decode images using SSTV in Robot 36 mode
Written in 2011 by <Ahmet Inan> <xdsopl@googlemail.com>
To the extent possible under law, the author(s) have dedicated all copyright and related and neighboring rights to this software to the public domain worldwide. This software is distributed without any warranty.
You should have received a copy of the CC0 Public Domain Dedication along with this software. If not, see <http://creativecommons.org/publicdomain/zero/1.0/>.
*/
#include <stdio.h>
#include <stdint.h>
#include <stdlib.h>
#include <string.h>
#include <math.h>
#include <complex.h>
#include <time.h>
#include "mmap_file.h"
#include "pcm.h"
#include "ddc.h"
#include "buffer.h"
#include "yuv.h"
#include "utils.h"
#include "img.h"
int main(int argc, char **argv)
{
struct pcm *pcm;
char *pcm_name = "default";
char *img_name = 0;
if (argc != 1)
pcm_name = argv[1];
if (argc == 3)
img_name = argv[2];
if (!open_pcm_read(&pcm, pcm_name)) {
fprintf(stderr, "couldnt open %s\n", pcm_name);
return 1;
}
info_pcm(pcm);
float rate = rate_pcm(pcm);
if (rate * 0.088 < 320.0) {
fprintf(stderr, "%.0fhz samplerate too low\n", rate);
return 1;
}
int channels = channels_pcm(pcm);
if (channels > 1)
fprintf(stderr, "using first of %d channels\n", channels);
float complex cnt_last = -I;
float complex dat_last = -I;
float dat_avg = 1900.0;
int begin_vis_ss = 0;
int begin_vis_lo = 0;
int begin_vis_hi = 0;
int begin_hor_sync = 0;
int begin_cal_break = 0;
int begin_cal_leader = 0;
int latch_sync = 0;
int cal_ticks = 0;
int got_cal_break = 0;
int vis_mode = 0;
int dat_mode = 0;
int vis_ticks = 0;
int vis_bit = -1;
int vis_byte = 0;
int sep_evn = 0;
int sep_odd = 0;
int y = 0;
int odd = 0;
int first_hor_sync = 0;
#if DN && UP
// 320 / 0.088 = 160 / 0.044 = 40000 / 11 = 3636.(36)~ pixels per second for Y, U and V
int64_t factor_L = 40000;
int64_t factor_M = 11 * rate;
int64_t factor_D = gcd(factor_L, factor_M);
factor_L /= factor_D;
factor_M /= factor_D;
#endif
#if DN && !UP
int64_t factor_L = 1;
// factor_M * step should be smaller than pixel length
int64_t factor_M = rate * 0.088 / 320.0 / 2;
#endif
#if !DN
int64_t factor_L = 1;
int64_t factor_M = 1;
#endif
// we want an odd number of taps; 4 and 2 ms window lengths give the best results
int cnt_taps = 1 | (int)(rate * factor_L * 0.004);
int dat_taps = 1 | (int)(rate * factor_L * 0.002);
fprintf(stderr, "using %d and %d tap filter\n", cnt_taps, dat_taps);
float drate = rate * (float)factor_L / (float)factor_M;
float dstep = 1.0 / drate;
fprintf(stderr, "using factor of %ld/%ld, working at %.2fhz\n", factor_L, factor_M, drate);
float complex *cnt_q = malloc(sizeof(float complex) * factor_L);
float complex *dat_q = malloc(sizeof(float complex) * factor_L);
// same factor to keep life simple and have accurate horizontal sync
struct ddc *cnt_ddc = alloc_ddc(factor_L, factor_M, 1200.0, 200.0, rate, cnt_taps, kaiser, 2.0);
struct ddc *dat_ddc = alloc_ddc(factor_L, factor_M, 1900.0, 800.0, rate, dat_taps, kaiser, 2.0);
// delay input by phase shift of other filter to synchronize outputs
int cnt_delay = (dat_taps - 1) / (2 * factor_L);
int dat_delay = (cnt_taps - 1) / (2 * factor_L);
// minimize delay
if (cnt_delay > dat_delay) {
cnt_delay -= dat_delay;
dat_delay = 0;
} else {
dat_delay -= cnt_delay;
cnt_delay = 0;
}
short *pcm_buff = (short *)malloc(sizeof(short) * channels * factor_M);
// 0.1 second history + enough room for delay and taps
int buff_len = 0.1 * rate + factor_M
+ fmaxf(cnt_delay, dat_delay)
+ fmaxf(cnt_taps, dat_taps) / factor_L;
struct buffer *buffer = alloc_buffer(buff_len);
const double vis_sec = 0.03l;
const double hor_sync_sec = 0.009l;
const double cal_break_sec = 0.01l;
const double cal_leader_sec = 0.3l;
const double seperator_sec = 0.0045l;
const double sync_porch_sec = 0.003l;
const double porch_sec = 0.0015l;
const double y_sec = 0.088l;
const double uv_sec = 0.044l;
const double hor_sec = 0.15l;
int vis_len = vis_sec * drate;
int hor_sync_len = hor_sync_sec * drate;
int cal_break_len = cal_break_sec * drate;
int cal_leader_len = cal_leader_sec * drate;
int seperator_len = seperator_sec * drate;
int sync_porch_len = sync_porch_sec * drate;
int porch_len = porch_sec * drate;
int y_len = y_sec * drate;
int uv_len = uv_sec * drate;
int hor_len = hor_sec * drate;
int missing_sync = 0;
int seperator_correction = 0;
const int width = hor_len + sync_porch_len + 24;
const int height = 256;
struct img *img = 0;
int hor_ticks = 0;
int y_width = y_len;
int uv_width = uv_len;
uint8_t *y_pixel = malloc(y_width * 2);
memset(y_pixel, 0, y_width * 2);
uint8_t *uv_pixel = malloc(uv_width * 2);
memset(uv_pixel, 0, uv_width * 2);
for (int out = factor_L;; out++, hor_ticks++, cal_ticks++, vis_ticks++) {
if (out >= factor_L) {
out = 0;
if (!read_pcm(pcm, pcm_buff, factor_M))
break;
float *buff = 0;
for (int j = 0; j < factor_M; j++)
buff = do_buffer(buffer, (float)pcm_buff[j * channels] / 32767.0);
do_ddc(cnt_ddc, buff + cnt_delay, cnt_q);
do_ddc(dat_ddc, buff + dat_delay, dat_q);
}
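// FM demodulation: the phase difference between consecutive filter outputs gives the
// instantaneous frequency, f = f_center + delta_phi / (2 * pi * dstep), clamped to the
// 1200 Hz sync band and the 1500..2300 Hz data band respectively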
float cnt_freq = fclampf(1200.0 + cargf(cnt_q[out] * conjf(cnt_last)) / (2.0 * M_PI * dstep), 1100.0, 1300.0);
float dat_freq = fclampf(1900.0 + cargf(dat_q[out] * conjf(dat_last)) / (2.0 * M_PI * dstep), 1500.0, 2300.0);
if (cabsf(cnt_q[out]) > cabsf(dat_q[out]))
dat_freq = 1500.0;
else
cnt_freq = 1300.0;
cnt_last = cnt_q[out];
dat_last = dat_q[out];
const float dat_a = 1.0 / (drate * 0.00238 + 1.0);
dat_avg = dat_a * dat_freq + (1.0 - dat_a) * dat_avg;
begin_vis_ss = fabsf(cnt_freq - 1200.0) < 50.0 ? begin_vis_ss + 1 : 0;
begin_vis_lo = fabsf(cnt_freq - 1300.0) < 50.0 ? begin_vis_lo + 1 : 0;
begin_vis_hi = fabsf(cnt_freq - 1100.0) < 50.0 ? begin_vis_hi + 1 : 0;
begin_hor_sync = fabsf(cnt_freq - 1200.0) < 50.0 ? begin_hor_sync + 1 : 0;
begin_cal_break = fabsf(cnt_freq - 1200.0) < 50.0 ? begin_cal_break + 1 : 0;
begin_cal_leader = fabsf(dat_avg - 1900.0) < 50.0 ? begin_cal_leader + 1 : 0;
// TODO: remove floats
const float vis_tolerance = 0.9;
const float sync_tolerance = 0.7;
const float break_tolerance = 0.7;
const float leader_tolerance = 0.3;
int vis_ss = begin_vis_ss >= (int)(vis_tolerance * vis_len) ? 1 : 0;
int vis_lo = begin_vis_lo >= (int)(vis_tolerance * vis_len) ? 1 : 0;
int vis_hi = begin_vis_hi >= (int)(vis_tolerance * vis_len) ? 1 : 0;
int cal_break = begin_cal_break >= (int)(break_tolerance * cal_break_len) ? 1 : 0;
int cal_leader = begin_cal_leader >= (int)(leader_tolerance * cal_leader_len) ? 1 : 0;
// we want a pulse at the falling edge
latch_sync = begin_hor_sync > (int)(sync_tolerance * hor_sync_len) ? 1 : latch_sync;
int hor_sync = (cnt_freq > 1299.0) && latch_sync;
latch_sync = hor_sync ? 0 : latch_sync;
// we only want a pulse for the bits
begin_vis_ss = vis_ss ? 0 : begin_vis_ss;
begin_vis_lo = vis_lo ? 0 : begin_vis_lo;
begin_vis_hi = vis_hi ? 0 : begin_vis_hi;
static int ticks = 0;
if (ticks++ < 5.0 * drate)
printf("%f %f %f %d %d %d %d %d %d %d %d\n", (float)ticks * dstep, dat_freq, cnt_freq,
50*hor_sync+950, 50*cal_leader+850, 50*cal_break+750,
50*vis_ss+650, 50*vis_lo+550, 50*vis_hi+450,
50*sep_evn+350, 50*sep_odd+250);
// only want to see a pulse
sep_evn = 0;
sep_odd = 0;
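// VIS calibration header: 1900 Hz leader, 1200 Hz break, then 1900 Hz leader again;
// only after that sequence do we start clocking in the VIS bits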
if (cal_leader && !cal_break && got_cal_break &&
(cal_ticks >= (cal_leader_len + cal_break_len) * leader_tolerance) &&
(cal_ticks <= (cal_leader_len + cal_break_len) * (2.0 - leader_tolerance))) {
vis_mode = 1;
vis_bit = -1;
dat_mode = 0;
first_hor_sync = 1;
got_cal_break = 0;
fprintf(stderr, "%s got calibration header\n", string_time("%F %T"));
}
if (cal_break && !cal_leader &&
cal_ticks >= (int)(cal_break_len * break_tolerance) &&
cal_ticks <= (int)(cal_break_len * (2.0 - break_tolerance)))
got_cal_break = 1;
if (cal_leader && !cal_break) {
cal_ticks = 0;
got_cal_break = 0;
}
if (vis_mode) {
if (vis_bit < 0) {
if (vis_ss) {
vis_ticks = 0;
vis_byte = 0;
vis_bit = 0;
dat_mode = 0;
}
} else if (vis_ticks <= (int)(10.0 * vis_len * (2.0 - vis_tolerance))) {
if (vis_ss) {
dat_mode = 1;
vis_mode = 0;
vis_bit = -1;
fprintf(stderr, "%s got VIS = 0x%x (complete)\n", string_time("%F %T"), vis_byte);
}
if (vis_bit < 8) {
if (vis_lo) vis_bit++;
if (vis_hi) vis_byte |= 1 << vis_bit++;
}
} else {
if (vis_bit >= 8) {
dat_mode = 1;
vis_mode = 0;
vis_bit = -1;
fprintf(stderr, "%s got VIS = 0x%x (missing stop bit)\n", string_time("%F %T"), vis_byte);
}
}
if (!vis_mode && vis_byte != 0x88) {
fprintf(stderr, "unsupported mode 0x%x, ignoring\n", vis_byte);
dat_mode = 0;
}
continue;
}
if (!dat_mode)
continue;
// we wait until first sync
if (first_hor_sync && !hor_sync)
continue;
// data comes after first sync
if (first_hor_sync && hor_sync) {
first_hor_sync = 0;
hor_ticks = 0;
y = 0;
odd = 0;
if (img) {
close_img(img);
fprintf(stderr, "%d missing sync's and %d corrections from seperator\n", missing_sync, seperator_correction);
missing_sync = 0;
seperator_correction = 0;
}
if (img_name) {
if (!open_img_write(&img, img_name, width, height))
return 1;
} else {
if (!open_img_write(&img, string_time("%F_%T.ppm"), width, height))
return 1;
}
continue;
}
if (hor_ticks < width) {
uint8_t *p = img->pixel + 3 * y * width + 3 * hor_ticks;
float dat_v = (dat_freq - 1500.0) / 800.0;
float cnt_v = (1300.0 - cnt_freq) / 200.0;
p[0] = fclampf(0.0, 255.0, 255.0 * dat_v);
p[1] = fclampf(0.0, 255.0, 255.0 * (dat_v + cnt_v));
p[2] = fclampf(0.0, 255.0, 255.0 * dat_v);
}
// if horizontal sync is too early, we reset to the beginning instead of ignoring
if (hor_sync && hor_ticks < (hor_len - sync_porch_len)) {
for (int i = 0; i < 4; i++) {
uint8_t *p = img->pixel + 3 * y * width + 3 * (width - i - 10);
p[0] = 255;
p[1] = 0;
p[2] = 255;
}
hor_ticks = 0;
}
// we always sync if sync pulse is where it should be.
if (hor_sync && (hor_ticks >= (hor_len - sync_porch_len)) &&
(hor_ticks < (hor_len + sync_porch_len))) {
uint8_t *p = img->pixel + 3 * y * width + 3 * hor_ticks;
p[0] = 255;
p[1] = 0;
p[2] = 0;
y++;
if (y == height) {
close_img(img);
fprintf(stderr, "%d missing sync's and %d corrections from seperator\n", missing_sync, seperator_correction);
img = 0;
dat_mode = 0;
missing_sync = 0;
seperator_correction = 0;
continue;
}
odd ^= 1;
hor_ticks = 0;
}
// if horizontal sync is missing, we extrapolate from last sync
if (hor_ticks >= (hor_len + sync_porch_len)) {
for (int i = 0; i < 4; i++) {
uint8_t *p = img->pixel + 3 * y * width + 3 * (width - i - 5);
p[0] = 255;
p[1] = 255;
p[2] = 0;
}
y++;
if (y == height) {
close_img(img);
fprintf(stderr, "%d missing sync's and %d corrections from seperator\n", missing_sync, seperator_correction);
img = 0;
dat_mode = 0;
missing_sync = 0;
seperator_correction = 0;
continue;
}
odd ^= 1;
missing_sync++;
hor_ticks -= hor_len;
// we are not at the pixels yet, so no correction here
}
static int sep_count = 0;
if ((hor_ticks > (sync_porch_len + y_len)) &&
(hor_ticks < (sync_porch_len + y_len + seperator_len)))
sep_count += dat_freq < 1900.0 ? 1 : -1;
// we try to correct from the odd / even separator
if (sep_count && (hor_ticks > (sync_porch_len + y_len + seperator_len))) {
if (sep_count > 0) {
sep_evn = 1;
if (odd) {
odd = 0;
seperator_correction++;
for (int i = 0; i < 4; i++) {
uint8_t *p = img->pixel + 3 * y * width + 3 * (width - i - 15);
p[0] = 255;
p[1] = 0;
p[2] = 0;
}
}
} else {
sep_odd = 1;
if (!odd) {
odd = 1;
seperator_correction++;
for (int i = 0; i < 4; i++) {
uint8_t *p = img->pixel + 3 * y * width + 3 * (width - i - 15);
p[0] = 0;
p[1] = 255;
p[2] = 0;
}
}
}
sep_count = 0;
}
if ((hor_ticks == sync_porch_len) ||
(hor_ticks == (sync_porch_len + y_len)) ||
(hor_ticks == (sync_porch_len + y_len + seperator_len)) ||
(hor_ticks == (sync_porch_len + y_len + seperator_len + porch_len)) ||
(hor_ticks == (sync_porch_len + y_len + seperator_len + porch_len + uv_len))) {
uint8_t *p = img->pixel + 3 * y * width + 3 * hor_ticks;
p[0] = 255;
p[1] = 0;
p[2] = 0;
}
}
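// Editor's note (added, not part of the original source): the loop above also
// paints small debug markers into each decoded line: a magenta block near the
// right edge when an early sync forced a line restart, a yellow block when a
// missing sync had to be extrapolated, a red or green block when the odd/even
// separator overruled the current line parity, and single red pixels at the
// accepted sync position and at the porch/separator segment boundaries.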
if (img) {
close_img(img);
fprintf(stderr, "%d missing sync's and %d corrections from seperator\n", missing_sync, seperator_correction);
missing_sync = 0;
seperator_correction = 0;
}
close_pcm(pcm);
free_ddc(cnt_ddc);
free_ddc(dat_ddc);
free_buffer(buffer);
free(pcm_buff);
return 0;
}

483
decode.c 100644
Wyświetl plik

@ -0,0 +1,483 @@
/*
robot36 - encode and decode images using SSTV in Robot 36 mode
Written in 2011 by <Ahmet Inan> <xdsopl@googlemail.com>
To the extent possible under law, the author(s) have dedicated all copyright and related and neighboring rights to this software to the public domain worldwide. This software is distributed without any warranty.
You should have received a copy of the CC0 Public Domain Dedication along with this software. If not, see <http://creativecommons.org/publicdomain/zero/1.0/>.
*/
#include <stdio.h>
#include <stdint.h>
#include <stdlib.h>
#include <string.h>
#include <math.h>
#include <complex.h>
#include <time.h>
#include "pcm.h"
#include "ddc.h"
#include "buffer.h"
#include "yuv.h"
#include "utils.h"
#include "img.h"
void process_line(uint8_t *pixel, uint8_t *y_pixel, uint8_t *uv_pixel, int y_width, int uv_width, int width, int height, int n)
{
// we only process after 2 full lines: on odd lines
if (n % 2)
for (int y = n-1, l = 0; l < 2 && y < height; l++, y++) {
for (int x = 0; x < width; x++) {
#if DN && UP
uint8_t Y = y_pixel[x + l*y_width];
uint8_t U = uv_pixel[x/2 + uv_width];
uint8_t V = uv_pixel[x/2];
#else
float y_xf = (float)x * (float)y_width / (float)width;
float uv_xf = (float)x * (float)uv_width / (float)width;
int y_x0 = y_xf;
int uv_x0 = uv_xf;
int y_x1 = fclampf(0, y_width, y_xf + 1);
int uv_x1 = fclampf(0, uv_width, uv_xf + 1);
uint8_t Y = srgb(flerpf(linear(y_pixel[y_x0 + l*y_width]), linear(y_pixel[y_x1 + l*y_width]), y_xf - (float)y_x0));
uint8_t U = flerpf(uv_pixel[uv_x0 + uv_width], uv_pixel[uv_x1 + uv_width], uv_xf - (float)uv_x0);
uint8_t V = flerpf(uv_pixel[uv_x0], uv_pixel[uv_x1], uv_xf - (float)uv_x0);
#endif
uint8_t *p = pixel + 3 * width * y + 3 * x;
p[0] = R_YUV(Y, U, V);
p[1] = G_YUV(Y, U, V);
p[2] = B_YUV(Y, U, V);
}
}
}
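/*
Editor's note (added, not part of the original file): a short note on the
buffer layout process_line() consumes. Robot 36 sends a full-resolution Y scan
on every line but only one half-resolution chroma scan per line, V with the
even line and U with the odd line, so each pair of output lines shares one U
row and one V row of uv_width samples. With DN && UP there is exactly one
stored sample per output pixel, e.g. pixel x = 200 reads Y from y_pixel[200],
V from uv_pixel[100] and U from uv_pixel[100 + uv_width]; otherwise the rows
are stretched to the 320-pixel width by linear interpolation, with Y
interpolated in linear light via linear()/srgb().
*/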
int vis_code(int *reset, int *code, float cnt_freq, float drate)
{
const float tolerance = 0.9;
const float length = 0.03;
static int ss_ticks = 0;
static int lo_ticks = 0;
static int hi_ticks = 0;
ss_ticks = fabsf(cnt_freq - 1200.0) < 50.0 ? ss_ticks + 1 : 0;
lo_ticks = fabsf(cnt_freq - 1300.0) < 50.0 ? lo_ticks + 1 : 0;
hi_ticks = fabsf(cnt_freq - 1100.0) < 50.0 ? hi_ticks + 1 : 0;
int sig_ss = ss_ticks >= (int)(drate * tolerance * length) ? 1 : 0;
int sig_lo = lo_ticks >= (int)(drate * tolerance * length) ? 1 : 0;
int sig_hi = hi_ticks >= (int)(drate * tolerance * length) ? 1 : 0;
// we only want a pulse for the bits
ss_ticks = sig_ss ? 0 : ss_ticks;
lo_ticks = sig_lo ? 0 : lo_ticks;
hi_ticks = sig_hi ? 0 : hi_ticks;
static int ticks = -1;
ticks++;
static int bit = -1;
static int byte = 0;
if (*reset) {
bit = -1;
*reset = 0;
}
if (bit < 0) {
if (sig_ss) {
ticks = 0;
byte = 0;
bit = 0;
}
return 0;
}
if (ticks <= (int)(drate * 10.0 * length * (2.0 - tolerance))) {
if (sig_ss) {
bit = -1;
*code = byte;
return 1;
}
if (bit < 8) {
if (sig_lo) bit++;
if (sig_hi) byte |= 1 << bit++;
}
return 0;
}
// stop bit is missing.
if (bit >= 8) {
bit = -1;
*code = byte;
return 1;
}
// something went wrong and we shouldn't be here. return what we got anyway.
bit = -1;
*code = byte;
return 1;
}
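/*
Editor's note (added, not part of the original file): a quick, self-contained
sanity check of the 0x88 constant tested in main() below, assuming the usual
SSTV VIS layout that this decoder reads: 7 data bits sent LSB first
(1100 Hz = 1, 1300 Hz = 0) followed by one even-parity bit. Robot 36 is VIS
code 8, which has a single 1 bit, so the parity bit is also 1 and the
assembled byte is (1 << 7) | 8 = 0x88.
*/
static inline int expected_vis_byte(int code)
{
	int ones = 0;
	for (int i = 0; i < 7; i++)
		ones += (code >> i) & 1;
	return ((ones & 1) << 7) | (code & 0x7f); /* even parity lands in bit 7 */
}
/* expected_vis_byte(8) == 0x88, matching the check in main() below. */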
int cal_header(float cnt_freq, float dat_freq, float drate)
{
const float break_len = 0.01;
const float leader_len = 0.3;
const float break_tolerance = 0.7;
const float leader_tolerance = 0.3;
static float dat_avg = 1900.0;
const float dat_a = 1.0 / (drate * 0.00238 + 1.0);
dat_avg = dat_a * dat_freq + (1.0 - dat_a) * dat_avg;
static int break_ticks = 0;
static int leader_ticks = 0;
break_ticks = fabsf(cnt_freq - 1200.0) < 50.0 ? break_ticks + 1 : 0;
leader_ticks = fabsf(dat_avg - 1900.0) < 50.0 ? leader_ticks + 1 : 0;
int sig_break = break_ticks >= (int)(drate * break_tolerance * break_len) ? 1 : 0;
int sig_leader = leader_ticks >= (int)(drate * leader_tolerance * leader_len) ? 1 : 0;
static int ticks = -1;
ticks++;
static int got_break = 0;
if (sig_leader && !sig_break && got_break &&
ticks >= (int)(drate * (leader_len + break_len) * leader_tolerance) &&
ticks <= (int)(drate * (leader_len + break_len) * (2.0 - leader_tolerance))) {
got_break = 0;
return 1;
}
if (sig_break && !sig_leader &&
ticks >= (int)(drate * break_len * break_tolerance) &&
ticks <= (int)(drate * break_len * (2.0 - break_tolerance)))
got_break = 1;
if (sig_leader && !sig_break) {
ticks = 0;
got_break = 0;
}
return 0;
}
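/*
Editor's note (added, not part of the original file): the waveform this
function looks for is the standard SSTV calibration header, which the encoder
in encode.c later in this diff produces as the first three entries of
seq_freq/seq_time: roughly 300 ms of 1900 Hz leader, a 10 ms break at 1200 Hz,
and another 300 ms of leader, immediately followed by the VIS start bit. With
the tolerances above, a leader is accepted after about 0.3 * 0.3 s = 90 ms of
averaged 1900 Hz and a break after 0.7 * 10 ms = 7 ms of 1200 Hz, so
cal_header() fires on the second leader once a plausible break has been seen.
*/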
int decode(int *reset, struct img **img, char *img_name, float cnt_freq, float dat_freq, float drate)
{
const int width = 320;
const int height = 240;
const double sync_porch_sec = 0.003l;
const double porch_sec = 0.0015l;
const double y_sec = 0.088l;
const double uv_sec = 0.044l;
const double hor_sec = 0.15l;
const double hor_sync_sec = 0.009l;
const double seperator_sec = 0.0045l;
const float sync_tolerance = 0.7;
static int sync_porch_len = 0;
static int porch_len = 0;
static int y_len = 0;
static int uv_len = 0;
static int hor_len = 0;
static int hor_sync_len = 0;
static int seperator_len = 0;
static int hor_ticks = 0;
static int latch_sync = 0;
static int y_width = 0;
static int uv_width = 0;
static uint8_t *y_pixel = 0;
static uint8_t *uv_pixel = 0;
static int init = 0;
if (!init) {
sync_porch_len = sync_porch_sec * drate;
porch_len = porch_sec * drate;
y_len = y_sec * drate;
uv_len = uv_sec * drate;
hor_len = hor_sec * drate;
hor_sync_len = hor_sync_sec * drate;
seperator_len = seperator_sec * drate;
y_width = y_len;
uv_width = uv_len;
y_pixel = malloc(y_width * 2);
memset(y_pixel, 0, y_width * 2);
uv_pixel = malloc(uv_width * 2);
memset(uv_pixel, 0, uv_width * 2);
init = 1;
}
hor_ticks = fabsf(cnt_freq - 1200.0) < 50.0 ? hor_ticks + 1 : 0;
// we want a pulse at the falling edge
latch_sync = hor_ticks > (int)(sync_tolerance * hor_sync_len) ? 1 : latch_sync;
int hor_sync = (cnt_freq > 1299.0) && latch_sync;
latch_sync = hor_sync ? 0 : latch_sync;
// we wait until first sync
if (*reset && !hor_sync)
return 0;
static int y = 0;
static int odd = 0;
static int y_pixel_x = 0;
static int uv_pixel_x = 0;
static int ticks = -1;
ticks++;
// data comes after first sync
if (*reset && hor_sync) {
*reset = 0;
ticks = 0;
y_pixel_x = 0;
uv_pixel_x = 0;
y = 0;
odd = 0;
if (*img)
close_img(*img);
if (img_name) {
if (!open_img_write(img, img_name, width, height))
exit(1);
} else {
if (!open_img_write(img, string_time("%F_%T.ppm"), width, height))
exit(1);
}
return 0;
}
// if horizontal sync is too early, we reset to the beginning instead of ignoring
if (hor_sync && (ticks < (hor_len - sync_porch_len))) {
ticks = 0;
y_pixel_x = 0;
uv_pixel_x = 0;
}
// we always sync if sync pulse is where it should be.
if (hor_sync && (ticks >= (hor_len - sync_porch_len)) &&
(ticks < (hor_len + sync_porch_len))) {
process_line((*img)->pixel, y_pixel, uv_pixel, y_width, uv_width, width, height, y++);
if (y == height) {
close_img(*img);
*img = 0;
return 1;
}
odd ^= 1;
ticks = 0;
y_pixel_x = 0;
uv_pixel_x = 0;
}
// if horizontal sync is missing, we extrapolate from last sync
if (ticks >= (hor_len + sync_porch_len)) {
process_line((*img)->pixel, y_pixel, uv_pixel, y_width, uv_width, width, height, y++);
if (y == height) {
close_img(*img);
*img = 0;
return 1;
}
odd ^= 1;
ticks -= hor_len;
// we are not at the pixels yet, so no correction here
y_pixel_x = 0;
uv_pixel_x = 0;
}
static int sep_count = 0;
if ((ticks > (sync_porch_len + y_len)) &&
(ticks < (sync_porch_len + y_len + seperator_len)))
sep_count += dat_freq < 1900.0 ? 1 : -1;
// we try to correct from odd / even separator
if (sep_count && (ticks > (sync_porch_len + y_len + seperator_len))) {
odd = sep_count < 0;
sep_count = 0;
}
if ((y_pixel_x < y_width) && (ticks >= sync_porch_len))
y_pixel[y_pixel_x++ + (y % 2) * y_width] = fclampf(255.0 * (dat_freq - 1500.0) / 800.0, 0.0, 255.0);
if ((uv_pixel_x < uv_width) && (ticks >= (sync_porch_len + y_len + seperator_len + porch_len)))
uv_pixel[uv_pixel_x++ + odd * uv_width] = fclampf(255.0 * (dat_freq - 1500.0) / 800.0, 0.0, 255.0);
return 0;
}
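/*
Editor's note (added, not part of the original file): a worked check of the
per-line budget implied by the constants above, which is also where the mode
gets its name:
  9 ms sync + 3 ms sync porch + 88 ms Y + 4.5 ms separator
  + 1.5 ms porch + 44 ms U or V = 150 ms = hor_sec
  240 lines * 150 ms = 36 s per image.
Every quantity in decode() is kept in ticks of the demodulator output, i.e.
seconds * drate, so the same budget holds after the *_len values are scaled.
*/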
int demodulate(struct pcm *pcm, float *cnt_freq, float *dat_freq, float *drate)
{
static float rate;
static int channels;
static int64_t factor_L;
static int64_t factor_M;
static int out;
static float dstep;
static float complex cnt_last = -I;
static float complex dat_last = -I;
static float complex *cnt_q;
static float complex *dat_q;
static struct ddc *cnt_ddc;
static struct ddc *dat_ddc;
static struct buffer *buffer;
static int cnt_delay;
static int dat_delay;
static short *pcm_buff;
static int init = 0;
if (!init) {
init = 1;
rate = rate_pcm(pcm);
channels = channels_pcm(pcm);
#if DN && UP
// 320 / 0.088 = 160 / 0.044 = 40000 / 11 = 3636.(36)~ pixels per second for Y, U and V
factor_L = 40000;
factor_M = 11 * rate;
int64_t factor_D = gcd(factor_L, factor_M);
factor_L /= factor_D;
factor_M /= factor_D;
#endif
#if DN && !UP
factor_L = 1;
// factor_M * step should be smaller than pixel length
factor_M = rate * 0.088 / 320.0 / 2;
#endif
#if !DN
factor_L = 1;
factor_M = 1;
#endif
// we want an odd number of taps; 4 ms and 2 ms window lengths give the best results
int cnt_taps = 1 | (int)(rate * factor_L * 0.004);
int dat_taps = 1 | (int)(rate * factor_L * 0.002);
fprintf(stderr, "using %d and %d tap filter\n", cnt_taps, dat_taps);
*drate = rate * (float)factor_L / (float)factor_M;
dstep = 1.0 / *drate;
fprintf(stderr, "using factor of %ld/%ld, working at %.2fhz\n", factor_L, factor_M, *drate);
cnt_q = malloc(sizeof(float complex) * factor_L);
dat_q = malloc(sizeof(float complex) * factor_L);
// same factor to keep life simple and have accurate horizontal sync
cnt_ddc = alloc_ddc(factor_L, factor_M, 1200.0, 200.0, rate, cnt_taps, kaiser, 2.0);
dat_ddc = alloc_ddc(factor_L, factor_M, 1900.0, 800.0, rate, dat_taps, kaiser, 2.0);
// delay input by phase shift of other filter to synchronize outputs
cnt_delay = (dat_taps - 1) / (2 * factor_L);
dat_delay = (cnt_taps - 1) / (2 * factor_L);
// minimize delay
if (cnt_delay > dat_delay) {
cnt_delay -= dat_delay;
dat_delay = 0;
} else {
dat_delay -= cnt_delay;
cnt_delay = 0;
}
pcm_buff = (short *)malloc(sizeof(short) * channels * factor_M);
// 0.1 second history + enough room for delay and taps
int buff_len = 0.1 * rate + factor_M
+ fmaxf(cnt_delay, dat_delay)
+ fmaxf(cnt_taps, dat_taps) / factor_L;
buffer = alloc_buffer(buff_len);
// start immediately below
out = factor_L;
}
if (out >= factor_L) {
out = 0;
if (!read_pcm(pcm, pcm_buff, factor_M)) {
init = 0;
free(pcm_buff);
free_ddc(cnt_ddc);
free_ddc(dat_ddc);
free_buffer(buffer);
return 0;
}
float *buff = 0;
for (int j = 0; j < factor_M; j++)
buff = do_buffer(buffer, (float)pcm_buff[j * channels] / 32767.0);
do_ddc(cnt_ddc, buff + cnt_delay, cnt_q);
do_ddc(dat_ddc, buff + dat_delay, dat_q);
}
*cnt_freq = fclampf(1200.0 + cargf(cnt_q[out] * conjf(cnt_last)) / (2.0 * M_PI * dstep), 1100.0, 1300.0);
*dat_freq = fclampf(1900.0 + cargf(dat_q[out] * conjf(dat_last)) / (2.0 * M_PI * dstep), 1500.0, 2300.0);
if (cabsf(cnt_q[out]) > cabsf(dat_q[out]))
*dat_freq = 1500.0;
else
*cnt_freq = 1300.0;
cnt_last = cnt_q[out];
dat_last = dat_q[out];
out++;
return 1;
}
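/*
Editor's note (added, not part of the original file): the two fclampf() lines
above are a standard polar FM discriminator. With z[n] the complex baseband
sample from the DDC, the instantaneous frequency is estimated as
  f[n] = f_center + arg(z[n] * conj(z[n-1])) / (2 * pi * dstep)
with f_center = 1200 Hz for the sync channel and 1900 Hz for the data channel,
clamped to the band each channel may occupy. When the 1200 Hz filter output is
stronger than the 1900 Hz one, the data frequency is forced to 1500 Hz (black);
otherwise the sync frequency is pinned to 1300 Hz so image data cannot trigger
a sync. In the DN && UP configuration the L/M resampler makes drate exactly
40000/11 ~ 3636.36 Hz, i.e. one demodulated sample per Y pixel.
*/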
int main(int argc, char **argv)
{
struct pcm *pcm;
char *pcm_name = "default";
char *img_name = 0;
if (argc != 1)
pcm_name = argv[1];
if (argc == 3)
img_name = argv[2];
if (!open_pcm_read(&pcm, pcm_name))
return 1;
info_pcm(pcm);
float rate = rate_pcm(pcm);
if (rate * 0.088 < 320.0) {
fprintf(stderr, "%.0fhz samplerate too low\n", rate);
return 1;
}
int channels = channels_pcm(pcm);
if (channels > 1)
fprintf(stderr, "using first of %d channels\n", channels);
int vis_mode = 0;
int dat_mode = 0;
int vis_reset = 0;
int dat_reset = 0;
struct img *img = 0;
float cnt_freq = 0.0;
float dat_freq = 0.0;
float drate = 0.0;
while (demodulate(pcm, &cnt_freq, &dat_freq, &drate)) {
if (cal_header(cnt_freq, dat_freq, drate)) {
vis_mode = 1;
vis_reset = 1;
dat_mode = 0;
dat_reset = 1;
fprintf(stderr, "%s got calibration header\n", string_time("%F %T"));
}
if (vis_mode) {
int code = 0;
if (!vis_code(&vis_reset, &code, cnt_freq, drate))
continue;
if (0x88 != code) {
fprintf(stderr, "%s got unsupported VIS 0x%x, ignoring\n", string_time("%F %T"), code);
vis_mode = 0;
continue;
}
fprintf(stderr, "%s got VIS = 0x%x\n", string_time("%F %T"), code);
dat_mode = 1;
dat_reset = 1;
vis_mode = 0;
}
if (dat_mode) {
if (decode(&dat_reset, &img, img_name, cnt_freq, dat_freq, drate))
dat_mode = 0;
}
}
if (img)
close_img(img);
close_pcm(pcm);
return 0;
}

226
encode.c 100644
Wyświetl plik

@ -0,0 +1,226 @@
/*
robot36 - encode and decode images using SSTV in Robot 36 mode
Written in 2011 by <Ahmet Inan> <xdsopl@googlemail.com>
To the extent possible under law, the author(s) have dedicated all copyright and related and neighboring rights to this software to the public domain worldwide. This software is distributed without any warranty.
You should have received a copy of the CC0 Public Domain Dedication along with this software. If not, see <http://creativecommons.org/publicdomain/zero/1.0/>.
*/
#include <stdio.h>
#include <stdint.h>
#include <stdlib.h>
#include <string.h>
#include <math.h>
#include <complex.h>
#include <limits.h>
#include "yuv.h"
#include "utils.h"
#include "pcm.h"
#include "img.h"
struct img *img;
struct pcm *pcm;
complex float nco;
float hz2rad;
int channels;
short *buff;
int rate = 48000;
const double sync_porch_sec = 0.003l;
const double porch_sec = 0.0015l;
const double y_sec = 0.088l;
const double uv_sec = 0.044l;
const double hor_sync_sec = 0.009l;
const double seperator_sec = 0.0045l;
int sync_porch_len = 0;
int porch_len = 0;
int y_len = 0;
int uv_len = 0;
int hor_sync_len = 0;
int seperator_len = 0;
int add_sample(float val)
{
for (int i = 0; i < channels; i++)
buff[i] = (float)SHRT_MAX * val;
return write_pcm(pcm, buff, 1);
}
void add_freq(float freq)
{
add_sample(creal(nco));
nco *= cexpf(freq * hz2rad * I);
}
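/*
Editor's note (added, not part of the original file): nco is a constant-
magnitude phasor used as a numerically controlled oscillator (it is started at
amplitude 0.7 in main() to leave headroom). Each call emits the real part as
the next audio sample and then rotates the phasor by
freq * hz2rad = 2 * pi * freq / rate radians; at rate = 48000 a 1200 Hz tone
advances the phase by about 0.157 rad per sample. Frequency changes are
therefore phase-continuous, which keeps the transmitted spectrum clean.
*/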
void hor_sync()
{
for (int ticks = 0; ticks < hor_sync_len; ticks++)
add_freq(1200.0);
}
void sync_porch()
{
for (int ticks = 0; ticks < sync_porch_len; ticks++)
add_freq(1500.0);
}
void porch()
{
for (int ticks = 0; ticks < porch_len; ticks++)
add_freq(1900.0);
}
void even_seperator()
{
for (int ticks = 0; ticks < seperator_len; ticks++)
add_freq(1500.0);
}
void odd_seperator()
{
for (int ticks = 0; ticks < seperator_len; ticks++)
add_freq(2300.0);
}
void y_scan(int y)
{
for (int ticks = 0; ticks < y_len; ticks++) {
float xf = fclampf((320.0 * ticks) / (float)y_len, 0.0, 319.0);
int x0 = xf;
int x1 = fclampf(x0 + 1, 0.0, 319.0);
int off0 = 3 * y * img->width + 3 * x0;
int off1 = 3 * y * img->width + 3 * x1;
float R0 = linear(img->pixel[off0 + 0]);
float G0 = linear(img->pixel[off0 + 1]);
float B0 = linear(img->pixel[off0 + 2]);
float R1 = linear(img->pixel[off1 + 0]);
float G1 = linear(img->pixel[off1 + 1]);
float B1 = linear(img->pixel[off1 + 2]);
uint8_t R = srgb(flerpf(R0, R1, xf - (float)x0));
uint8_t G = srgb(flerpf(G0, G1, xf - (float)x0));
uint8_t B = srgb(flerpf(B0, B1, xf - (float)x0));
add_freq(1500.0 + 800.0 * Y_RGB(R, G, B) / 255.0);
}
}
void v_scan(int y)
{
for (int ticks = 0; ticks < uv_len; ticks++) {
float xf = fclampf((160.0 * ticks) / (float)uv_len, 0.0, 159.0);
int x0 = xf;
int x1 = fclampf(x0 + 1, 0.0, 159.0);
int evn0 = 3 * y * img->width + 6 * x0;
int evn1 = 3 * y * img->width + 6 * x1;
int odd0 = 3 * (y + 1) * img->width + 6 * x0;
int odd1 = 3 * (y + 1) * img->width + 6 * x1;
float R0 = (linear(img->pixel[evn0 + 0]) + linear(img->pixel[odd0 + 0]) + linear(img->pixel[evn0 + 3]) + linear(img->pixel[odd0 + 3])) / 4;
float G0 = (linear(img->pixel[evn0 + 1]) + linear(img->pixel[odd0 + 1]) + linear(img->pixel[evn0 + 4]) + linear(img->pixel[odd0 + 4])) / 4;
float B0 = (linear(img->pixel[evn0 + 2]) + linear(img->pixel[odd0 + 2]) + linear(img->pixel[evn0 + 5]) + linear(img->pixel[odd0 + 5])) / 4;
float R1 = (linear(img->pixel[evn1 + 0]) + linear(img->pixel[odd1 + 0]) + linear(img->pixel[evn1 + 3]) + linear(img->pixel[odd1 + 3])) / 4;
float G1 = (linear(img->pixel[evn1 + 1]) + linear(img->pixel[odd1 + 1]) + linear(img->pixel[evn1 + 4]) + linear(img->pixel[odd1 + 4])) / 4;
float B1 = (linear(img->pixel[evn1 + 2]) + linear(img->pixel[odd1 + 2]) + linear(img->pixel[evn1 + 5]) + linear(img->pixel[odd1 + 5])) / 4;
uint8_t R = srgb(flerpf(R0, R1, xf - (float)x0));
uint8_t G = srgb(flerpf(G0, G1, xf - (float)x0));
uint8_t B = srgb(flerpf(B0, B1, xf - (float)x0));
add_freq(1500.0 + 800.0 * V_RGB(R, G, B) / 255.0);
}
}
void u_scan(int y)
{
for (int ticks = 0; ticks < uv_len; ticks++) {
float xf = fclampf((160.0 * ticks) / (float)uv_len, 0.0, 159.0);
int x0 = xf;
int x1 = fclampf(x0 + 1, 0.0, 159.0);
int evn0 = 3 * (y - 1) * img->width + 6 * x0;
int evn1 = 3 * (y - 1) * img->width + 6 * x1;
int odd0 = 3 * y * img->width + 6 * x0;
int odd1 = 3 * y * img->width + 6 * x1;
float R0 = (linear(img->pixel[evn0 + 0]) + linear(img->pixel[odd0 + 0]) + linear(img->pixel[evn0 + 3]) + linear(img->pixel[odd0 + 3])) / 4;
float G0 = (linear(img->pixel[evn0 + 1]) + linear(img->pixel[odd0 + 1]) + linear(img->pixel[evn0 + 4]) + linear(img->pixel[odd0 + 4])) / 4;
float B0 = (linear(img->pixel[evn0 + 2]) + linear(img->pixel[odd0 + 2]) + linear(img->pixel[evn0 + 5]) + linear(img->pixel[odd0 + 5])) / 4;
float R1 = (linear(img->pixel[evn1 + 0]) + linear(img->pixel[odd1 + 0]) + linear(img->pixel[evn1 + 3]) + linear(img->pixel[odd1 + 3])) / 4;
float G1 = (linear(img->pixel[evn1 + 1]) + linear(img->pixel[odd1 + 1]) + linear(img->pixel[evn1 + 4]) + linear(img->pixel[odd1 + 4])) / 4;
float B1 = (linear(img->pixel[evn1 + 2]) + linear(img->pixel[odd1 + 2]) + linear(img->pixel[evn1 + 5]) + linear(img->pixel[odd1 + 5])) / 4;
uint8_t R = srgb(flerpf(R0, R1, xf - (float)x0));
uint8_t G = srgb(flerpf(G0, G1, xf - (float)x0));
uint8_t B = srgb(flerpf(B0, B1, xf - (float)x0));
add_freq(1500.0 + 800.0 * U_RGB(R, G, B) / 255.0);
}
}
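/*
Editor's note (added, not part of the original file): v_scan() and u_scan()
implement the 2:1 chroma subsampling of Robot 36. Each of the 160 chroma
samples per scan averages a 2x2 block of pixels in linear light - two
horizontal neighbours from the even line and the same two from the odd line -
and V is sent with the even line while U is sent with the odd line, matching
what decode.c expects. Every sample is mapped to 1500 Hz + 800 Hz * value/255,
i.e. 1500-2300 Hz, the same mapping decode.c inverts with
(dat_freq - 1500) / 800.
*/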
int main(int argc, char **argv)
{
if (argc < 2) {
fprintf(stderr, "usage: %s <input.ppm> <output.wav|default|hw:?,?> <rate>\n", argv[0]);
return 1;
}
if (!open_img_read(&img, argv[1]))
return 1;
if (320 != img->width || 240 != img->height) {
fprintf(stderr, "image not 320x240\n");
close_img(img);
return 1;
}
char *pcm_name = "default";
if (argc > 2)
pcm_name = argv[2];
if (argc > 3)
rate = atoi(argv[3]);
if (!open_pcm_write(&pcm, pcm_name, rate, 1, 37.5))
return 1;
rate = rate_pcm(pcm);
channels = channels_pcm(pcm);
sync_porch_len = rate * sync_porch_sec;
porch_len = rate * porch_sec;
y_len = rate * y_sec;
uv_len = rate * uv_sec;
hor_sync_len = rate * hor_sync_sec;
seperator_len = rate * seperator_sec;
// fprintf(stderr, "%d %d %d %d %d %d\n", sync_porch_len, porch_len, y_len, uv_len, hor_sync_len, seperator_len);
buff = (short *)malloc(sizeof(short)*channels);
info_pcm(pcm);
if (fabsf(porch_sec * rate - porch_len) > 0.0001)
fprintf(stderr, "this rate will not give accurate (smooth) results.\ntry 40000Hz and resample to %dHz\n", rate);
hz2rad = (2.0 * M_PI) / rate;
nco = -I * 0.7;
enum { N = 13 };
float seq_freq[N] = { 1900.0, 1200.0, 1900.0, 1200.0, 1300.0, 1300.0, 1300.0, 1100.0, 1300.0, 1300.0, 1300.0, 1100.0, 1200.0 };
float seq_time[N] = { 0.3, 0.01, 0.3, 0.03, 0.03, 0.03, 0.03, 0.03, 0.03, 0.03, 0.03, 0.03, 0.03 };
for (int ticks = 0; ticks < (int)(0.3 * rate); ticks++)
add_sample(0.0);
for (int i = 0; i < N; i++)
for (int ticks = 0; ticks < (int)(seq_time[i] * rate); ticks++)
add_freq(seq_freq[i]);
for (int y = 0; y < img->height; y++) {
// EVEN LINES
hor_sync();
sync_porch();
y_scan(y);
even_seperator();
porch();
v_scan(y);
// ODD LINES
y++;
hor_sync();
sync_porch();
y_scan(y);
odd_seperator();
porch();
u_scan(y);
}
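// Editor's note (added, not part of the original file): a quick length check,
// assuming a 320x240 input. The preamble is 0.3 s of silence plus the seq_time
// entries (0.3 + 0.01 + 0.3 + 10 * 0.03 = 0.91 s), and the image is 120 loop
// iterations * 2 lines * 150 ms = 36 s, about 37.2 s in total. The loop below
// pads with silence until write_pcm() stops accepting samples - for WAV output
// that is when the 37.5 s buffer requested from open_pcm_write() is full.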
while (add_sample(0.0));
close_pcm(pcm);
close_img(img);
return 0;
}

Wyświetl plik

@ -1,21 +0,0 @@
# Project-wide Gradle settings.
# IDE (e.g. Android Studio) users:
# Gradle settings configured through the IDE *will override*
# any settings specified in this file.
# For more details on how to configure your build environment visit
# http://www.gradle.org/docs/current/userguide/build_environment.html
# Specifies the JVM arguments used for the daemon process.
# The setting is particularly useful for tweaking memory settings.
org.gradle.jvmargs=-Xmx2048m -Dfile.encoding=UTF-8
# When configured, Gradle will run in incubating parallel mode.
# This option should only be used with decoupled projects. For more details, visit
# https://developer.android.com/r/tools/gradle-multi-project-decoupled-projects
# org.gradle.parallel=true
# AndroidX package structure to make it clearer which packages are bundled with the
# Android operating system, and which are packaged with your app's APK
# https://developer.android.com/topic/libraries/support-library/androidx-rn
android.useAndroidX=true
# Enables namespacing of each library's R class so that its R class includes only the
# resources declared in the library itself and none from the library's dependencies,
# thereby reducing the size of the R class for that library
android.nonTransitiveRClass=true

Wyświetl plik

@ -1,22 +0,0 @@
[versions]
agp = "8.3.2"
junit = "4.13.2"
junitVersion = "1.1.5"
espressoCore = "3.5.1"
appcompat = "1.6.1"
material = "1.11.0"
activity = "1.8.0"
constraintlayout = "2.1.4"
[libraries]
junit = { group = "junit", name = "junit", version.ref = "junit" }
ext-junit = { group = "androidx.test.ext", name = "junit", version.ref = "junitVersion" }
espresso-core = { group = "androidx.test.espresso", name = "espresso-core", version.ref = "espressoCore" }
appcompat = { group = "androidx.appcompat", name = "appcompat", version.ref = "appcompat" }
material = { group = "com.google.android.material", name = "material", version.ref = "material" }
activity = { group = "androidx.activity", name = "activity", version.ref = "activity" }
constraintlayout = { group = "androidx.constraintlayout", name = "constraintlayout", version.ref = "constraintlayout" }
[plugins]
androidApplication = { id = "com.android.application", version.ref = "agp" }

Plik binarny nie jest wyświetlany.

Wyświetl plik

@ -1,6 +0,0 @@
#Fri Apr 12 11:35:07 CEST 2024
distributionBase=GRADLE_USER_HOME
distributionPath=wrapper/dists
distributionUrl=https\://services.gradle.org/distributions/gradle-8.4-bin.zip
zipStoreBase=GRADLE_USER_HOME
zipStorePath=wrapper/dists

185
gradlew vendored
Wyświetl plik

@ -1,185 +0,0 @@
#!/usr/bin/env sh
#
# Copyright 2015 the original author or authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
##############################################################################
##
## Gradle start up script for UN*X
##
##############################################################################
# Attempt to set APP_HOME
# Resolve links: $0 may be a link
PRG="$0"
# Need this for relative symlinks.
while [ -h "$PRG" ] ; do
ls=`ls -ld "$PRG"`
link=`expr "$ls" : '.*-> \(.*\)$'`
if expr "$link" : '/.*' > /dev/null; then
PRG="$link"
else
PRG=`dirname "$PRG"`"/$link"
fi
done
SAVED="`pwd`"
cd "`dirname \"$PRG\"`/" >/dev/null
APP_HOME="`pwd -P`"
cd "$SAVED" >/dev/null
APP_NAME="Gradle"
APP_BASE_NAME=`basename "$0"`
# Add default JVM options here. You can also use JAVA_OPTS and GRADLE_OPTS to pass JVM options to this script.
DEFAULT_JVM_OPTS='"-Xmx64m" "-Xms64m"'
# Use the maximum available, or set MAX_FD != -1 to use that value.
MAX_FD="maximum"
warn () {
echo "$*"
}
die () {
echo
echo "$*"
echo
exit 1
}
# OS specific support (must be 'true' or 'false').
cygwin=false
msys=false
darwin=false
nonstop=false
case "`uname`" in
CYGWIN* )
cygwin=true
;;
Darwin* )
darwin=true
;;
MINGW* )
msys=true
;;
NONSTOP* )
nonstop=true
;;
esac
CLASSPATH=$APP_HOME/gradle/wrapper/gradle-wrapper.jar
# Determine the Java command to use to start the JVM.
if [ -n "$JAVA_HOME" ] ; then
if [ -x "$JAVA_HOME/jre/sh/java" ] ; then
# IBM's JDK on AIX uses strange locations for the executables
JAVACMD="$JAVA_HOME/jre/sh/java"
else
JAVACMD="$JAVA_HOME/bin/java"
fi
if [ ! -x "$JAVACMD" ] ; then
die "ERROR: JAVA_HOME is set to an invalid directory: $JAVA_HOME
Please set the JAVA_HOME variable in your environment to match the
location of your Java installation."
fi
else
JAVACMD="java"
which java >/dev/null 2>&1 || die "ERROR: JAVA_HOME is not set and no 'java' command could be found in your PATH.
Please set the JAVA_HOME variable in your environment to match the
location of your Java installation."
fi
# Increase the maximum file descriptors if we can.
if [ "$cygwin" = "false" -a "$darwin" = "false" -a "$nonstop" = "false" ] ; then
MAX_FD_LIMIT=`ulimit -H -n`
if [ $? -eq 0 ] ; then
if [ "$MAX_FD" = "maximum" -o "$MAX_FD" = "max" ] ; then
MAX_FD="$MAX_FD_LIMIT"
fi
ulimit -n $MAX_FD
if [ $? -ne 0 ] ; then
warn "Could not set maximum file descriptor limit: $MAX_FD"
fi
else
warn "Could not query maximum file descriptor limit: $MAX_FD_LIMIT"
fi
fi
# For Darwin, add options to specify how the application appears in the dock
if $darwin; then
GRADLE_OPTS="$GRADLE_OPTS \"-Xdock:name=$APP_NAME\" \"-Xdock:icon=$APP_HOME/media/gradle.icns\""
fi
# For Cygwin or MSYS, switch paths to Windows format before running java
if [ "$cygwin" = "true" -o "$msys" = "true" ] ; then
APP_HOME=`cygpath --path --mixed "$APP_HOME"`
CLASSPATH=`cygpath --path --mixed "$CLASSPATH"`
JAVACMD=`cygpath --unix "$JAVACMD"`
# We build the pattern for arguments to be converted via cygpath
ROOTDIRSRAW=`find -L / -maxdepth 1 -mindepth 1 -type d 2>/dev/null`
SEP=""
for dir in $ROOTDIRSRAW ; do
ROOTDIRS="$ROOTDIRS$SEP$dir"
SEP="|"
done
OURCYGPATTERN="(^($ROOTDIRS))"
# Add a user-defined pattern to the cygpath arguments
if [ "$GRADLE_CYGPATTERN" != "" ] ; then
OURCYGPATTERN="$OURCYGPATTERN|($GRADLE_CYGPATTERN)"
fi
# Now convert the arguments - kludge to limit ourselves to /bin/sh
i=0
for arg in "$@" ; do
CHECK=`echo "$arg"|egrep -c "$OURCYGPATTERN" -`
CHECK2=`echo "$arg"|egrep -c "^-"` ### Determine if an option
if [ $CHECK -ne 0 ] && [ $CHECK2 -eq 0 ] ; then ### Added a condition
eval `echo args$i`=`cygpath --path --ignore --mixed "$arg"`
else
eval `echo args$i`="\"$arg\""
fi
i=`expr $i + 1`
done
case $i in
0) set -- ;;
1) set -- "$args0" ;;
2) set -- "$args0" "$args1" ;;
3) set -- "$args0" "$args1" "$args2" ;;
4) set -- "$args0" "$args1" "$args2" "$args3" ;;
5) set -- "$args0" "$args1" "$args2" "$args3" "$args4" ;;
6) set -- "$args0" "$args1" "$args2" "$args3" "$args4" "$args5" ;;
7) set -- "$args0" "$args1" "$args2" "$args3" "$args4" "$args5" "$args6" ;;
8) set -- "$args0" "$args1" "$args2" "$args3" "$args4" "$args5" "$args6" "$args7" ;;
9) set -- "$args0" "$args1" "$args2" "$args3" "$args4" "$args5" "$args6" "$args7" "$args8" ;;
esac
fi
# Escape application args
save () {
for i do printf %s\\n "$i" | sed "s/'/'\\\\''/g;1s/^/'/;\$s/\$/' \\\\/" ; done
echo " "
}
APP_ARGS=`save "$@"`
# Collect all arguments for the java command, following the shell quoting and substitution rules
eval set -- $DEFAULT_JVM_OPTS $JAVA_OPTS $GRADLE_OPTS "\"-Dorg.gradle.appname=$APP_BASE_NAME\"" -classpath "\"$CLASSPATH\"" org.gradle.wrapper.GradleWrapperMain "$APP_ARGS"
exec "$JAVACMD" "$@"

89
gradlew.bat vendored
Wyświetl plik

@ -1,89 +0,0 @@
@rem
@rem Copyright 2015 the original author or authors.
@rem
@rem Licensed under the Apache License, Version 2.0 (the "License");
@rem you may not use this file except in compliance with the License.
@rem You may obtain a copy of the License at
@rem
@rem https://www.apache.org/licenses/LICENSE-2.0
@rem
@rem Unless required by applicable law or agreed to in writing, software
@rem distributed under the License is distributed on an "AS IS" BASIS,
@rem WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
@rem See the License for the specific language governing permissions and
@rem limitations under the License.
@rem
@if "%DEBUG%" == "" @echo off
@rem ##########################################################################
@rem
@rem Gradle startup script for Windows
@rem
@rem ##########################################################################
@rem Set local scope for the variables with windows NT shell
if "%OS%"=="Windows_NT" setlocal
set DIRNAME=%~dp0
if "%DIRNAME%" == "" set DIRNAME=.
set APP_BASE_NAME=%~n0
set APP_HOME=%DIRNAME%
@rem Resolve any "." and ".." in APP_HOME to make it shorter.
for %%i in ("%APP_HOME%") do set APP_HOME=%%~fi
@rem Add default JVM options here. You can also use JAVA_OPTS and GRADLE_OPTS to pass JVM options to this script.
set DEFAULT_JVM_OPTS="-Xmx64m" "-Xms64m"
@rem Find java.exe
if defined JAVA_HOME goto findJavaFromJavaHome
set JAVA_EXE=java.exe
%JAVA_EXE% -version >NUL 2>&1
if "%ERRORLEVEL%" == "0" goto execute
echo.
echo ERROR: JAVA_HOME is not set and no 'java' command could be found in your PATH.
echo.
echo Please set the JAVA_HOME variable in your environment to match the
echo location of your Java installation.
goto fail
:findJavaFromJavaHome
set JAVA_HOME=%JAVA_HOME:"=%
set JAVA_EXE=%JAVA_HOME%/bin/java.exe
if exist "%JAVA_EXE%" goto execute
echo.
echo ERROR: JAVA_HOME is set to an invalid directory: %JAVA_HOME%
echo.
echo Please set the JAVA_HOME variable in your environment to match the
echo location of your Java installation.
goto fail
:execute
@rem Setup the command line
set CLASSPATH=%APP_HOME%\gradle\wrapper\gradle-wrapper.jar
@rem Execute Gradle
"%JAVA_EXE%" %DEFAULT_JVM_OPTS% %JAVA_OPTS% %GRADLE_OPTS% "-Dorg.gradle.appname=%APP_BASE_NAME%" -classpath "%CLASSPATH%" org.gradle.wrapper.GradleWrapperMain %*
:end
@rem End local scope for the variables with windows NT shell
if "%ERRORLEVEL%"=="0" goto mainEnd
:fail
rem Set variable GRADLE_EXIT_CONSOLE if you need the _script_ return code instead of
rem the _cmd.exe /c_ return code!
if not "" == "%GRADLE_EXIT_CONSOLE%" exit 1
exit /b 1
:mainEnd
if "%OS%"=="Windows_NT" endlocal
:omega

34
img.c 100644
Wyświetl plik

@ -0,0 +1,34 @@
/*
robot36 - encode and decode images using SSTV in Robot 36 mode
Written in 2011 by <Ahmet Inan> <xdsopl@googlemail.com>
To the extent possible under law, the author(s) have dedicated all copyright and related and neighboring rights to this software to the public domain worldwide. This software is distributed without any warranty.
You should have received a copy of the CC0 Public Domain Dedication along with this software. If not, see <http://creativecommons.org/publicdomain/zero/1.0/>.
*/
#include <stdlib.h>
#include <string.h>
#include "img.h"
#include "ppm.h"
#include "sdl.h"
void close_img(struct img *img)
{
img->close(img);
}
int open_img_read(struct img **p, char *name)
{
if (strstr(name, ".ppm") == (name + (strlen(name) - strlen(".ppm"))))
return open_ppm_read(p, name);
return 0;
}
int open_img_write(struct img **p, char *name, int width, int height)
{
if (strstr(name, ".ppm") == (name + (strlen(name) - strlen(".ppm"))))
return open_ppm_write(p, name, width, height);
if (strstr(name, "sdl:") == name)
return open_sdl_write(p, name, width, height);
return 0;
}

27
img.h 100644
Wyświetl plik

@ -0,0 +1,27 @@
/*
robot36 - encode and decode images using SSTV in Robot 36 mode
Written in 2011 by <Ahmet Inan> <xdsopl@googlemail.com>
To the extent possible under law, the author(s) have dedicated all copyright and related and neighboring rights to this software to the public domain worldwide. This software is distributed without any warranty.
You should have received a copy of the CC0 Public Domain Dedication along with this software. If not, see <http://creativecommons.org/publicdomain/zero/1.0/>.
*/
#ifndef IMG_H
#define IMG_H
#include <stdint.h>
struct img {
void (*close)(struct img *);
uint8_t *pixel;
void *data;
int width;
int height;
};
void close_img(struct img *);
int open_img_read(struct img **, char *);
int open_img_write(struct img **, char *, int, int);
#endif

107
mmap_file.c 100644
Wyświetl plik

@ -0,0 +1,107 @@
/*
robot36 - encode and decode images using SSTV in Robot 36 mode
Written in 2011 by <Ahmet Inan> <xdsopl@googlemail.com>
To the extent possible under law, the author(s) have dedicated all copyright and related and neighboring rights to this software to the public domain worldwide. This software is distributed without any warranty.
You should have received a copy of the CC0 Public Domain Dedication along with this software. If not, see <http://creativecommons.org/publicdomain/zero/1.0/>.
*/
#include <stdio.h>
#include <sys/stat.h>
#include <fcntl.h>
#include <sys/mman.h>
#include <unistd.h>
#include "mmap_file.h"
int mmap_file_ro(void **p, char *name, size_t *size)
{
*size = 0;
int fd = open(name, O_RDONLY);
if (fd == -1) {
perror("open");
return 0;
}
struct stat sb;
if (fstat(fd, &sb) == -1) {
perror("fstat");
close(fd);
return 0;
}
if (!S_ISREG(sb.st_mode)) {
fprintf(stderr, "%s not a file\n", name);
close(fd);
return 0;
}
*p = mmap(0, sb.st_size, PROT_READ, MAP_SHARED, fd, 0);
if (*p == MAP_FAILED) {
perror("mmap");
close(fd);
return 0;
}
if (close(fd) == -1) {
perror ("close");
return 0;
}
*size = sb.st_size;
return 1;
}
int mmap_file_rw(void **p, char *name, size_t size)
{
int fd = open(name, O_RDWR|O_CREAT|O_TRUNC, S_IRUSR|S_IWUSR|S_IRGRP|S_IROTH);
if (fd == -1) {
perror("open");
return 0;
}
struct stat sb;
if (fstat(fd, &sb) == -1) {
perror("fstat");
close(fd);
return 0;
}
if (!S_ISREG(sb.st_mode)) {
fprintf(stderr, "%s not a file\n", name);
close(fd);
return 0;
}
if (lseek(fd, size - 1, SEEK_SET) == -1) {
perror("lseek");
close(fd);
return 0;
}
if (write(fd, "", 1) != 1) {
perror("write");
close(fd);
return 0;
}
*p = mmap(0, size, PROT_READ|PROT_WRITE, MAP_SHARED, fd, 0);
if (*p == MAP_FAILED) {
perror("mmap");
close(fd);
return 0;
}
if (close(fd) == -1) {
perror ("close");
return 0;
}
return 1;
}
int munmap_file(void *p, size_t size)
{
if (munmap(p, size) == -1) {
perror("munmap");
return 0;
}
return 1;
}

13
mmap_file.h 100644
Wyświetl plik

@ -0,0 +1,13 @@
/*
robot36 - encode and decode images using SSTV in Robot 36 mode
Written in 2011 by <Ahmet Inan> <xdsopl@googlemail.com>
To the extent possible under law, the author(s) have dedicated all copyright and related and neighboring rights to this software to the public domain worldwide. This software is distributed without any warranty.
You should have received a copy of the CC0 Public Domain Dedication along with this software. If not, see <http://creativecommons.org/publicdomain/zero/1.0/>.
*/
#ifndef MMAP_FILE_H
#define MMAP_FILE_H
int mmap_file_ro(void **, char *, size_t *);
int mmap_file_rw(void **, char *, size_t);
int munmap_file(void *, size_t);
#endif

63
pcm.c 100644
Wyświetl plik

@ -0,0 +1,63 @@
/*
robot36 - encode and decode images using SSTV in Robot 36 mode
Written in 2011 by <Ahmet Inan> <xdsopl@googlemail.com>
To the extent possible under law, the author(s) have dedicated all copyright and related and neighboring rights to this software to the public domain worldwide. This software is distributed without any warranty.
You should have received a copy of the CC0 Public Domain Dedication along with this software. If not, see <http://creativecommons.org/publicdomain/zero/1.0/>.
*/
#include <stdint.h>
#include <string.h>
#include <stdlib.h>
#include "pcm.h"
#include "alsa.h"
#include "wav.h"
void close_pcm(struct pcm *pcm)
{
pcm->close(pcm);
}
void info_pcm(struct pcm *pcm)
{
pcm->info(pcm);
}
int rate_pcm(struct pcm *pcm)
{
return pcm->rate(pcm);
}
int channels_pcm(struct pcm *pcm)
{
return pcm->channels(pcm);
}
int read_pcm(struct pcm *pcm, short *buff, int frames)
{
return pcm->rw(pcm, buff, frames);
}
int write_pcm(struct pcm *pcm, short *buff, int frames)
{
return pcm->rw(pcm, buff, frames);
}
int open_pcm_read(struct pcm **p, char *name)
{
if (strstr(name, "plughw:") == name || strstr(name, "hw:") == name || strstr(name, "default") == name)
return open_alsa_read(p, name);
if (strstr(name, ".wav") == (name + (strlen(name) - strlen(".wav"))))
return open_wav_read(p, name);
return 0;
}
int open_pcm_write(struct pcm **p, char *name, int rate, int channels, float seconds)
{
if (strstr(name, "plughw:") == name || strstr(name, "hw:") == name || strstr(name, "default") == name)
return open_alsa_write(p, name, rate, channels, seconds);
if (strstr(name, ".wav") == (name + (strlen(name) - strlen(".wav"))))
return open_wav_write(p, name, rate, channels, seconds);
return 0;
}

31
pcm.h 100644
Wyświetl plik

@ -0,0 +1,31 @@
/*
robot36 - encode and decode images using SSTV in Robot 36 mode
Written in 2011 by <Ahmet Inan> <xdsopl@googlemail.com>
To the extent possible under law, the author(s) have dedicated all copyright and related and neighboring rights to this software to the public domain worldwide. This software is distributed without any warranty.
You should have received a copy of the CC0 Public Domain Dedication along with this software. If not, see <http://creativecommons.org/publicdomain/zero/1.0/>.
*/
#ifndef PCM_H
#define PCM_H
struct pcm {
void (*close)(struct pcm *);
void (*info)(struct pcm *);
int (*rate)(struct pcm *);
int (*channels)(struct pcm *);
int (*rw)(struct pcm *, short *, int);
void *data;
};
void close_pcm(struct pcm *);
void info_pcm(struct pcm *);
int rate_pcm(struct pcm *);
int channels_pcm(struct pcm *);
int read_pcm(struct pcm *, short *, int);
int write_pcm(struct pcm *, short *, int);
int open_pcm_read(struct pcm **, char *);
int open_pcm_write(struct pcm **, char *, int, int, float);
#endif

116
ppm.c 100644
Wyświetl plik

@ -0,0 +1,116 @@
/*
robot36 - encode and decode images using SSTV in Robot 36 mode
Written in 2011 by <Ahmet Inan> <xdsopl@googlemail.com>
To the extent possible under law, the author(s) have dedicated all copyright and related and neighboring rights to this software to the public domain worldwide. This software is distributed without any warranty.
You should have received a copy of the CC0 Public Domain Dedication along with this software. If not, see <http://creativecommons.org/publicdomain/zero/1.0/>.
*/
#include <stdio.h>
#include <stdint.h>
#include <stdlib.h>
#include <string.h>
#include "mmap_file.h"
#include "img.h"
struct ppm {
struct img base;
void *p;
size_t size;
};
void close_ppm(struct img *img)
{
struct ppm *ppm = (struct ppm *)(img->data);
munmap_file(ppm->p, ppm->size);
free(ppm);
}
int open_ppm_read(struct img **p, char *name) {
struct ppm *ppm = (struct ppm *)malloc(sizeof(struct ppm));
ppm->base.close = close_ppm;
ppm->base.data = (void *)ppm;
if (!mmap_file_ro(&(ppm->p), name, &(ppm->size))) {
fprintf(stderr, "couldnt open image file %s\n", name);
free(ppm);
return 0;
}
char *chr = (char *)ppm->p;
size_t index = 0;
if (ppm->size < 10 || chr[index++] != 'P' || chr[index++] != '6') {
fprintf(stderr, "unsupported image file\n");
munmap_file(ppm->p, ppm->size);
free(ppm);
return 0;
}
char buff[16];
int integer[3];
for (int n = 0; n < 3; n++) {
for (; index < ppm->size; index++) {
if (chr[index] >= '0' && chr[index] <= '9')
break;
if (chr[index] == '#')
for (; index < ppm->size && chr[index] != '\n'; index++);
}
for (int i = 0; i < 16; i++)
buff[i] = 0;
for (int i = 0; index < ppm->size && i < 15; i++, index++) {
buff[i] = chr[index];
if (buff[i] < '0' || buff[i] > '9') {
buff[i] = 0;
break;
}
}
if (index >= ppm->size) {
fprintf(stderr, "broken image file\n");
munmap_file(ppm->p, ppm->size);
free(ppm);
return 0;
}
integer[n] = atoi(buff);
index++;
}
if (0 == integer[0] || 0 == integer[1] || integer[2] != 255) {
fprintf(stderr, "unsupported image file\n");
munmap_file(ppm->p, ppm->size);
free(ppm);
return 0;
}
ppm->base.width = integer[0];
ppm->base.height = integer[1];
ppm->base.pixel = (uint8_t *)ppm->p + index;
*p = &(ppm->base);
return 1;
}
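/*
Editor's note (added, not part of the original file): the parser above accepts
binary PPM ("P6") files: the magic "P6", then three whitespace-separated
decimal fields (width, height, maxval, with '#' comments allowed in between),
one whitespace byte, and then width*height raw RGB triplets. Only maxval 255
is accepted. For example, the writer below emits the 15-byte header
"P6 320 240 255\n" followed by 320*240*3 = 230400 bytes of pixel data.
*/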
int open_ppm_write(struct img **p, char *name, int width, int height)
{
struct ppm *ppm = (struct ppm *)malloc(sizeof(struct ppm));
ppm->base.close = close_ppm;
ppm->base.data = (void *)ppm;
char head[32];
snprintf(head, 32, "P6 %d %d 255\n", width, height);
ppm->size = strlen(head) + width * height * 3;
if (!mmap_file_rw(&(ppm->p), name, ppm->size)) {
fprintf(stderr, "couldnt open image file %s\n", name);
free(ppm);
return 0;
}
memcpy(ppm->p, head, strlen(head));
ppm->base.pixel = (uint8_t *)ppm->p + strlen(head);
memset(ppm->base.pixel, 0, width * height * 3);
*p = &(ppm->base);
return 1;
}

14
ppm.h 100644
Wyświetl plik

@ -0,0 +1,14 @@
/*
robot36 - encode and decode images using SSTV in Robot 36 mode
Written in 2011 by <Ahmet Inan> <xdsopl@googlemail.com>
To the extent possible under law, the author(s) have dedicated all copyright and related and neighboring rights to this software to the public domain worldwide. This software is distributed without any warranty.
You should have received a copy of the CC0 Public Domain Dedication along with this software. If not, see <http://creativecommons.org/publicdomain/zero/1.0/>.
*/
#ifndef PPM_H
#define PPM_H
#include "img.h"
int open_ppm_read(struct img **, char *);
int open_ppm_write(struct img **, char *, int, int);
#endif

162
sdl.c 100644
Wyświetl plik

@ -0,0 +1,162 @@
/*
robot36 - encode and decode images using SSTV in Robot 36 mode
Written in 2011 by <Ahmet Inan> <xdsopl@googlemail.com>
To the extent possible under law, the author(s) have dedicated all copyright and related and neighboring rights to this software to the public domain worldwide. This software is distributed without any warranty.
You should have received a copy of the CC0 Public Domain Dedication along with this software. If not, see <http://creativecommons.org/publicdomain/zero/1.0/>.
*/
#include <stdio.h>
#include <stdint.h>
#include <stdlib.h>
#include <string.h>
#include <SDL.h>
#include "img.h"
struct data {
uint8_t *pixel;
int quit;
int stop;
int busy;
int okay;
SDL_Surface *screen;
SDL_Thread *thread;
};
struct sdl {
struct img base;
struct data *data;
};
void handle_events()
{
SDL_Event event;
while (SDL_PollEvent(&event)) {
switch (event.type) {
case SDL_KEYDOWN:
switch (event.key.keysym.sym) {
case SDLK_q:
exit(0);
break;
case SDLK_ESCAPE:
exit(0);
break;
default:
break;
}
break;
case SDL_QUIT:
exit(0);
break;
default:
break;
}
}
}
int update_sdl(void *ptr)
{
struct data *data = (struct data *)ptr;
while (!data->quit) {
if (data->stop) {
data->busy = 0;
SDL_Delay(100);
continue;
}
data->busy = 1;
if (data->okay) {
for (int i = 0; i < data->screen->w * data->screen->h; i++) {
uint8_t *src = data->pixel + i * 3;
uint8_t *dst = data->screen->pixels + i * 4;
dst[0] = src[2];
dst[1] = src[1];
dst[2] = src[0];
dst[3] = 0;
}
}
SDL_Flip(data->screen);
SDL_Delay(100);
handle_events();
}
return 0;
}
void close_sdl(struct img *img)
{
struct sdl *sdl = (struct sdl *)(img->data);
sdl->data->okay = 0;
sdl->data->stop = 1;
while (sdl->data->busy)
SDL_Delay(50);
sdl->data->stop = 0;
free(sdl->base.pixel);
free(sdl);
}
int open_sdl_write(struct img **p, char *name, int width, int height)
{
struct sdl *sdl = (struct sdl *)malloc(sizeof(struct sdl));
sdl->base.close = close_sdl;
sdl->base.data = (void *)sdl;
static struct data *data = 0;
if (!data) {
data = (struct data *)malloc(sizeof(struct data));
data->quit = 0;
data->stop = 1;
data->okay = 0;
data->busy = 0;
atexit(SDL_Quit);
SDL_Init(SDL_INIT_VIDEO);
data->thread = SDL_CreateThread(update_sdl, data);
if (!data->thread) {
fprintf(stderr, "Unable to create thread: %s\n", SDL_GetError());
SDL_Quit();
free(data);
free(sdl);
return 0;
}
}
data->okay = 0;
data->stop = 1;
while (data->busy)
SDL_Delay(50);
data->screen = SDL_SetVideoMode(width, height, 32, SDL_SWSURFACE);
if (!data->screen) {
fprintf(stderr, "couldnt open %s window %dx%d@32\n", name, width, height);
data->quit = 1;
SDL_WaitThread(data->thread, 0);
SDL_Quit();
free(data);
free(sdl);
return 0;
}
if (data->screen->format->BytesPerPixel != 4 || data->screen->w != width || data->screen->h != height) {
fprintf(stderr, "requested %dx%d@32 but got %s window %dx%d@32\n", width, height, name, data->screen->w, data->screen->h);
data->quit = 1;
SDL_WaitThread(data->thread, 0);
SDL_Quit();
free(data);
free(sdl);
return 0;
}
SDL_WM_SetCaption("robot36", "robot36");
SDL_EnableKeyRepeat(SDL_DEFAULT_REPEAT_DELAY, SDL_DEFAULT_REPEAT_INTERVAL);
data->pixel = malloc(width * height * 3);
memset(data->pixel, 0, width * height * 3);
memset(data->screen->pixels, 0, width * height * 4);
sdl->base.width = width;
sdl->base.height = height;
sdl->data = data;
sdl->base.pixel = data->pixel;
data->okay = 1;
data->stop = 0;
*p = &(sdl->base);
return 1;
}

13
sdl.h 100644
Wyświetl plik

@ -0,0 +1,13 @@
/*
robot36 - encode and decode images using SSTV in Robot 36 mode
Written in 2011 by <Ahmet Inan> <xdsopl@googlemail.com>
To the extent possible under law, the author(s) have dedicated all copyright and related and neighboring rights to this software to the public domain worldwide. This software is distributed without any warranty.
You should have received a copy of the CC0 Public Domain Dedication along with this software. If not, see <http://creativecommons.org/publicdomain/zero/1.0/>.
*/
#ifndef SDL_H
#define SDL_H
#include "img.h"
int open_sdl_write(struct img **, char *, int, int);
#endif

Wyświetl plik

@ -1,23 +0,0 @@
pluginManagement {
repositories {
google {
content {
includeGroupByRegex("com\\.android.*")
includeGroupByRegex("com\\.google.*")
includeGroupByRegex("androidx.*")
}
}
mavenCentral()
gradlePluginPortal()
}
}
dependencyResolutionManagement {
repositoriesMode.set(RepositoriesMode.FAIL_ON_PROJECT_REPOS)
repositories {
google()
mavenCentral()
}
}
rootProject.name = "Robot36"
include ':app'

BIN
smpte.ppm 100644

Plik binarny nie jest wyświetlany.

43
utils.h 100644
Wyświetl plik

@ -0,0 +1,43 @@
/*
robot36 - encode and decode images using SSTV in Robot 36 mode
Written in 2011 by <Ahmet Inan> <xdsopl@googlemail.com>
To the extent possible under law, the author(s) have dedicated all copyright and related and neighboring rights to this software to the public domain worldwide. This software is distributed without any warranty.
You should have received a copy of the CC0 Public Domain Dedication along with this software. If not, see <http://creativecommons.org/publicdomain/zero/1.0/>.
*/
#ifndef UTILS_H
#define UTILS_H
#include <stdint.h>
#include <time.h>
int64_t gcd(int64_t a, int64_t b)
{
int64_t c;
while ((c = a % b)) {
a = b;
b = c;
}
return b;
}
float fclampf(float x, float min, float max)
{
float tmp = x < min ? min : x;
return tmp > max ? max : tmp;
}
float flerpf(float a, float b, float x)
{
return a * (1.0 - x) + b * x;
}
char *string_time(char *fmt)
{
static char s[64];
time_t now = time(0);
strftime(s, sizeof(s), fmt, localtime(&now));
return s;
}
#endif

163
wav.c 100644
Wyświetl plik

@ -0,0 +1,163 @@
/*
robot36 - encode and decode images using SSTV in Robot 36 mode
Written in 2011 by <Ahmet Inan> <xdsopl@googlemail.com>
To the extent possible under law, the author(s) have dedicated all copyright and related and neighboring rights to this software to the public domain worldwide. This software is distributed without any warranty.
You should have received a copy of the CC0 Public Domain Dedication along with this software. If not, see <http://creativecommons.org/publicdomain/zero/1.0/>.
*/
#include <stdio.h>
#include <stdint.h>
#include <string.h>
#include <stdlib.h>
#include "wav.h"
#include "mmap_file.h"
struct wav_head {
uint32_t ChunkID;
uint32_t ChunkSize;
uint32_t Format;
uint32_t Subchunk1ID;
uint32_t Subchunk1Size;
uint16_t AudioFormat;
uint16_t NumChannels;
uint32_t SampleRate;
uint32_t ByteRate;
uint16_t BlockAlign;
uint16_t BitsPerSample;
uint32_t Subchunk2ID;
uint32_t Subchunk2Size;
};
struct wav {
struct pcm base;
void *p;
short *b;
size_t size;
int index;
int frames;
int r;
int c;
};
void close_wav(struct pcm *pcm)
{
struct wav *wav = (struct wav *)(pcm->data);
munmap_file(wav->p, wav->size);
free(wav);
}
void info_wav(struct pcm *pcm)
{
struct wav *wav = (struct wav *)(pcm->data);
fprintf(stderr, "%d channel(s), %d rate, %.2f seconds\n", wav->c, wav->r, (float)wav->frames / (float)wav->r);
}
int rate_wav(struct pcm *pcm)
{
struct wav *wav = (struct wav *)(pcm->data);
return wav->r;
}
int channels_wav(struct pcm *pcm)
{
struct wav *wav = (struct wav *)(pcm->data);
return wav->c;
}
int read_wav(struct pcm *pcm, short *buff, int frames)
{
struct wav *wav = (struct wav *)(pcm->data);
if ((wav->index + frames) > wav->frames)
return 0;
memcpy(buff, wav->b + wav->index * wav->c, sizeof(short) * frames * wav->c);
wav->index += frames;
return 1;
}
int write_wav(struct pcm *pcm, short *buff, int frames)
{
struct wav *wav = (struct wav *)(pcm->data);
if ((wav->index + frames) > wav->frames)
return 0;
memcpy(wav->b + wav->index * wav->c, buff, sizeof(short) * frames * wav->c);
wav->index += frames;
return 1;
}
int open_wav_read(struct pcm **p, char *name)
{
struct wav *wav = (struct wav *)malloc(sizeof(struct wav));
wav->base.close = close_wav;
wav->base.info = info_wav;
wav->base.rate = rate_wav;
wav->base.channels = channels_wav;
wav->base.rw = read_wav;
wav->base.data = (void *)wav;
if (!mmap_file_ro(&wav->p, name, &wav->size)) {
fprintf(stderr, "couldnt open wav file %s!\n", name);
free(wav);
return 0;
}
struct wav_head *head = (struct wav_head *)wav->p;
wav->b = (short *)(wav->p + sizeof(struct wav_head));
if (head->ChunkID != 0x46464952 || head->Format != 0x45564157 ||
head->Subchunk1ID != 0x20746d66 || head->Subchunk1Size != 16 ||
head->AudioFormat != 1 || head->Subchunk2ID != 0x61746164) {
fprintf(stderr, "unsupported WAV file!\n");
munmap_file(wav->p, wav->size);
free(wav);
return 0;
}
if (head->BitsPerSample != 16) {
fprintf(stderr, "only 16bit WAV supported!\n");
munmap_file(wav->p, wav->size);
free(wav);
return 0;
}
wav->index = 0;
wav->frames = head->Subchunk2Size / (sizeof(short) * head->NumChannels);
wav->r = head->SampleRate;
wav->c = head->NumChannels;
*p = &(wav->base);
return 1;
}
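/*
Editor's note (added, not part of the original file): the magic numbers above
are the RIFF/WAVE chunk tags read as little-endian 32-bit integers:
  0x46464952 = "RIFF"   0x45564157 = "WAVE"
  0x20746d66 = "fmt "   0x61746164 = "data"
Only canonical 16-bit PCM files with the "data" chunk directly after a 16-byte
"fmt " chunk are accepted; files with extra chunks (e.g. LIST/INFO) are
rejected as unsupported.
*/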
int open_wav_write(struct pcm **p, char *name, int rate, int channels, float seconds)
{
struct wav *wav = (struct wav *)malloc(sizeof(struct wav));
wav->base.close = close_wav;
wav->base.info = info_wav;
wav->base.rate = rate_wav;
wav->base.channels = channels_wav;
wav->base.rw = write_wav;
wav->base.data = (void *)wav;
int frames = seconds * rate;
wav->size = frames * channels * sizeof(short) + sizeof(struct wav_head);
if (!mmap_file_rw(&wav->p, name, wav->size)) {
fprintf(stderr, "couldnt open wav file %s!\n", name);
free(wav);
return 0;
}
struct wav_head *head = (struct wav_head *)wav->p;
wav->b = (short *)(wav->p + sizeof(struct wav_head));
head->ChunkID = 0x46464952;
head->ChunkSize = 36 + frames * sizeof(short) * channels;
head->Format = 0x45564157;
head->Subchunk1ID = 0x20746d66;
head->Subchunk1Size = 16;
head->AudioFormat = 1;
head->NumChannels = channels;
head->SampleRate = rate;
head->ByteRate = 2 * rate;
head->BlockAlign = 2;
head->BitsPerSample = 16;
head->Subchunk2ID = 0x61746164;
head->Subchunk2Size = frames * sizeof(short) * channels;
wav->r = rate;
wav->c = channels;
wav->frames = frames;
wav->index = 0;
*p = &(wav->base);
return 1;
}

14
wav.h 100644
Wyświetl plik

@ -0,0 +1,14 @@
/*
robot36 - encode and decode images using SSTV in Robot 36 mode
Written in 2011 by <Ahmet Inan> <xdsopl@googlemail.com>
To the extent possible under law, the author(s) have dedicated all copyright and related and neighboring rights to this software to the public domain worldwide. This software is distributed without any warranty.
You should have received a copy of the CC0 Public Domain Dedication along with this software. If not, see <http://creativecommons.org/publicdomain/zero/1.0/>.
*/
#ifndef WAV_H
#define WAV_H
#include "pcm.h"
int open_wav_read(struct pcm **, char *);
int open_wav_write(struct pcm **, char *, int, int, float);
#endif

54
window.c 100644
Wyświetl plik

@ -0,0 +1,54 @@
/*
robot36 - encode and decode images using SSTV in Robot 36 mode
Written in 2011 by <Ahmet Inan> <xdsopl@googlemail.com>
To the extent possible under law, the author(s) have dedicated all copyright and related and neighboring rights to this software to the public domain worldwide. This software is distributed without any warranty.
You should have received a copy of the CC0 Public Domain Dedication along with this software. If not, see <http://creativecommons.org/publicdomain/zero/1.0/>.
*/
#include <math.h>
#include "window.h"
float sinc(float x)
{
return 0 == x ? 1.0 : sinf(M_PI * x) / (M_PI * x);
}
float hann(float n, float N, float a)
{
(void)a;
return 0.5 * (1.0 - cosf(2.0 * M_PI * n / (N - 1.0)));
}
float hamming(float n, float N, float a)
{
(void)a;
return 0.54 - 0.46 * cosf(2.0 * M_PI * n / (N - 1.0));
}
float lanczos(float n, float N, float a)
{
(void)a;
return sinc(2.0 * n / (N - 1.0) - 1.0);
}
float gauss(float n, float N, float o)
{
return expf(- 1.0/2.0 * powf((n - (N - 1.0) / 2.0) / (o * (N - 1.0) / 2.0), 2.0));
}
float i0f(float x)
{
// converges for -3*M_PI:3*M_PI in less than 20 iterations
float sum = 1.0, val = 1.0, c = 0.0;
for (int n = 1; n < 20; n++) {
float tmp = x / (2 * n);
val *= tmp * tmp;
float y = val - c;
float t = sum + y;
c = (t - sum) - y;
sum = t;
}
return sum;
}
float kaiser(float n, float N, float a)
{
return i0f(M_PI * a * sqrtf(1.0 - powf((2.0 * n) / (N - 1.0) - 1.0, 2.0))) / i0f(M_PI * a);
}
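/*
Editor's note (added, not part of the original file): i0f() evaluates the
series for the modified Bessel function of the first kind,
  I0(x) = sum_{n>=0} ((x/2)^n / n!)^2,
building each term as val *= (x/(2n))^2 and using Kahan compensated summation
so the 20-term truncation stays accurate. kaiser() is then the textbook Kaiser
window
  w(n) = I0(pi * a * sqrt(1 - (2n/(N-1) - 1)^2)) / I0(pi * a),
which decode.c earlier in this diff uses with a = 2.0 when building the DDC
filter taps.
*/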

19
window.h 100644
Wyświetl plik

@ -0,0 +1,19 @@
/*
robot36 - encode and decode images using SSTV in Robot 36 mode
Written in 2011 by <Ahmet Inan> <xdsopl@googlemail.com>
To the extent possible under law, the author(s) have dedicated all copyright and related and neighboring rights to this software to the public domain worldwide. This software is distributed without any warranty.
You should have received a copy of the CC0 Public Domain Dedication along with this software. If not, see <http://creativecommons.org/publicdomain/zero/1.0/>.
*/
#ifndef WINDOW_H
#define WINDOW_H
float sinc(float);
float hann(float, float, float);
float hamming(float, float, float);
float lanczos(float, float, float);
float gauss(float, float, float);
float kaiser(float, float, float);
#endif

67
yuv.c 100644
Wyświetl plik

@ -0,0 +1,67 @@
/*
robot36 - encode and decode images using SSTV in Robot 36 mode
Written in 2011 by <Ahmet Inan> <xdsopl@googlemail.com>
To the extent possible under law, the author(s) have dedicated all copyright and related and neighboring rights to this software to the public domain worldwide. This software is distributed without any warranty.
You should have received a copy of the CC0 Public Domain Dedication along with this software. If not, see <http://creativecommons.org/publicdomain/zero/1.0/>.
*/
#include "yuv.h"
#include <math.h>
uint8_t yuv_clamp(float x)
{
float tmp = x < 0.0 ? 0.0 : x;
return tmp > 255.0 ? 255.0 : tmp;
}
uint8_t srgb(float linear)
{
float v = fminf(fmaxf(linear, 0.0f), 1.0f);
float K0 = 0.04045f;
float a = 0.055f;
float phi = 12.92f;
float gamma = 2.4f;
float srgb = v <= K0 / phi ? v * phi : (1.0f + a) * powf(v, 1.0f / gamma) - a;
return 255.0f * srgb;
}
float linear(uint8_t srgb)
{
float v = srgb / 255.0f;
float K0 = 0.04045f;
float a = 0.055f;
float phi = 12.92f;
float gamma = 2.4f;
float linear = v <= K0 ? v / phi : powf((v + a) / (1.0f + a), gamma);
return linear;
}
uint8_t R_YUV(uint8_t Y, uint8_t U, uint8_t V)
{
(void)U;
return yuv_clamp(0.003906 * ((298.082 * (Y - 16.0)) + (408.583 * (V - 128))));
}
uint8_t G_YUV(uint8_t Y, uint8_t U, uint8_t V)
{
return yuv_clamp(0.003906 * ((298.082 * (Y - 16.0)) + (-100.291 * (U - 128)) + (-208.12 * (V - 128))));
}
uint8_t B_YUV(uint8_t Y, uint8_t U, uint8_t V)
{
(void)V;
return yuv_clamp(0.003906 * ((298.082 * (Y - 16.0)) + (516.411 * (U - 128))));
}
uint8_t Y_RGB(uint8_t R, uint8_t G, uint8_t B)
{
return yuv_clamp(16.0 + (0.003906 * ((65.738 * R) + (129.057 * G) + (25.064 * B))));
}
uint8_t V_RGB(uint8_t R, uint8_t G, uint8_t B)
{
return yuv_clamp(128.0 + (0.003906 * ((112.439 * R) + (-94.154 * G) + (-18.285 * B))));
}
uint8_t U_RGB(uint8_t R, uint8_t G, uint8_t B)
{
return yuv_clamp(128.0 + (0.003906 * ((-37.945 * R) + (-74.494 * G) + (112.439 * B))));
}
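/*
Editor's note (added, not part of the original file): the constant 0.003906 is
1/256, so these are the familiar ITU-R BT.601 "studio swing" conversions with
the coefficients pre-multiplied by 256, e.g.
  R = 1.164 * (Y - 16) + 1.596 * (V - 128)        (298.082/256, 408.583/256)
  Y = 16 + 0.257 * R + 0.504 * G + 0.098 * B      (65.738/256, 129.057/256, 25.064/256)
so Y spans 16..235 and U/V are centred on 128, which matches the YCrCb
convention commonly quoted for the Robot SSTV modes.
*/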

21
yuv.h 100644
Wyświetl plik

@ -0,0 +1,21 @@
/*
robot36 - encode and decode images using SSTV in Robot 36 mode
Written in 2011 by <Ahmet Inan> <xdsopl@googlemail.com>
To the extent possible under law, the author(s) have dedicated all copyright and related and neighboring rights to this software to the public domain worldwide. This software is distributed without any warranty.
You should have received a copy of the CC0 Public Domain Dedication along with this software. If not, see <http://creativecommons.org/publicdomain/zero/1.0/>.
*/
#ifndef YUV_H
#define YUV_H
#include <stdint.h>
uint8_t srgb(float linear);
float linear(uint8_t srgb);
uint8_t R_YUV(uint8_t, uint8_t, uint8_t);
uint8_t G_YUV(uint8_t, uint8_t, uint8_t);
uint8_t B_YUV(uint8_t, uint8_t, uint8_t);
uint8_t Y_RGB(uint8_t, uint8_t, uint8_t);
uint8_t V_RGB(uint8_t, uint8_t, uint8_t);
uint8_t U_RGB(uint8_t, uint8_t, uint8_t);
#endif