31 Commits

Author SHA1 Message Date
Evgeniy A. Dushistov
eeee360fb0 t_json: add data about new dictionary 2022-06-24 21:33:33 +03:00
Evgeniy A. Dushistov
f69973e1fa fix bash syntax error 2022-06-24 21:26:10 +03:00
Evgeniy A. Dushistov
931fc98478 check file size before mapping on linux 2022-06-24 21:25:55 +03:00
Evgeniy A. Dushistov
6f30be7815 clang-format for mapfile 2022-06-24 21:24:03 +03:00
Evgeniy A. Dushistov
1a926d1b69 version 0.5.4 2022-06-24 20:57:57 +03:00
Evgeniy A. Dushistov
e89cfa18b1 Revert "replace deprecated g_pattern_match_string function"
This reverts commit 452a4e07fb.
2022-06-24 20:57:57 +03:00
Evgeniy A. Dushistov
12d9ea5b97 more robust parsing of ifo file
fixes #79 fixes #81
2022-06-24 20:54:30 +03:00
Evgeniy A. Dushistov
920c2bafb9 stardict_lib.hpp: remove unused headers plus clang-format 2022-06-24 20:53:53 +03:00
Evgeniy A. Dushistov
5d2332b0cb use cmake to check if compiler supports c++11 2022-06-24 20:10:43 +03:00
Evgeniy A. Dushistov
452a4e07fb replace deprecated g_pattern_match_string function 2022-06-24 20:06:54 +03:00
Evgeniy A. Dushistov
59ef936288 clang-format for stardict_lib.cpp 2022-06-24 20:03:45 +03:00
Aleksa Sarai
d054adb37c tests: add multiple results integration test
Make sure we return all of the relevant results, even in cases with
lots of results (more than ENTR_PER_PAGE entries in the offset index) and
where both a synonym and a headword are present for the same word.

Signed-off-by: Aleksa Sarai <cyphar@cyphar.com>
2021-11-14 22:38:26 +03:00
Aleksa Sarai
4a9b1dae3d stardict_lib: remove dead poGet{Current,Next,Pre}Word iterators
They aren't used at all by sdcv, and thus aren't tested. This makes
adapting the core lookup algorithms needlessly complicated: these
methods depend on those algorithms, but with no test coverage there is
no real way of knowing whether a change has broken them.

Signed-off-by: Aleksa Sarai <cyphar@cyphar.com>
2021-11-14 22:38:26 +03:00
Aleksa Sarai
6d385221d0 lookup: return all matching entries found during lookup
Previously, we would just return the first entry we found that matched
the requested word. This caused issues with dictionaries that have lots
of entries reachable via the same search string: the user got back a
completely arbitrary word rather than the full set.

While this may seem strange, it is incredibly commonplace in Japanese,
and likely in several other languages as well. In Japanese:

 * When written using kanji, the same string of characters could refer
   to more than one word which may have a completely different meaning.
   Examples include 潜る (くぐる、もぐる) and 辛い (からい、つらい).

 * When written in kana, the same string of characters can also refer to
   more than one word which is written using completely different kanji,
   and has a completely different meaning. Examples include きく
   (聞く、効く、菊) and たつ (立つ、建つ、絶つ).

In both cases, these are different words in every sense of the word, and
have separate headwords for each in the dictionary. Thus in order to be
completely useful for such dictionaries, sdcv needs to be able to return
every matching word in the dictionary.

The solution is conceptually simple -- return a set containing the
indices rather than just a single index. Since every list we search is
sorted (to allow binary searching), once we find one match we can just
walk backwards and forwards from the match point to find the entire
block of matching terms and add them to the set in linear time. A
std::set is used so that we don't return duplicate results needlessly.

This solution was in practice a bit more complicated because .oft cache
files require a bit more fiddling, and the ->lookup methods are also
used by some callers to find the next entry if no entry was found. But
on the whole it's not too drastic a change from the previous setup.

Signed-off-by: Aleksa Sarai <cyphar@cyphar.com>
2021-11-14 22:38:26 +03:00
Evgeniy Dushistov
3d15ce3b07 Merge pull request #77 from cyphar/multi-word-lookups
lookup: do not bail on first failed lookup with a word list
2021-10-17 21:03:14 +03:00
Aleksa Sarai
51338ac5bb lookup: do not bail on first failed lookup with a word list
Due to the lack of deinflection support in StarDict, users might want to
be able to create a list of possible deinflections and search each one
to see if there is a dictionary entry for that deinflection.

Being able to do this in one sdcv invocation is far preferable to
calling sdcv once for each candidate, given the performance cost of
repeated invocations. The most obvious language that would benefit from
this is Japanese, but I'm sure other folks would prefer this too.

To better support this use case, try to look up every word in the
provided list before exiting with an error if any one of the words
failed to be looked up.

Signed-off-by: Aleksa Sarai <cyphar@cyphar.com>
2021-09-29 03:28:44 +10:00
Evgeniy Dushistov
5ada75e08d Merge pull request #73 from 258204/json
Added --json (same as --json-output) to match man
2021-06-21 12:45:09 +03:00
258204
c7d9944f7d Added --json (same as --json-output) to match man 2021-06-19 19:19:31 -06:00
Evgeniy Dushistov
3963e358cd Merge pull request #68 from NiLuJe/glib-getopt
Handle "rest" arguments the glib way
2021-01-27 16:33:36 +03:00
NiLuJe
3b26731b02 Making glib think it's a filename instead of a string prevents the
initial UTF-8 conversion

At least on POSIX.

Windows is another kettle of fish. But then it was probably already
broken there.
2021-01-14 19:26:06 +01:00
NiLuJe
070a9fb0bd Oh, well, dirty hackery it is, then.
The previous approach only works as long as locales are actually sane
(i.e., the test only passes if you *actually* have the ru_RU.KOI8-R
locale built, which the CI doesn't).
2021-01-12 04:37:07 +01:00
NiLuJe
8f096629ec Unbreak tests
glib already runs the argument through g_locale_to_utf8 with
G_OPTION_REMAINING
2021-01-12 04:16:03 +01:00
NiLuJe
25768c6b80 Handle "rest" arguments the glib way
Ensures the "stop parsing" token (--) is handled properly.
2021-01-12 03:35:55 +01:00
Evgeniy Dushistov
4ae4207349 Merge pull request #67 from doozan/master
Use binary search for synonyms, fixes #31
2020-12-23 04:30:13 +03:00
Jeff Doozan
994c1c7ae6 Use mapfile directly instead of buffer 2020-12-21 17:10:37 -05:00
Jeff Doozan
d38f8f13c9 Synonyms: Use MapFile 2020-12-21 08:53:29 -05:00
Jeff Doozan
cc7bcb8b73 Fix crash if dictionary has no synonyms 2020-12-19 18:37:15 -05:00
Jeff Doozan
8e9f72ae57 Synonyms lookup: return correct offset 2020-12-19 18:01:21 -05:00
Jeff Doozan
88af1a077c Use binary search for synonyms, fixes #31 2020-12-19 15:10:39 -05:00
Evgeniy Dushistov
b66799f358 Merge pull request #66 from Dushistov/fix-ci
fix ci: github changed API for path/env
2020-12-10 00:42:34 +03:00
Evgeniy A. Dushistov
be5c3a35bf fix ci: github changed API for path/env 2020-12-10 00:40:14 +03:00
20 changed files with 453 additions and 388 deletions

File: .clang-format

@@ -15,7 +15,7 @@ BreakBeforeBinaryOperators: true
 BreakBeforeTernaryOperators: true
 BreakConstructorInitializersBeforeComma: true
 BinPackParameters: true
-ColumnLimit: 0
+ColumnLimit: 120
 ConstructorInitializerAllOnOneLineOrOnePerLine: false
 DerivePointerAlignment: false
 ExperimentalAutoDetectBinPacking: false

File: GitHub Actions CI workflow

@@ -25,7 +25,7 @@ jobs:
       - uses: actions/checkout@v2
         with:
           submodules: 'recursive'
-      - uses: jwlawson/actions-setup-cmake@v1.0
+      - uses: jwlawson/actions-setup-cmake@v1.4
        with:
          cmake-version: '3.5.1'
          github-api-token: ${{ secrets.GITHUB_TOKEN }}

File: CMakeLists.txt

@@ -3,6 +3,10 @@ project(sdcv)
 cmake_minimum_required(VERSION 3.5 FATAL_ERROR)
 cmake_policy(VERSION 3.5)
+set(CMAKE_CXX_STANDARD 11)
+set(CMAKE_CXX_STANDARD_REQUIRED True)
+set(CMAKE_CXX_EXTENSIONS False)
 include("${CMAKE_CURRENT_SOURCE_DIR}/cmake/compiler.cmake")
 set(ZLIB_FIND_REQUIRED True)
@@ -91,7 +95,7 @@ set(CPACK_PACKAGE_VENDOR "Evgeniy Dushistov <dushistov@mail.ru>")
 set(CPACK_PACKAGE_DESCRIPTION_FILE "${CMAKE_CURRENT_SOURCE_DIR}/README.org")
 set(CPACK_PACKAGE_VERSION_MAJOR "0")
 set(CPACK_PACKAGE_VERSION_MINOR "5")
-set(CPACK_PACKAGE_VERSION_PATCH "3")
+set(CPACK_PACKAGE_VERSION_PATCH "4")
 set(sdcv_VERSION
     "${CPACK_PACKAGE_VERSION_MAJOR}.${CPACK_PACKAGE_VERSION_MINOR}.${CPACK_PACKAGE_VERSION_PATCH}")
@@ -143,5 +147,7 @@ if (BUILD_TESTS)
 add_sdcv_shell_test(t_utf8input)
 add_sdcv_shell_test(t_datadir)
 add_sdcv_shell_test(t_return_code)
+add_sdcv_shell_test(t_multiple_results)
+add_sdcv_shell_test(t_newlines_in_ifo)
 endif (BUILD_TESTS)

File: cmake/compiler.cmake

@@ -16,19 +16,6 @@ if (NOT DEFINED SDCV_COMPILER_IS_GCC_COMPATIBLE)
     endif()
 endif()
-if (MSVC AND (MSVC_VERSION LESS 1900))
-    message(FATAL_ERROR "MSVC version ${MSVC_VERSION} have no full c++11 support")
-elseif (MSVC)
-    add_definitions(-DNOMINMAX)
-elseif (NOT MSVC)
-    check_cxx_compiler_flag("-std=c++11" CXX_SUPPORTS_CXX11)
-    if (CXX_SUPPORTS_CXX11)
-        append("-std=c++11" CMAKE_CXX_FLAGS)
-    else ()
-        message(FATAL_ERROR "sdcv requires C++11 support but the '-std=c++11' flag isn't supported.")
-    endif()
-endif ()
 if (SDCV_COMPILER_IS_GCC_COMPATIBLE)
     append("-Wall" "-Wextra" "-Wformat-security" "-Wcast-align" "-Werror=format" "-Wcast-qual" CMAKE_C_FLAGS)
     append("-Wall" "-pedantic" "-Wextra" "-Wformat-security" "-Wcast-align" "-Werror=format" "-Wcast-qual" CMAKE_CXX_FLAGS)

File: src/stardict_lib.cpp

@@ -199,14 +199,18 @@ static std::string parse_data(const gchar *data, bool colorize_output)
 void Library::SimpleLookup(const std::string &str, TSearchResultList &res_list)
 {
-    glong ind;
+    std::set<glong> wordIdxs;
     res_list.reserve(ndicts());
-    for (gint idict = 0; idict < ndicts(); ++idict)
-        if (SimpleLookupWord(str.c_str(), ind, idict))
-            res_list.push_back(
-                TSearchResult(dict_name(idict),
-                              poGetWord(ind, idict),
-                              parse_data(poGetWordData(ind, idict), colorize_output_)));
+    for (gint idict = 0; idict < ndicts(); ++idict) {
+        wordIdxs.clear();
+        if (SimpleLookupWord(str.c_str(), wordIdxs, idict))
+            for (auto &wordIdx : wordIdxs)
+                res_list.push_back(
+                    TSearchResult(dict_name(idict),
+                                  poGetWord(wordIdx, idict),
+                                  parse_data(poGetWordData(wordIdx, idict),
+                                             colorize_output_)));
+    }
 }
 void Library::LookupWithFuzzy(const std::string &str, TSearchResultList &res_list)

File: src/mapfile.hpp

@@ -7,6 +7,7 @@
 #ifdef HAVE_MMAP
 #include <fcntl.h>
 #include <sys/mman.h>
+#include <sys/stat.h>
 #include <sys/types.h>
 #endif
 #ifdef _WIN32
@@ -40,20 +41,25 @@ inline bool MapFile::open(const char *file_name, unsigned long file_size)
     size = file_size;
 #ifdef HAVE_MMAP
     if ((mmap_fd = ::open(file_name, O_RDONLY)) < 0) {
-        //g_print("Open file %s failed!\n",fullfilename);
+        // g_print("Open file %s failed!\n",fullfilename);
         return false;
     }
+    struct stat st;
+    if (fstat(mmap_fd, &st) == -1 || st.st_size < 0 || (st.st_size == 0 && S_ISREG(st.st_mode))
+        || sizeof(st.st_size) > sizeof(file_size) || static_cast<unsigned long>(st.st_size) != file_size) {
+        close(mmap_fd);
+        return false;
+    }
     data = (gchar *)mmap(nullptr, file_size, PROT_READ, MAP_SHARED, mmap_fd, 0);
     if ((void *)data == (void *)(-1)) {
-        //g_print("mmap file %s failed!\n",idxfilename);
+        // g_print("mmap file %s failed!\n",idxfilename);
         data = nullptr;
         return false;
     }
 #elif defined(_WIN32)
-    hFile = CreateFile(file_name, GENERIC_READ, 0, nullptr, OPEN_ALWAYS,
-                       FILE_ATTRIBUTE_NORMAL, 0);
-    hFileMap = CreateFileMapping(hFile, nullptr, PAGE_READONLY, 0,
-                                 file_size, nullptr);
+    hFile = CreateFile(file_name, GENERIC_READ, 0, nullptr, OPEN_ALWAYS, FILE_ATTRIBUTE_NORMAL, 0);
+    hFileMap = CreateFileMapping(hFile, nullptr, PAGE_READONLY, 0, file_size, nullptr);
     data = (gchar *)MapViewOfFile(hFileMap, FILE_MAP_READ, 0, 0, file_size);
 #else
     gsize read_len;

File: src/sdcv.cpp

@@ -83,6 +83,7 @@ try {
     glib::CharStr opt_data_dir;
     gboolean only_data_dir = FALSE;
     gboolean colorize = FALSE;
+    glib::StrArr word_list;
     const GOptionEntry entries[] = {
         { "version", 'v', 0, G_OPTION_ARG_NONE, &show_version,
@@ -96,6 +97,8 @@ try {
           _("for use in scripts"), nullptr },
         { "json-output", 'j', 0, G_OPTION_ARG_NONE, &json_output,
           _("print the result formatted as JSON"), nullptr },
+        { "json", 'j', 0, G_OPTION_ARG_NONE, &json_output,
+          _("print the result formatted as JSON"), nullptr },
         { "exact-search", 'e', 0, G_OPTION_ARG_NONE, &no_fuzzy,
           _("do not fuzzy-search for similar words, only return exact matches"), nullptr },
         { "utf8-output", '0', 0, G_OPTION_ARG_NONE, &utf8_output,
@@ -109,11 +112,13 @@ try {
           _("only use the dictionaries in data-dir, do not search in user and system directories"), nullptr },
         { "color", 'c', 0, G_OPTION_ARG_NONE, &colorize,
           _("colorize the output"), nullptr },
+        { G_OPTION_REMAINING, 0, 0, G_OPTION_ARG_FILENAME_ARRAY, get_addr(word_list),
+          _("search terms"), _(" words") },
         {},
     };
     glib::Error error;
-    GOptionContext *context = g_option_context_new(_(" words"));
+    GOptionContext *context = g_option_context_new(nullptr);
     g_option_context_set_help_enabled(context, TRUE);
     g_option_context_add_main_entries(context, entries, nullptr);
     const gboolean parse_res = g_option_context_parse(context, &argc, &argv, get_addr(error));
@@ -210,14 +215,19 @@ try {
     lib.load(dicts_dir_list, order_list, disable_list);
     std::unique_ptr<IReadLine> io(create_readline_object());
-    if (optind < argc) {
+    if (word_list != nullptr) {
         search_result rval = SEARCH_SUCCESS;
-        for (int i = optind; i < argc; ++i)
-            if ((rval = lib.process_phrase(argv[i], *io, non_interactive)) != SEARCH_SUCCESS) {
-                return rval;
-            }
+        gchar **p = get_impl(word_list);
+        while (*p) {
+            search_result this_rval = lib.process_phrase(*p++, *io, non_interactive);
+            // If we encounter any error, save it but continue through the word
+            // list to check all requested words.
+            if (rval == SEARCH_SUCCESS)
+                rval = this_rval;
+        }
+        if (rval != SEARCH_SUCCESS)
+            return rval;
     } else if (!non_interactive) {
         std::string phrase;
         while (io->read(_("Enter word or phrase: "), phrase)) {
             if (lib.process_phrase(phrase.c_str(), *io) == SEARCH_FAILURE)

File diff suppressed because it is too large.

File: src/stardict_lib.hpp

@@ -1,11 +1,10 @@
 #pragma once
 #include <cstdio>
 #include <cstring>
 #include <functional>
 #include <list>
 #include <map>
 #include <memory>
+#include <set>
 #include <string>
 #include <vector>
@@ -29,7 +28,7 @@ inline void set_uint32(gchar *addr, guint32 val)
 struct cacheItem {
     guint32 offset;
     gchar *data;
-    //write code here to make it inline
+    // write code here to make it inline
     cacheItem() { data = nullptr; }
     ~cacheItem() { g_free(data); }
 };
@@ -67,7 +66,7 @@ private:
     gint cache_cur = 0;
 };
-//this structure contain all information about dictionary
+// this structure contain all information about dictionary
 struct DictInfo {
     std::string ifo_file_name;
     guint32 wordcount;
@@ -96,17 +95,27 @@ public:
     virtual const gchar *get_key(glong idx) = 0;
     virtual void get_data(glong idx) = 0;
     virtual const gchar *get_key_and_data(glong idx) = 0;
-    virtual bool lookup(const char *str, glong &idx) = 0;
+    virtual bool lookup(const char *str, std::set<glong> &idxs, glong &next_idx) = 0;
+    virtual bool lookup(const char *str, std::set<glong> &idxs)
+    {
+        glong unused_next_idx;
+        return lookup(str, idxs, unused_next_idx);
+    };
 };
 class SynFile
 {
 public:
     SynFile() {}
     ~SynFile() {}
     bool load(const std::string &url, gulong wc);
-    bool lookup(const char *str, glong &idx);
+    bool lookup(const char *str, std::set<glong> &idxs, glong &next_idx);
+    bool lookup(const char *str, std::set<glong> &idxs);
+    const gchar *get_key(glong idx) { return synlist[idx]; }
 private:
-    std::map<std::string, gulong> synonyms;
+    MapFile synfile;
+    std::vector<gchar *> synlist;
 };
 class Dict : public DictBase
@@ -133,7 +142,12 @@ public:
         *offset = idx_file->wordentry_offset;
         *size = idx_file->wordentry_size;
     }
-    bool Lookup(const char *str, glong &idx);
+    bool Lookup(const char *str, std::set<glong> &idxs, glong &next_idx);
+    bool Lookup(const char *str, std::set<glong> &idxs)
+    {
+        glong unused_next_idx;
+        return Lookup(str, idxs, unused_next_idx);
+    }
     bool LookupWithRule(GPatternSpec *pspec, glong *aIndex, int iBuffLen);
@@ -155,7 +169,7 @@ public:
     Libs(std::function<void(void)> f = std::function<void(void)>())
     {
         progress_func = f;
-        iMaxFuzzyDistance = MAX_FUZZY_DISTANCE; //need to read from cfg.
+        iMaxFuzzyDistance = MAX_FUZZY_DISTANCE; // need to read from cfg.
     }
     void setVerbose(bool verbose) { verbose_ = verbose; }
     void setFuzzy(bool fuzzy) { fuzzy_ = fuzzy; }
@@ -181,15 +195,12 @@ public:
             return nullptr;
         return oLib[iLib]->get_data(iIndex);
     }
-    const gchar *poGetCurrentWord(glong *iCurrent);
-    const gchar *poGetNextWord(const gchar *word, glong *iCurrent);
-    const gchar *poGetPreWord(glong *iCurrent);
-    bool LookupWord(const gchar *sWord, glong &iWordIndex, int iLib)
+    bool LookupWord(const gchar *sWord, std::set<glong> &iWordIndices, int iLib)
     {
-        return oLib[iLib]->Lookup(sWord, iWordIndex);
+        return oLib[iLib]->Lookup(sWord, iWordIndices);
     }
-    bool LookupSimilarWord(const gchar *sWord, glong &iWordIndex, int iLib);
-    bool SimpleLookupWord(const gchar *sWord, glong &iWordIndex, int iLib);
+    bool LookupSimilarWord(const gchar *sWord, std::set<glong> &iWordIndices, int iLib);
+    bool SimpleLookupWord(const gchar *sWord, std::set<glong> &iWordIndices, int iLib);
     bool LookupWithFuzzy(const gchar *sWord, gchar *reslist[], gint reslist_size);
     gint LookupWithRule(const gchar *sWord, gchar *reslist[]);

File: new test dictionary .ifo (Russian-English Dictionary (ru-en))

@@ -0,0 +1,9 @@
StarDict's dict ifo file
version=3.0.0
bookname=Russian-English Dictionary (ru-en)
wordcount=415144
idxfilesize=12344255
sametypesequence=h
synwordcount=1277580
author=Vuizur
description=

Binary file not shown.

Binary file not shown.

File: new test dictionary .ifo (Test multiple results)

@@ -0,0 +1,7 @@
StarDict's dict ifo file
version=3.0.0
bookname=Test multiple results
wordcount=246
idxfilesize=5977
synwordcount=124
description=

Binary file not shown.

File: tests/t_json

@@ -18,8 +18,15 @@ test_json() {
     fi
 }
-test_json '[{"name": "Test synonyms", "wordcount": "2"},{"name": "Sample 1 test dictionary", "wordcount": "1"},{"name": "test_dict", "wordcount": "1"}]' -x -j -l -n --data-dir "$TEST_DIR"
+test_json '[{"name": "Russian-English Dictionary (ru-en)", "wordcount": "415144"},
+{"name": "Test synonyms", "wordcount": "2"},
+{"name": "Test multiple results", "wordcount": "246"},
+{"name": "Sample 1 test dictionary", "wordcount": "1"},
+{"name": "test_dict", "wordcount": "1"}]' -x -j -l -n --data-dir "$TEST_DIR"
 test_json '[{"dict": "Test synonyms","word":"test","definition":"\u000aresult of test"}]' -x -j -n --data-dir "$TEST_DIR" foo
 test_json '[]' -x -j -n --data-dir "$TEST_DIR" foobarbaaz
+# Test multiple searches, with the first failing.
+test_json '[][{"dict": "Test synonyms","word":"test","definition":"\u000aresult of test"}]' -x -j -n --data-dir "$TEST_DIR" foobarbaaz foo
 exit 0

File: tests/t_multiple_results (new executable file, 67 lines)

@@ -0,0 +1,67 @@
#!/bin/sh
set -e
SDCV="$1"
TEST_DIR="$2"
unset SDCV_PAGER
unset STARDICT_DATA_DIR
test_json() {
    word="$1"
    jq_cmp="$2"
    result="$("$SDCV" --data-dir "$TEST_DIR" -exjn "$word" | sed 's|\\n|\\u000a|g')"
    cmp_result="$(echo "$result" | jq "$jq_cmp")"
    if [ "$cmp_result" != "true" ]; then
        echo "expected '$jq_cmp' to return true, but $result didn't"
        exit 1
    fi
}
# Basic two-result search for the same headword.
test_json bark \
'. == [
{"dict":"Test multiple results","word":"bark","definition":"\u000aThe harsh sound made by a dog."},
{"dict":"Test multiple results","word":"bark","definition":"\u000aThe tough outer covering of trees and other woody plants."}
]'
# Multi-result search where one word exists as both a synyonym and a separate
# headword. This ensures that if there is a matching synyonym we don't skip the
# regular search.
test_json cat \
'. == [
{"dict":"Test multiple results","word":"cat","definition":"\u000aA cute animal which (rarely) barks."},
{"dict":"Test multiple results","word":"lion","definition":"\u000aA larger cat which might bite your head off."},
{"dict":"Test multiple results","word":"panther","definition":"\u000aI know very little about panthers, sorry."}
]'
# Many-result search for a word that matches 120 distinct headwords.
test_json many_headwords 'length == 120'
test_json many_headwords 'all(.word == "many_headwords")'
test_json many_headwords \
'to_entries | map(.value.definition == "\u000aDefinition for [many_headwords] entry #\(.key+1) (same headword).") | all'
# Many-result search for 120 words that have the same synonym.
test_json many_synonyms 'length == 120'
test_json many_synonyms \
'to_entries | map(.value.word == "many_synonyms-\(.key+101)") | all'
test_json many_synonyms \
'to_entries | map(.value.definition == "\u000aDefinition for [many_synonyms-\(.key+101)] (same synonym).") | all'
# Ensure that we don't return more than one result even if a word can be
# resolved in more than one way.
#
# Most well-formed dictionaries don't have entries like this (it basically
# requires you to have a dictionary where there is a synonym that is identical
# to a word's headword or multiple identical synyonym entries).
#
# This entry was created by creating extra synonyms with different names then
# modifying the .syn file manually.
test_json many_resolution_paths \
'. == [
{"dict":"Test multiple results","word":"many_resolution_paths",
"definition":"\u000aDefinition for [many_resolution_paths] headword (same word, multiple synonym entries)."}
]'
exit 0

File: tests/t_newlines_in_ifo (new executable file, 18 lines)

@@ -0,0 +1,18 @@
#!/bin/sh
set -e
PATH_TO_SDCV="$1"
TEST_DIR="$2"
unset SDCV_PAGER
unset STARDICT_DATA_DIR
RES=$("$PATH_TO_SDCV" -n -x --data-dir="$TEST_DIR/not-unix-newlines-ifo" -l | tail -n 1)
if [ "$RES" = "Russian-English Dictionary (ru-en) 415144" ]; then
    exit 0
else
    echo "test failed, unexpected result: $RES" >&2
    exit 1
fi