Why Can't I Use 'Type' as the Name of an Enum Embedded in a Struct

Why can't I use 'Type' as the name of an enum embedded in a struct?

Steve was right, it's a keyword. Here's the relevant part of the spec:

Keywords reserved in particular contexts: associativity, convenience,
dynamic, didSet, final, get, infix, inout, lazy, left, mutating, none,
nonmutating, optional, override, postfix, precedence, prefix,
Protocol, required, right, set, Type, unowned, weak, and willSet.
Outside the context in which they appear in the grammar, they can be
used as identifiers.

Apparently, a top-level enum named Type is fine, but one embedded in a struct is not. The language reference section on Types > Metatype Types explains why:

Metatype Type

A metatype type refers to the type of any type, including class types, structure types, enumeration types, and protocol types.

The metatype of a class, structure, or enumeration type is the name of
that type followed by .Type. The metatype of a protocol type—not the
concrete type that conforms to the protocol at runtime—is the name of
that protocol followed by .Protocol. For example, the metatype of the
class type SomeClass is SomeClass.Type and the metatype of the
protocol SomeProtocol is SomeProtocol.Protocol.
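In other words, every type already has an implicit .Type member, so a nested type named Type collides with it. A minimal sketch of the difference (names here are illustrative):

enum Type {}                  // fine: at the top level, Type is just an identifier

struct Connection {
    // enum Type {}           // error: collides with the implicit metatype
    //                        // member Connection.Type
    enum Kind {}              // fine: any other name works
}

let meta: Connection.Type = Connection.self   // .Type names the metatype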

Using an enum inside a struct in C

This is simply not possible. C has no notion of member type scope: even if you write an enum declaration inside a struct, its name is injected into the enclosing scope rather than being nested under the struct's name, so there is no way to refer to it as a member of the struct.
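To illustrate the scope leak (a minimal sketch):

struct holder {
    enum state { IDLE, BUSY } s;   /* legal, but 'state' and its enumerators
                                      land in the enclosing scope */
};

enum state t = IDLE;   /* both names are visible here, at file scope */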

So you simply have to choose different names for them. You can emulate the C++ namespace feature by naming the types in a certain way, which accomplishes basically the same thing. Something like this:

typedef enum {
    ...
} foo_name;

typedef struct {
    foo_name name;
} foo;

typedef enum {
    ...
} bar_name;

typedef struct {
    bar_name name;
} bar;

typedef struct {
    bar_name bar;     /* the two members need distinct names */
    foo_name foo;
} foobar;

Why put an enum in a struct and then use a typedef name?

The perceived problem with plain old enums is that the enumerators become names in the scope where the enum is defined. As a result, you could say, for example,

FooType m_type;
m_type = TypeA;

If you had also defined a class named TypeA, you'd have a conflict. Putting the enum inside a class means you have to use a scope qualifier to get at the name, which removes the conflict. It also means you'd have to write

FooType m_type;
m_type = FooType::TypeA;

because the previous version wouldn't be valid.
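A minimal sketch of that idiom, the "enum in a struct with a typedef name" from the title (names are illustrative):

struct FooType {
    enum Enum { TypeA, TypeB, Type_MAX };
};

typedef FooType::Enum FooTypeEnum;     // the convenience typedef

FooTypeEnum m_type = FooType::TypeA;   // qualifier required; plain TypeA won't compile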

A newer solution to this problem is scoped enums, introduced in C++11:

enum class FooType {
    TypeA,
    TypeB,
    Type_MAX
};

Now you can say

FooType m_type;
m_type = FooType::TypeA;

but not

m_type = TypeA;

One difference here, as @Jarod42 points out, is that an enumerator defined by a plain enum can be implicitly converted to int, while an enumerator from a scoped enum cannot. So with a plain enum defined inside a class,

int i = FooType::TypeA;

is valid, and i gets the value 0. With a scoped enum it is not valid.

In both cases, a cast makes the conversion okay:

int i = static_cast<int>(FooType::TypeA);

The use of enum without a variable name

You can define such an enum in a class, which gives it a scope and helps expose the class's capabilities, e.g.

typedef unsigned char byte;   // 'byte' is not a built-in type; define it (or use std::byte)

class Encryption {
public:
    enum { DEFAULT_KEYLEN = 16, SHORT_KEYLEN = 8, LONG_KEYLEN = 32 };
    // ...
};

byte key[Encryption::DEFAULT_KEYLEN];

Swift enum values are not accessible

The problem is that Type is a contextual Swift keyword: every type already has an implicit .Type metatype member, so your custom nested Type confuses the compiler.

In my tests in a Playground, your code generated the same error. The solution is to change Type to any other name. An example with Kind:

import Foundation

// MyError is inferred from the usage below; it must subclass NSError
// for super.init(domain:code:userInfo:) to apply.
class MyError: NSError {
    public enum Kind: Int {
        case ConnectionError
        case ServerError
    }

    init(type: Kind) {
        super.init(domain: "domain", code: type.rawValue, userInfo: [:])
    }

    required init?(coder: NSCoder) { fatalError("init(coder:) has not been implemented") }
}

Then

MyError.Kind.ConnectionError.rawValue

works as expected.

Is it possible to have a struct where the field is an item from an enum? (Rust)

If I understand your question right, you should implement this with traits, by creating a trait that's implemented on both String and u8.

use std::collections::HashMap;

trait AcceptableDataType {}

impl AcceptableDataType for String {}
impl AcceptableDataType for u8 {}

struct Data<K, V>
where
    K: AcceptableDataType,
    V: AcceptableDataType,
{
    map: HashMap<K, V>,
}

Do note that in Data you can't just "check" which type it is; you only know that it's an AcceptableDataType, which doesn't give you any information.

If you need to get back an enum of both, I'd include a method in the trait to do so, as such:

enum AcceptableDataTypeEnum {
    String(String),
    U8(u8),
}

trait AcceptableDataType {
    fn into_enum(self) -> AcceptableDataTypeEnum;
}

and also implement the function in the impls, like so:
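A sketch of those impls, matching the definitions above:

impl AcceptableDataType for String {
    fn into_enum(self) -> AcceptableDataTypeEnum {
        AcceptableDataTypeEnum::String(self)
    }
}

impl AcceptableDataType for u8 {
    fn into_enum(self) -> AcceptableDataTypeEnum {
        AcceptableDataTypeEnum::U8(self)
    }
}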

If you don't want any downstream users to be able to add new items to the trait, you can also make it sealed.
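The usual sealing trick is a supertrait in a private module (a minimal sketch; it assumes AcceptableDataTypeEnum is declared pub):

mod private {
    pub trait Sealed {}
    impl Sealed for String {}
    impl Sealed for u8 {}
}

pub trait AcceptableDataType: private::Sealed {
    fn into_enum(self) -> AcceptableDataTypeEnum;
}

// Downstream crates can see and use AcceptableDataType, but cannot
// implement it, because they cannot name private::Sealed.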

If you have a complicated set of impls, possibly involving generics that overlap, but you just want to use the trait as a "marker", then you can also use the unstable, nightly-only feature marker_trait_attr.

Note, however, that this prevents you from having methods on the trait, so you can't use the approach above to get back an enum; you'd need specialization for that, which is an incomplete feature.
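A rough sketch of the marker form on nightly (the overlapping blanket impls here are purely illustrative):

#![feature(marker_trait_attr)]

#[marker]
trait AcceptableDataType {}   // a #[marker] trait may not have any items

// Overlapping impls are permitted for marker traits:
impl<T: std::fmt::Display> AcceptableDataType for T {}
impl<T: Clone> AcceptableDataType for T {}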

Specifying size of enum type in C

Is there a way to tell your compiler specifically how wide you want your enum to be?

In the general case, no. Not in standard C.

Would it even be worth doing?

It depends on the context. If you are talking about passing parameters to functions, then no, it is not worth doing (see below). If it is about saving memory when building aggregates from enum types, then it might be worth doing. However, in C you can simply use a suitably-sized integer type instead of enum type in aggregates. In C (as opposed to C++) enum types and integer types are almost always interchangeable.
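For example, a sketch of the integer-typed aggregate member (names are illustrative):

#include <stdint.h>

enum color { COLOR_RED, COLOR_GREEN, COLOR_BLUE };

struct pixel {
    uint8_t color;   /* holds an enum color value in a single byte */
};

void example(void) {
    struct pixel p = { COLOR_GREEN };   /* implicit conversion into uint8_t */
    enum color c = p.color;             /* and back again */
    (void)c;
}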

When the enum value is passed to a function, will it be passed as an int-sized value regardless?

Many (most) compilers these days pass all parameters as values of the natural word size for the given hardware platform. For example, on a 64-bit platform many compilers will pass all parameters as 64-bit values, regardless of their actual size, even if type int is 32 bits wide on that platform (so it is not necessarily passed as an int-sized value). For this reason, it makes no sense to try to optimize enum sizes for parameter-passing purposes.


